path | concatenated_notebook
---|---|
p1/fingeruebungen.ipynb | ###Markdown
Programming for AI, Winter Semester 2021/22, Prof. Dr. Heiner Giefers / Prof. Dr. Doga Arinir. Exercise 1: Write Python code that computes the perimeter and the area of a rectangle with a height of 5 cm and a width of 8 cm. Create variables for the height and the width. *Expected output:* `Umfang des Rechtecks = 26 Zentimeter` `Fläche des Rechtecks = 40 Quadratzentimeter`
###Code
# Solution for Exercise 1
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 2: Write Python code that computes the circumference and the area of a circle with a given radius. *Expected output:* `Umfang des Kreises = 25.13272 Zentimeter` `Fläche des Kreises = 50.26544 Quadratzentimeter`
###Code
# Solution for Exercise 2
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 3: Modify Exercise 1 or 2 so that the side lengths of the rectangle, or the radius of the circle, are not stored as fixed values in the program but are requested from the user. Use the `input` function for this. *Expected output (e.g.):* `Gib den Radius an: 7` `Umfang des Kreises = 43.98226 Zentimeter` `Fläche des Kreises = 153.93791 Quadratzentimeter`
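A minimal sketch of how the user input could be read; note that `input` returns a string, so it has to be converted to a number first (using `float` here is only one possible choice, and the prompt text is taken from the expected output above):

```python
radius = float(input("Gib den Radius an: "))  # input() returns a string, so convert it to a number
```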
###Code
# Solution for Exercise 3
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 4: Write Python code that asks the user for a base and an exponent and then computes and prints the power. *Expected output:* `Gib die Basis an: 7` `Gib denn Exponenten an: 2.1` `7.0 hoch 2.1 ist 59.52588815791429`
###Code
# Solution for Exercise 4
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 5: Write Python code that asks for two whole numbers and prints how many numbers lie between the two entered numbers. The program should also work if the larger number is entered first. *Expected output:* `Gib eine Zahl ein: 17` `Gib eine weitere Zahl ein: 5` `Zwischen 17 und 5 liegen 11 Werte`
###Code
# Solution for Exercise 5
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 6: Write Python code that computes the net salary from the gross salary based on a progression table. The tax rates are split into 6 tariff zones. No tax is due on annual incomes below 10,000 euros, incomes between 10,000 and 14,000 euros are taxed at 14%, and so on. As the expected output shows, the zone's rate is applied to the whole income: 17,000 euros falls into zone 2, so 17000 * (1 - 0.22) = 13260.0. The program should ask for the gross salary and print the net salary.

| Tariff zone | Income range | Tax rate |
| --------- | ----------------- | ---------- |
| Zone 0 | 0 to 10,000 euros | 0% |
| Zone 1 | up to 14,000 euros | 14% |
| Zone 2 | up to 31,000 euros | 22% |
| Zone 3 | up to 56,000 euros | 29% |
| Zone 4 | up to 83,000 euros | 32% |
| Zone 5 | above 83,000 euros | 36% |

*Expected output:* `Bruttolohn: 17000` `Der Nettolohn betraegt 13260.0 Euro`
###Code
# Solution for Exercise 6
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 7: Write Python code that asks for a number and for a possible divisor of that number. The program should determine whether the second number really is a divisor of the first. (Hint: a number is divisible by another if the remainder of the integer division is 0. You can compute this remainder with the modulo operation.) *Expected output:* `Gib eine Zahl ein: 30` `Gib einen möglichen Teiler an: 7` `7 ist kein Teiler von 30`
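A minimal sketch of the divisibility check described in the hint (the variable names and fixed values are only illustrative, the exercise reads them with `input`):

```python
zahl = 30
teiler = 7
rest = zahl % teiler  # remainder of the integer division
print(rest == 0)      # True would mean teiler divides zahl; for 30 and 7 this prints False
```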
###Code
# Solution for Exercise 7
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 8: Write Python code that prints the numbers from 1 to 50. Hint: this exercise needs a program loop, which is only covered in chapter 5 of the textbook, so we are jumping ahead a little here. For our purposes a `while` loop works well. The loop is written like this:
```python
while <condition>:
    <loop body>
```
`<condition>` and `<loop body>` are of course just placeholders for *real* Python code. Simply try to find the right solution. Remember that you have to *count along* how many numbers you have already printed; you definitely need a variable for that. *Expected output:* `1 2 3 4 5 ...` *Variant 2: How can you achieve the following output? The help of the `print` function shows one possibility.* `1 2 3 4 5 ...`
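A minimal sketch of the counting pattern described above (one possible approach, not the only one; the variable name is only illustrative):

```python
zahl = 1             # counter variable
while zahl <= 50:    # loop condition: stop after 50
    print(zahl)
    zahl = zahl + 1  # count up, otherwise the loop would never end
```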
###Code
# Solution for Exercise 8
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 9: Write Python code that asks the user to enter a number. As long as the entered number is not 0, the program should print the sum of all numbers entered so far and then ask for another number. *Expected output:* `Gib eine Zahl ein: 3` `Die Summe ist: 3` `Gib eine Zahl ein: 5` `Die Summe ist: 8` `Gib eine Zahl ein: 2` `Die Summe ist: 10` `Gib eine Zahl ein: 0` `Die Summe ist: 10`
###Code
# Solution for Exercise 9
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 10: Write Python code that asks the user for a number and then prints all divisors of that number. (Hint: if `n` is the entered number, simply try all numbers from 1 to `n`. If the current number is a divisor of `n`, print it; if not, move on to the next number.) *Expected output:* `Gib eine Zahl ein: 64` `1 ist ein Teiler von 64` `2 ist ein Teiler von 64` `4 ist ein Teiler von 64` `8 ist ein Teiler von 64` `16 ist ein Teiler von 64` `32 ist ein Teiler von 64` `64 ist ein Teiler von 64`
###Code
# Solution for Exercise 10
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 11: Write Python code that prints the multiplication table up to 10 x 10 (the "Kleines Einmaleins"). The program should compute the values and should in principle also work for operands larger than 10. Hint: to format the table more nicely, you can use a format string when printing the values. To print the value of `zahl` with 4 digits, use the format string `f"{zahl:4d}"`. *Expected output:*

       1   2   3   4   5   6   7   8   9  10
       2   4   6   8  10  12  14  16  18  20
       3   6   9  12  15  18  21  24  27  30
       4   8  12  16  20  24  28  32  36  40
       5  10  15  20  25  30  35  40  45  50
       6  12  18  24  30  36  42  48  54  60
       7  14  21  28  35  42  49  56  63  70
       8  16  24  32  40  48  56  64  72  80
       9  18  27  36  45  54  63  72  81  90
      10  20  30  40  50  60  70  80  90 100
###Code
# Solution for Exercise 11
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 12: Write Python code that computes a sequence of values in which each element is the sum of the two preceding values. The starting point of the sequence are the values `1` and `1`. The third element of the sequence is therefore `2` (`1+1`), the fourth `3` (`1+2`), the fifth `5` (`2+3`), the sixth `8` (`3+5`), and so on. Before the computation, ask up to which element the sequence should be computed. *Expected output:* `Bis zu welcher Stelle soll die Folge berechnet werden? 10` `Element 1 = 1` `Element 2 = 1` `Element 3 = 2` `Element 4 = 3` `Element 5 = 5` `Element 6 = 8` `Element 7 = 13` `Element 8 = 21` `Element 9 = 34` `Element 10 = 55`
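A minimal sketch of the update rule described above, with the last element fixed at 10 instead of being read from the user (one possible approach; the variable names are only illustrative):

```python
n = 10        # in the exercise, this value is read with input()
a, b = 1, 1   # the first two elements of the sequence
stelle = 1
while stelle <= n:
    print(f"Element {stelle} = {a}")
    a, b = b, a + b      # each new element is the sum of the two preceding ones
    stelle = stelle + 1
```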
###Code
# Solution for Exercise 12
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____ |
Python/CARTOONING-AN-IMAGE-USING-OPENCV-master/CARTOONING AN IMAGE USING OPENCV.ipynb | ###Markdown
Import Libraries
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os
%matplotlib inline
os.chdir(r'E:\Project-CARTOON')  # raw string so the backslash is not treated as an escape character
###Output
_____no_output_____
###Markdown
Reading Image
###Code
img=cv2.imread("BIRD.png")
type(img)
img.shape
plt.imshow(img)
###Output
_____no_output_____
###Markdown
Getting Edges
###Code
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # convert the image to grayscale
gray = cv2.medianBlur(gray, 5)  # median blur to reduce noise before thresholding
edges = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)  # adaptive threshold to extract the edge mask
###Output
_____no_output_____
###Markdown
Cartoonization
###Code
color = cv2.bilateralFilter(img, 9, 250, 250)  # bilateral filter smooths colors while preserving edges
cartoon = cv2.bitwise_and(color, color, mask=edges)  # combine the smoothed colors with the edge mask to get the cartoon effect
###Output
_____no_output_____
###Markdown
Showing Output
###Code
cv2.imshow("Image", img)
cv2.imshow("edges", edges)
cv2.imshow("Cartoon", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
###Output
_____no_output_____ |
MLP_fashionMNIST.ipynb | ###Markdown
Classifying fashion items with a neural network. We will classify fashion items using the Fashion MNIST dataset and the artificial neural network introduced earlier.
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torchvision import transforms, datasets
USE_CUDA = torch.cuda.is_available()
DEVICE = torch.device("cuda" if USE_CUDA else "cpu")
EPOCHS = 30
BATCH_SIZE = 64
###Output
_____no_output_____
###Markdown
Loading the dataset
###Code
transform = transforms.Compose([
transforms.ToTensor()
])
trainset = datasets.FashionMNIST(
root = './.data/',
train = True,
download = True,
transform = transform
)
testset = datasets.FashionMNIST(
root = './.data/',
train = False,
download = True,
transform = transform
)
train_loader = torch.utils.data.DataLoader(
dataset = trainset,
batch_size = BATCH_SIZE,
shuffle = True,
)
test_loader = torch.utils.data.DataLoader(
dataset = testset,
batch_size = BATCH_SIZE,
shuffle = True,
)
###Output
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ./.data/FashionMNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Training on Fashion MNIST with a neural network. The input `x` has the shape `[batch size, color channels, height, width]`. If you call `x.size()`, you will see `[64, 1, 28, 28]`. In Fashion MNIST the images are 28 x 28 pixels and grayscale, i.e. a single channel, so the total number of input features for `x` is 28 x 28 x 1 = 784. The model we will use is an artificial neural network with 3 layers.
###Code
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 10)
def forward(self, x):
x = x.view(-1, 784)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
###Output
_____no_output_____
###Markdown
Preparing the model. The `to()` function sends the model's parameters to the specified device. It is generally not needed when you only use a single CPU, but if you want to use a GPU you have to send them there by specifying `to("cuda")`. If you don't, they stay on the CPU and you cannot benefit from faster training. As the optimization algorithm we will use `optim.SGD`, which is built into PyTorch.
###Code
model = Net().to(DEVICE)
optimizer = optim.SGD(model.parameters(), lr=0.01)
###Output
###Markdown
Training
###Code
def train(model, train_loader, optimizer):
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
# send the training data to the memory of DEVICE
data, target = data.to(DEVICE), target.to(DEVICE)
optimizer.zero_grad()
output = model(data)
loss = F.cross_entropy(output, target)
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Testing
###Code
def evaluate(model, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to(DEVICE), target.to(DEVICE)
output = model(data)
# sum up all the losses
test_loss += F.cross_entropy(output, target,
reduction='sum').item()
# the class with the largest value is the model's prediction
# compare the prediction with the label and add 1 to correct when they match
pred = output.max(1, keepdim=True)[1]
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
test_accuracy = 100. * correct / len(test_loader.dataset)
return test_loss, test_accuracy
###Output
_____no_output_____
###Markdown
Running the code. Alright, everything is now ready. Let's run the code and check that training actually happens!
###Code
for epoch in range(1, EPOCHS + 1):
train(model, train_loader, optimizer)
test_loss, test_accuracy = evaluate(model, test_loader)
print('[{}] Test Loss: {:.4f}, Accuracy: {:.2f}%'.format(
epoch, test_loss, test_accuracy))
###Output
_____no_output_____ |
Material, Exercicios/Codigos/iadell7_K_means_video_aula.ipynb | ###Markdown
Reading the data
###Code
df = pd.read_csv('Mall_Customers.csv')
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Checking for null values
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Statistical information
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Plotting annual income versus customer score
###Code
plt.scatter(df['Annual Income (k$)'], df['Spending Score (1-100)'], marker='.')
plt.xlabel('Renda Anual [k$]')
plt.ylabel('Score (1-100)')
plt.show()
###Output
_____no_output_____
###Markdown
Selecting the data for clustering
###Code
X = df[['Annual Income (k$)', 'Spending Score (1-100)']]
X.head()
###Output
_____no_output_____
###Markdown
Importing K-means
###Code
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Clustering with k = 5
###Code
modelo_kmeans = KMeans(n_clusters= 5, init='k-means++')
y_kmeans= modelo_kmeans.fit_predict(X)
print(y_kmeans)
###Output
[2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2 3 2
3 2 3 2 3 2 1 2 3 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 0 4 0 1 0 4 0 4 0 1 0 4 0 4 0 4 0 4 0 1 0 4 0 4 0
4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4 0 4
0 4 0 4 0 4 0 4 0 4 0 4 0 4 0]
###Markdown
Visualizing the first cluster created
###Code
print(X[y_kmeans == 0])
###Output
Annual Income (k$) Spending Score (1-100)
123 69 91
125 70 77
127 71 95
129 71 75
131 71 75
133 72 71
135 73 88
137 73 73
139 74 72
141 75 93
143 76 87
145 77 97
147 77 74
149 78 90
151 78 88
153 78 76
155 78 89
157 78 78
159 78 73
161 79 83
163 81 93
165 85 75
167 86 95
169 87 63
171 87 75
173 87 92
175 88 86
177 88 69
179 93 90
181 97 86
183 98 88
185 99 97
187 101 68
189 103 85
191 103 69
193 113 91
195 120 79
197 126 74
199 137 83
###Markdown
Visualizing the clusters
###Code
k_grupos = 5
cores = ['r', 'b', 'k', 'y', 'g']
for k in range(k_grupos):
cluster = X[y_kmeans == k]
plt.scatter(cluster['Annual Income (k$)'], cluster['Spending Score (1-100)'],
s = 100, c = cores[k], label = f'Cluster {k}')
plt.title('Grupos de clientes')
plt.xlabel('Renda Anual (k$)')
plt.ylabel('Score (1-100)')
plt.grid()
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/ROI/02_Nearshore/01b_HYSWAN_SIMULATION/01_WAVES_MDA.ipynb | ###Markdown
... ***CURRENTLY UNDER DEVELOPMENT*** ... Selection of representative cases of multivariate wave conditions to simulate with SWAN: Maximum Dissimilarity Algorithm (MDA)

inputs required:
* Historical waves
* Emulator output - wave conditions

in this notebook:
* Split sea and swell components
* MDA selection of representative number of events

Workflow:
###Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import numpy as np
import pandas as pd
import xarray as xr
import matplotlib.pyplot as plt
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..', '..'))
# teslakit
from teslakit.database import Database, hyswan_db
from teslakit.climate_emulator import Climate_Emulator
from teslakit.mda import MaxDiss_Simplified_NoThreshold, nearest_indexes
from teslakit.plotting.mda import Plot_MDA_Data
###Output
_____no_output_____
###Markdown
Database and Site parameters
###Code
# --------------------------------------
# Teslakit database
p_data = r'/Users/nico/Projects/TESLA-kit/TeslaKit/data'
db = Database(p_data)
# set site
db.SetSite('ROI')
# hyswan simulation database
db_sim = hyswan_db(db.paths.site.HYSWAN.sim)
# Climate Emulator DWTs-WVS simulations
CE = Climate_Emulator(db.paths.site.EXTREMES.climate_emulator)
WVS_sim = CE.LoadSim_All()
# --------------------------------------
# Set MDA parameters
# variables to use
vns = ['tp', 'dir']
# subset size, scalar and directional indexes
n_subset = 125 # subset size
ix_scalar = [0] # tp
ix_directional = [1] # dir
###Output
_____no_output_____
###Markdown
Prepare Sea and Swells data
###Code
def split_sea_swells(WVS):
'''
splits WVS dataframe data into sea waves & swell waves dataframes
requires WVS to contain variables with these names:
'sea_Hs', 'sea_Tp', 'sea_Dir'
'swell_1_Hs', 'swell_1_Tp', 'swell_1_Dir'
'swell_2_Hs', 'swell_2_Tp', 'swell_2_Dir'
...
'''
# store n_sim if found in WVS dataset
vns_extra = []
if 'n_sim' in list(WVS.columns):
vns_extra.append('n_sim')
# Prepare SEA waves
vns_sea = ['sea_Hs', 'sea_Tp', 'sea_Dir'] + vns_extra
wvs_sea = WVS[vns_sea]
wvs_sea.dropna(inplace=True) # clean nans
wvs_sea.rename(columns={"sea_Hs":"hs", "sea_Tp":"tp", "sea_Dir": "dir"}, inplace=True) # rename columns
wvs_sea = wvs_sea[wvs_sea["dir"]<=360] # filter data
# Prepare SWELL_1 waves
vns_swell_1 = ['swell_1_Hs', 'swell_1_Tp', 'swell_1_Dir'] + vns_extra
wvs_swell_1 = WVS[vns_swell_1]
wvs_swell_1.dropna(inplace=True)
wvs_swell_1.rename(columns={"swell_1_Hs":"hs", "swell_1_Tp":"tp", "swell_1_Dir": "dir"}, inplace=True)
wvs_swell_1 = wvs_swell_1[wvs_swell_1["dir"]<=360]
# Prepare SWELL_2 waves
vns_swell_2 = ['swell_2_Hs', 'swell_2_Tp', 'swell_2_Dir'] + vns_extra
wvs_swell_2 = WVS[vns_swell_2]
wvs_swell_2.dropna(inplace=True)
wvs_swell_2.rename(columns={"swell_2_Hs":"hs", "swell_2_Tp":"tp", "swell_2_Dir": "dir"}, inplace=True)
wvs_swell_2 = wvs_swell_2[wvs_swell_2["dir"]<=360]
# join swell data
wvs_swell = pd.concat([wvs_swell_1, wvs_swell_2], ignore_index=True)
return wvs_sea, wvs_swell
# --------------------------------------
# split simulated waves data by family
wvs_sea_sim, wvs_swl_sim = split_sea_swells(WVS_sim)
db_sim.Save('sea_dataset', wvs_sea_sim)
db_sim.Save('swl_dataset', wvs_swl_sim)
###Output
_____no_output_____
###Markdown
MaxDiss Classification
###Code
# --------------------------------------
# Sea
data = wvs_sea_sim[vns].values[:]
# MDA algorithm
sel = MaxDiss_Simplified_NoThreshold(data, n_subset, ix_scalar, ix_directional)
wvs_sea_sim_subset = pd.DataFrame(data=sel, columns=vns)
# add nearest hs to sea subset
ix_n = nearest_indexes(wvs_sea_sim_subset[vns].values[:], data, ix_scalar, ix_directional)
wvs_sea_sim_subset['hs'] = wvs_sea_sim['hs'].iloc[ix_n].values[:]
wvs_sea_sim_subset['n_sim'] = wvs_sea_sim['n_sim'].iloc[ix_n].values[:]
# plot results
Plot_MDA_Data(wvs_sea_sim, wvs_sea_sim_subset);
# Store MDA sea subset
db_sim.Save('sea_subset', wvs_sea_sim_subset)
# --------------------------------------
# Swells
data = wvs_swl_sim[vns].values[:]
# MDA algorithm
sel = MaxDiss_Simplified_NoThreshold(data, n_subset, ix_scalar, ix_directional)
wvs_swl_sim_subset = pd.DataFrame(data=sel, columns=vns)
# add nearest hs to swells subset
ix_n = nearest_indexes(wvs_swl_sim_subset[vns].values[:], data, ix_scalar, ix_directional)
wvs_swl_sim_subset['hs'] = wvs_swl_sim['hs'].iloc[ix_n].values[:]
wvs_swl_sim_subset['n_sim'] = wvs_swl_sim['n_sim'].iloc[ix_n].values[:]
# plot results
Plot_MDA_Data(wvs_swl_sim, wvs_swl_sim_subset);
# Store MDA swell subset
db_sim.Save('swl_subset', wvs_swl_sim_subset)
###Output
MaxDiss waves parameters: 2143588 --> 125
MDA centroids: 125/125
|
examples/batch_mode.ipynb | ###Markdown
Imports
###Code
import os
import sys
import pandas as pd
import matplotlib.pyplot as plt
try:
root = os.path.dirname(os.path.abspath(__file__))
except:
root = os.getcwd()
sys.path.append(os.path.dirname(root))
# Import dispatcher
from SynAS.SynAS import dispatcher
###Output
_____no_output_____
###Markdown
Instantiate
###Code
seed = 20 # set seed
length = 60*60 # 1 hour
dispatch = dispatcher(length=length, seed=seed)
###Output
_____no_output_____
###Markdown
Get Dispatch (native interval)
###Code
res = dispatch.get_sequence()
# res.index = [pd.to_datetime('2020-01-01')+pd.DateOffset(seconds=ix) for ix in res.index]
# res = res.resample('1S').ffill()
ax = res.plot(legend=False, figsize=(12,3))
ax.set_ylabel('Regulation Dispatch [kW]')
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Get Dispatch (scaled interval)
###Code
res = dispatch.get_sequence(timestamp='hour')
# res.index = [pd.to_datetime('2020-01-01')+pd.DateOffset(hours=ix) for ix in res.index]
# res = res.resample('1S').ffill()
ax = res.plot(legend=False, figsize=(12,3))
ax.set_ylabel('Regulation Dispatch [kW]')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
ddos2019_experiments.ipynb | ###Markdown
Load original datasets
###Code
#Open CICIDS2019 train
columns = ['Flow Duration','Protocol','Total Length of Fwd Packets','Total Length of Bwd Packets', \
'Fwd Packet Length Mean','Bwd Packet Length Mean','Total Fwd Packets','Total Backward Packets', \
'Fwd IAT Mean','Bwd IAT Mean','Fwd IAT Std','Label', 'Timestamp', 'Flow Packets/s']
ngramcolumns = ['Flow Duration', 'Timestamp','Source IP','Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min', 'Label', 'Timestamp']
dtypes = {'Flow Duration':np.int32,'Protocol': np.int8,'Total Length of Fwd Packets':np.int32,'Total Length of Bwd Packets':np.int32, \
'Fwd Packet Length Mean':np.float32,'Bwd Packet Length Mean':np.float32,'Total Fwd Packets':np.int32, \
'Total Backward Packets':np.int16,'Fwd IAT Mean':np.float32,'Bwd IAT Mean':np.float32, 'Fwd IAT Std': np.float32, \
'Fwd Packet Length Max': np.int16, 'Fwd Packet Length Min': np.int16, 'Bwd Packet Length Max': np.int32, \
'Bwd Packet Length Min': np.int16, 'Flow IAT Mean': np.float32, 'Flow IAT Std': np.float32, \
'Flow IAT Max': np.int32, 'Flow IAT Min': np.int32, 'Label':object}
dirpath_train = "/mnt/h/CICIDS/DDoS2019/train/"
#Remove below attacks from train set as test set does not have them
excludelabels = ['DrDoS_NTP', 'DrDoS_DNS', 'DrDoS_SNMP', 'DrDoS_SSDP', 'TFTP']
filepaths_train = [dirpath_train+f for f in os.listdir(dirpath_train) \
if (f.endswith('.csv') and not any(label in f for label in excludelabels))]
print("Importing training data: starting with " + filepaths_train[0])
df_train = pd.read_csv(filepaths_train[0] ,sep=',',header=0, usecols=columns, dtype= dtypes, skipinitialspace=True)
for filename in filepaths_train[1:]:
print("Concatenating: " + filename)
df_train = pd.concat([df_train,pd.read_csv(filename, sep=',',header=0, usecols=columns, dtype= dtypes, skipinitialspace=True)],ignore_index=True)
df_train.sort_values(by='Timestamp', inplace=True)
#Open CICIDS2019 test
dirpath_test = "/mnt/h/CICIDS/DDoS2019/test/"
ngramcolumns = ['Flow Duration', 'Timestamp','Source IP','Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min', 'Label']
dtypes = {'Flow Duration':np.int32,'Protocol': np.int8,'Total Length of Fwd Packets':np.int32,'Total Length of Bwd Packets':np.int32,'Fwd Packet Length Mean':np.float32,'Bwd Packet Length Mean':np.float32,'Total Fwd Packets':np.int32,'Total Backward Packets':np.int16,'Fwd IAT Mean':np.float32,'Bwd IAT Mean':np.float32, 'Fwd IAT Std': np.float32,'Label':object}
#Remove below attack from test set as train set does not have it
excludelabel = 'Portmap'
filepaths_test = [dirpath_test+f for f in os.listdir(dirpath_test) if (f.endswith('.csv') and not excludelabel in f)]
print("Importing testing data: starting with " + filepaths_test[0])
df_test = pd.read_csv(filepaths_test[0] ,sep=',',header=0, usecols=columns, dtype= dtypes, skipinitialspace=True)
for filename in filepaths_test[1:]:
print("Concatenating: " + filename)
df_test = pd.concat([df_test,pd.read_csv(filename, sep=',',header=0, usecols=columns, dtype= dtypes, skipinitialspace=True)],ignore_index=True)
df_test.sort_values(by='Timestamp', inplace=True)
#Drop rows with infinity values of packet/s feature
with pd.option_context('mode.use_inf_as_na', True):
df_train.dropna(subset=['Flow Packets/s'], how='all', inplace=True)
df_test.dropna(subset=['Flow Packets/s'], how='all', inplace=True)
#Drop WebDDoS attack label as test set does not have it but its also mixed in other files
df_train.drop(index=df_train[df_train['Label'] == 'WebDDoS'].index, inplace=True)
###Output
_____no_output_____
###Markdown
Add binary label and fix test label
###Code
df_train['BinLabel'] = np.where(df_train['Label'] == 'BENIGN', 'Benign','Malicious')
print(df_train['BinLabel'].value_counts())
print(df_train['Label'].value_counts())
df_test['BinLabel'] = np.where(df_test['Label'] == 'BENIGN', 'Benign','Malicious')
print(df_test['BinLabel'].value_counts())
print(df_test['Label'].value_counts())
###Output
Malicious 8570126
Benign 32455
Name: BinLabel, dtype: int64
DrDoS_MSSQL 2458902
DrDoS_NetBIOS 2252286
DrDoS_UDP 1798872
DrDoS_LDAP 1241093
Syn 642981
UDP-lag 175992
BENIGN 32455
Name: Label, dtype: int64
###Markdown
Data composition (before removing NaNs) Training dataBenign: 11,579 instances (0.07%)Malicious: 15,879,535 instances (99.93%) Testing dataBenign: 52,231 instances (0.26%)Malicious: 20,120,600 instances (99.74%) One hot encoding
###Code
# One hot encoding for protocol
ohe_df = pd.get_dummies(df_train['Protocol'], prefix="proto")
df_train = df_train.join(ohe_df)
ohe_df = pd.get_dummies(df_test['Protocol'], prefix="proto")
df_test = df_test.join(ohe_df)
###Output
_____no_output_____
###Markdown
Combine datasets and select features
###Code
#Get input columns and corresponding label vector
#Use duration, protocol, src bytes&packets per flow, dst bytes&packets per flow, mean src/dst bytes per flow
#features = df_train.drop(['id','proto','service','state','attack_cat','label'],axis=1)
features = ['Flow Duration','proto_0','proto_6','proto_17','Total Length of Fwd Packets','Total Length of Bwd Packets','Fwd Packet Length Mean','Bwd Packet Length Mean','Total Fwd Packets','Total Backward Packets','Fwd IAT Mean','Bwd IAT Mean']
#Switch train and test set (swap their roles) for better classification scores due to sample size during training
label = 'BinLabel'
y_train = df_test[label]
y_test = df_train[label]
df_temp = df_train.copy()
df_train = df_test[features].copy()
df_test = df_temp[features]
df_temp = np.nan
###Output
_____no_output_____
###Markdown
Random Forest implementation
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
cfs = []
preds = []
for i in range(5):
rf_clf = RandomForestClassifier(n_estimators=100,min_samples_split=10,min_samples_leaf=5,max_samples=0.8,criterion='gini',n_jobs=5,verbose=10)
rf_clf.fit(df_train,y_train)
y_pred = rf_clf.predict(df_test)
cfs.append(confusion_matrix(y_test, y_pred))
preds.append([y_test, y_pred])
print(cfs)
print(np.shape(cfs))
cf = np.mean(cfs,axis=(0))
print(cf)
print(np.std(cfs,axis=(0)))
objectToFile(preds, "ddos2019_preds_reduced_normal"+label)
###Output
[array([[ 11352, 105],
[ 1663578, 13641278]]), array([[ 11417, 40],
[ 1636411, 13668445]]), array([[ 11400, 57],
[ 1645465, 13659391]]), array([[ 11375, 82],
[ 1645483, 13659373]]), array([[ 11379, 78],
[ 1645465, 13659391]])]
(5, 2, 2)
[[1.13846000e+04 7.24000000e+01]
[1.64728040e+06 1.36575756e+07]]
[[ 22.24050359 22.24050359]
[8872.17699553 8872.17699553]]
###Markdown
Or preload results from previous run
###Code
#Load object from file
from sklearn.metrics import confusion_matrix
label='BinLabel'
preds_mem = objectFromFile("ddos2019_preds_reduced_normal"+label)
cfs = []
for pred_tuple in preds_mem:
cfs.append(confusion_matrix(pred_tuple[0], pred_tuple[1]))
###Output
_____no_output_____
###Markdown
Visualize results
###Code
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
paper1_acc = 0.99
paper1_rec = 0.99
paper1_spec = 0.99
paper2_acc = 0.9993
paper2_rec = 0.999
paper2_spec = 0.999
#tn, fp, fn, tp = np.mean(cfs,axis=0).ravel()
#print(tn,fp,fn,tp)
#acc_scores = [accuracy_score(pred_tuple[0], pred_tuple[1]) for pred_tuple in preds]
#rec_score = tp / (tp+fn)
#spec_score = tn / (tn+fp)
acc_scores = [0.962]
rec_score = 0.962
spec_score = 0.972
print(np.mean(acc_scores), "\n")
print(rec_score, "\n")
print(spec_score)
import matplotlib.patches as mpatches
#Colors
clr_acc = 'royalblue'
clr_rec = 'salmon'
clr_spec = 'lightgreen'
acc_patch = mpatches.Patch(color=clr_acc, label='accuracy')
rec_patch = mpatches.Patch(color=clr_rec, label='recall')
spec_patch = mpatches.Patch(color=clr_spec, label='specificity')
labels = ['Elsayed et al.\nRNN (77 features)', 'Lucky et al. \nDT (3 features)', 'Our work\nRF (30 features)']
x = np.arange(len(labels))*10
width = 2.5 # the width of the bars
pad_width = 3
scores = [paper1_acc,paper1_rec,paper1_spec,paper2_acc,paper2_rec,paper2_spec,np.mean(acc_scores),rec_score,spec_score]
fig, ax = plt.subplots(figsize=(7,6))
#Spawn bar(s) of group 1
plt.bar(x[0]-pad_width, height=scores[0], width=width, color=clr_acc)
plt.bar(x[0], height=scores[1], width=width, color=clr_rec)
plt.bar(x[0]+pad_width, height=scores[2], width=width, color=clr_spec)
#Spawn bar(s) of group 2
plt.bar(x[1]-pad_width, height=scores[3], width=width, color=clr_acc)
plt.bar(x[1], height=scores[4], width=width, color=clr_rec)
plt.bar(x[1]+pad_width, height=scores[5], width=width, color=clr_spec)
#Spawn bar(s) of group 3
plt.bar(x[2]-pad_width, height=scores[6], width=width, color=clr_acc)
plt.bar(x[2], height=scores[7], width=width, color=clr_rec)
plt.bar(x[2]+pad_width, height=scores[8], width=width, color=clr_spec)
#Hide the left, right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(False)
plt.tick_params(left = False)
#Set plot details
plt.rc('font', size=13)
#plt.ylabel('Metric score')
plt.yticks()
ax.set_yticklabels([])
#ax.get_yaxis().set_visible(False)
plt.xticks(size='14')
plt.ylim([0.8, 1])
plt.title("CIC-DDoS2019 results comparison", fontweight='bold', pad=25)
ax.set_xticks(x)
ax.set_xticklabels(labels)
add_value_labels(ax)
#ax.legend(handles=[acc_patch,rec_patch,spec_patch],bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
ax.set_axisbelow(True)
plt.grid(axis='y', color='grey')
fig.tight_layout()
plt.savefig('ddos2019_binaryclass_reduced_bars.png',bbox_inches='tight')
plt.show()
np.set_printoptions(suppress=True)
print('mean\n', np.mean(cfs,axis=0))
print('std. dev\n', np.std(cfs,axis=0))
print('std. dev %\n', np.divide(np.std(cfs,axis=0),np.mean(cfs,axis=0))*100)
#Plot confusion matrix
import seaborn as sns
#labels = ['Benign','Malicious']
#Standard heatmap
cf_norm = cf/cf.sum(axis=1)[:,None]
cf_percentages = ["{0:.2%}".format(value) for value in cf_norm.flatten()]
cf_numbers = [abbrv_num(value) for value in cf.flatten()]
cf_labels = ['{v1}\n({v2})'.format(v1=v1, v2=v2) for v1,v2 in zip(cf_percentages,cf_numbers)]
cf_labels = np.asarray(cf_labels).reshape(cf.shape)
fig, ax = plt.subplots(figsize=(6, 6))
plt.rc('font', size=14)
#column_labels = sorted(y_test.unique())
#column_labels[6] = 'Benign'
column_labels = ['Benign', 'Malicious']
sns.heatmap(cf_norm, annot=cf_labels, fmt='',cmap='Blues',cbar=False, vmin=0.0, vmax=1.0, ax=ax, xticklabels=column_labels, yticklabels=column_labels)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.yticks(rotation='0', size='12')
plt.xticks(rotation='65', size='12')
plt.title("CICIDS-DDoS2019 mean multiclass classification matrix")
plt.savefig('ddos2019_binaryclass_cf_reduced.png',bbox_inches='tight')
plt.show()
importance = rf_clf.feature_importances_
print(features)
print(importance)
# summarize feature importance
for i,v in sorted(enumerate(importance),key=lambda x: x[1], reverse=True):
print('Feature: %s, Score: %.5f' % (features[i],v))
###Output
['Flow Duration', 'proto_0', 'proto_6', 'proto_17', 'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Mean', 'Bwd Packet Length Mean', 'Total Fwd Packets', 'Total Backward Packets', 'Fwd IAT Mean', 'Bwd IAT Mean']
[0.05142496 0.00310571 0.01284563 0.01087059 0.16057637 0.17294885
0.1707864 0.25728134 0.08805984 0.03080252 0.02494325 0.01635454]
7
Feature: Bwd Packet Length Mean, Score: 0.25728
5
Feature: Total Length of Bwd Packets, Score: 0.17295
6
Feature: Fwd Packet Length Mean, Score: 0.17079
4
Feature: Total Length of Fwd Packets, Score: 0.16058
8
Feature: Total Fwd Packets, Score: 0.08806
0
Feature: Flow Duration, Score: 0.05142
9
Feature: Total Backward Packets, Score: 0.03080
10
Feature: Fwd IAT Mean, Score: 0.02494
11
Feature: Bwd IAT Mean, Score: 0.01635
2
Feature: proto_6, Score: 0.01285
3
Feature: proto_17, Score: 0.01087
1
Feature: proto_0, Score: 0.00311
###Markdown
[array([[ 57521, 3410], [ 600486, 50126860]]), array([[ 57612, 3319], [ 600329, 50127017]]), array([[ 57567, 3364], [ 600482, 50126864]]), array([[ 57518, 3413], [ 600527, 50126819]]), array([[ 57455, 3476], [ 600539, 50126807]])]

Mean: (5, 2, 2)
[[5.75346000e+04 3.39640000e+03] [6.00472600e+05 5.01268734e+07]]

Std. Dev.:
[[52.60646348 52.60646348] [75.17606002 75.17606002]]

N-grams experiment
###Code
#Retain 60% of each class in df_train for lower memory usage
#Retain only 20% of the largest class
percentage = 60
for label in df_train['Label'].unique():
label_df = df_train.loc[df_train['Label'] == label]
if label is 'TFTP':
cutoff = round(len(label_df)/100*20)
else:
cutoff = round(len(label_df)/100*percentage)
indices = label_df.iloc[cutoff:].index
print(label)
print(len(indices))
df_train.drop(index=indices, inplace=True)
#Clear vars to save memory
label_df = None
indices = None
###Output
DrDoS_NTP
481057
BENIGN
22745
DrDoS_DNS
2028404
DrDoS_LDAP
871972
DrDoS_MSSQL
1808997
DrDoS_NetBIOS
1637312
DrDoS_SNMP
2063948
DrDoS_SSDP
1044244
DrDoS_UDP
1253858
UDP-lag
146584
WebDDoS
176
Syn
632916
TFTP
8033032
###Markdown
Show amount of source IPs with more than 'threshold' flows in train dataset
###Code
#Train dataset value counts per IP
threshold = 2
vc_tr = df_train['Source IP'].value_counts()
res_tr = df_train[df_train['Source IP'].isin(vc_tr[vc_tr>threshold].index)]['Source IP'].value_counts()
print(res_tr)
###Output
172.16.0.5 29994117
192.168.50.1 9855
192.168.50.7 9660
192.168.50.6 9281
192.168.50.8 8111
...
52.43.17.8 3
107.178.246.49 3
209.170.115.32 3
34.203.79.136 3
35.164.138.68 3
Name: Source IP, Length: 296, dtype: int64
###Markdown
Transform df_train
###Code
df_train.columns
df_train.drop(['Timestamp','Label'], axis=1, inplace=True)
#Per source IP, grab N-gram and transform numerical features into new
#Done for bigrams and trigrams
features = ['Flow Duration', 'Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min']
#Create/reset columns for n_gram features
for feature in features:
column_mean = 'ngram_' + feature + '_mean'
column_std = 'ngram_' + feature + '_std'
if column_mean not in df_train.columns:
df_train[column_mean] = np.nan
if column_std not in df_train.columns:
df_train[column_std] = np.nan
#List of ngram features
featurelist = df_train.filter(regex='^ngram', axis=1).columns
#Window size 2 = bigrams, 3 = trigrams
winsize = 3
#Window type
indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=winsize)
for itr, feature in enumerate(features): #Iterate over all features to be transformed
column_mean = 'ngram_' + feature + '_mean'
column_std = 'ngram_' + feature + '_std'
for (srcIP, _) in res_tr.iteritems(): #Iterate over all Source IP starting with most-occurring
sub_df = df_train[df_train['Source IP'] == srcIP]
sub_df.loc[:,column_mean] = sub_df[feature].rolling(window=indexer, min_periods=winsize).mean()
sub_df.loc[:,column_std] = sub_df[feature].rolling(window=indexer, min_periods=winsize).std()
df_train.loc[:,[column_mean, column_std]] = df_train[[column_mean, column_std]].combine_first(sub_df[[column_mean, column_std]])
df_train.drop(columns=feature)
print('Progress: ' + str(itr+1) + '/' + str(len(features)), end='\r')
df_train_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_train_feather"
df_train.reset_index().to_feather(df_train_feather_path)
###Output
_____no_output_____
###Markdown
Load df_train from memory and reduce memory footprint
###Code
#Load df_train from memory and reduce memory footprint
df_train_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_train_feather"
df_train = pd.read_feather(df_train_feather_path)
df_train
#Drop cols without/with ngram features
ngramcols = df_train.filter(regex='^ngram', axis=1).columns
normalcols = ['Flow Duration', 'Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min']
df_train.dropna(subset=ngramcols, axis=0, how='any', inplace=True)
print(df_train.shape)
df_train.set_index('index', inplace=True)
#Throw away original/ngram features + socket information to save memory
socket_columns = ['Source IP', 'Timestamp']
df_train.drop(columns=ngramcols, inplace=True)
df_train.drop(columns=socket_columns, inplace=True)
print(df_train.columns)
df_train
df_train_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_train_feather_reduced"
df_train.reset_index().to_feather(df_train_feather_path)
###Output
_____no_output_____
###Markdown
Show amount of source IPs with more than 'threshold' flows in test dataset
###Code
#Test dataset value counts per IP
threshold = 2
vc_te = df_test['Source IP'].value_counts()
res_te = df_test[df_test['Source IP'].isin(vc_te[vc_te>threshold].index)]['Source IP'].value_counts()
print(res_te)
###Output
_____no_output_____
###Markdown
Transform df_test
###Code
#Per source IP, grab N-gram and transform numerical features into new
#Done for bigrams and trigrams
features = ['Flow Duration', 'Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min']
#Create/reset columns for n_gram features
for feature in features:
column_mean = 'ngram_' + feature + '_mean'
column_std = 'ngram_' + feature + '_std'
if column_mean not in df_test.columns:
df_test[column_mean] = np.nan
if column_std not in df_test.columns:
df_test[column_std] = np.nan
#List of ngram features
featurelist = df_test.filter(regex='^ngram', axis=1).columns
#Window size 2 = bigrams, 3 = trigrams
winsize = 3
#Window type
indexer = pd.api.indexers.FixedForwardWindowIndexer(window_size=winsize)
for itr, (srcIP, _) in enumerate(res_te.iteritems()):
sub_df = df_test[df_test['Source IP'] == srcIP]
for feature in features:
column_mean = 'ngram_' + feature + '_mean'
column_std = 'ngram_' + feature + '_std'
sub_df.loc[:,column_mean] = sub_df[feature].rolling(window=indexer, min_periods=winsize).mean()
sub_df.loc[:,column_std] = sub_df[feature].rolling(window=indexer, min_periods=winsize).std()
df_test.loc[:,featurelist] = df_test[featurelist].combine_first(sub_df[featurelist])
print('Progress: ' + str(itr+1) + '/' + str(len(res_te)), end='\r')
#Drop rows without ngram features
df_test.dropna(subset=df_test.filter(regex='^ngram', axis=1).columns, axis=0, how='any', inplace=True)
print(df_test.shape)
print(df_test.filter(regex='^ngram', axis=1).columns)
df_test_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_test_feather"
df_test.reset_index().to_feather(df_test_feather_path)
###Output
_____no_output_____
###Markdown
Load df_test from memory and reduce memory footprint
###Code
#Load df_test from memory and reduce memory footprint
df_test_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_test_feather"
df_test = pd.read_feather(df_test_feather_path)
#Drop cols without/with ngram features
ngramcols = df_test.filter(regex='^ngram', axis=1).columns
normalcols = ['Flow Duration', 'Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min']
df_test.dropna(subset=ngramcols, axis=0, how='any', inplace=True)
print(df_test.shape)
df_test.set_index('index', inplace=True)
#Throw away original/ngram features + socket information to save memory
socket_columns = ['Source IP', 'Timestamp']
df_test.drop(columns=ngramcols, inplace=True)
df_test.drop(columns=socket_columns, inplace=True)
df_test_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_test_feather_reduced"
df_test.reset_index().to_feather(df_test_feather_path)
###Output
_____no_output_____
###Markdown
Load train and test datasets and train Random Forest classifier
###Code
#Load df_train from memory and reduce memory footprint
df_train_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_train_feather_reduced"
df_test_feather_path = "/mnt/h/CICIDS/DDoS2019/feather/trigram_test_feather_reduced"
df_train = pd.read_feather(df_train_feather_path)
df_test = pd.read_feather(df_test_feather_path)
#Fix index, throw away multiclass label
df_train.set_index('index', inplace=True)
df_test.set_index('index', inplace=True)
#df_train.drop(columns=['Label'], inplace=True)
#df_test.drop(columns=['Label'], inplace=True)
df_train
df_test
# If dataframe classes should intersect in train and test set, apply below
excludelabels = ['DrDoS_NTP', 'DrDoS_DNS', 'DrDoS_SNMP', 'DrDoS_SSDP', 'TFTP', 'Portmap', 'WebDDoS']
print(df_train['Label'].unique())
print(df_test['Label'].unique())
df_train.drop(df_train[df_train['Label'].isin(excludelabels)].index, inplace=True)
df_test.drop(df_test[df_test['Label'].isin(excludelabels)].index, inplace=True)
print(df_train['Label'].unique())
print(df_test['Label'].unique())
#Switch train and test set according to general feature set results
df_temp = df_train.copy()
df_train = df_test.copy()
df_test = df_temp.copy()
df_temp = np.nan
# Compare ngram feature set to alternative feature set
#features = df_train.filter(regex='^ngram', axis=1).columns
features = ['Flow Duration', 'Total Fwd Packets', 'Total Backward Packets', \
'Total Length of Fwd Packets', 'Total Length of Bwd Packets', 'Fwd Packet Length Max', \
'Fwd Packet Length Min', 'Bwd Packet Length Max', 'Bwd Packet Length Min', \
'Flow Bytes/s', 'Flow Packets/s', 'Flow IAT Mean', 'Flow IAT Std', \
'Flow IAT Max', 'Flow IAT Min']
label = 'BinLabel'
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix
cfs = []
preds = []
for i in range(5):
rf_clf = RandomForestClassifier(n_estimators=100,min_samples_split=10,min_samples_leaf=5,max_samples=0.8,criterion='gini',n_jobs=5,verbose=10)
rf_clf.fit(df_train[features],df_train[label])
y_pred = rf_clf.predict(df_test[features])
cfs.append(confusion_matrix(df_test[label], y_pred))
preds.append([df_test[label], y_pred])
###Output
[Parallel(n_jobs=5)]: Using backend ThreadingBackend with 5 concurrent workers.
###Markdown
Ngrams results
###Code
np.set_printoptions(suppress=True)
print(cfs)
print(np.shape(cfs))
cf = np.mean(cfs,axis=(0))
print(cf)
print(np.std(cfs,axis=(0)))
print('std. dev %\n', np.divide(np.std(cfs,axis=0),np.mean(cfs,axis=0))*100)
objectToFile(preds, "ddos2019_ngrams_preds_reduced_"+label)
###Output
[array([[ 32339, 116],
[1078201, 7491925]]), array([[ 32359, 96],
[1109228, 7460898]]), array([[ 32348, 107],
[1105337, 7464789]]), array([[ 32351, 104],
[1110460, 7459666]]), array([[ 32357, 98],
[1070700, 7499426]])]
(5, 2, 2)
[[ 32350.8 104.2]
[1094785.2 7475340.8]]
[[ 7.11055553 7.11055553]
[16856.84940195 16856.84940195]]
std. dev %
[[0.02197954 6.82394965]
[1.53974034 0.22549941]]
###Markdown
Read in ngrams preds if needed
###Code
from sklearn.metrics import classification_report, confusion_matrix
label = 'BinLabel'
preds_mem = objectFromFile("ddos2019_ngrams_preds_reduced_"+label)
cfs = []
for pred_tuple in preds_mem:
cfs.append(confusion_matrix(pred_tuple[0], pred_tuple[1]))
np.set_printoptions(suppress=True)
cf = np.mean(cfs,axis=(0))
#Plot confusion matrix
import seaborn as sns
#labels = ['Benign','Malicious']
#Standard heatmap
cf_norm = cf/cf.sum(axis=1)[:,None]
cf_percentages = ["{0:.2%}".format(value) for value in cf_norm.flatten()]
cf_numbers = [abbrv_num(value) for value in cf.flatten()]
cf_labels = ['{v1}\n({v2})'.format(v1=v1, v2=v2) for v1,v2 in zip(cf_percentages,cf_numbers)]
cf_labels = np.asarray(cf_labels).reshape(cf.shape)
fig, ax = plt.subplots(figsize=(6, 6))
plt.rc('font', size=14)
#column_labels = sorted(df_test[label].unique())
#column_labels[6] = 'Benign'
column_labels = ['Benign', 'Malicious']
sns.heatmap(cf_norm, annot=cf_labels, fmt='',cmap='Blues',cbar=False, vmin=0.0, vmax=1.0, ax=ax, xticklabels=column_labels, yticklabels=column_labels)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.yticks(rotation='0', size='12')
plt.xticks(rotation='65', size='12')
plt.title("CICIDS-DDoS2019 mean binary classification matrix - Trigrams")
#plt.savefig('ddos2019_binaryclass_cf_trigrams_reduced.png',bbox_inches='tight')
plt.show()
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
tn, fp, fn, tp = np.mean(cfs,axis=0).ravel()
print(tn,fp,fn,tp)
acc_scores = [accuracy_score(pred_tuple[0], pred_tuple[1]) for pred_tuple in preds_mem]
rec_score = tp / (tp+fn)
spec_score = tn / (tn+fp)
print('Accuracy: ' + str(np.mean(acc_scores)), "\n")
print('Recall: ' + str(rec_score), "\n")
print('Specificity: ' + str(spec_score))
importance = rf_clf.feature_importances_
# summarize feature importance
for i,v in sorted(enumerate(importance),key=lambda x: x[1], reverse=True):
print('Feature: %s, Score: %.5f' % (features[i],v))
###Output
Feature: ngram_Bwd Packet Length Max_mean, Score: 0.17347
Feature: ngram_Total Backward Packets_std, Score: 0.11545
Feature: ngram_Bwd Packet Length Max_std, Score: 0.09821
Feature: ngram_Total Length of Bwd Packets_mean, Score: 0.08741
Feature: ngram_Fwd Packet Length Min_mean, Score: 0.07771
Feature: ngram_Bwd Packet Length Min_mean, Score: 0.06124
Feature: ngram_Total Length of Bwd Packets_std, Score: 0.05852
Feature: ngram_Total Length of Fwd Packets_mean, Score: 0.05624
Feature: ngram_Bwd Packet Length Min_std, Score: 0.04448
Feature: ngram_Total Fwd Packets_mean, Score: 0.03906
Feature: ngram_Fwd Packet Length Max_mean, Score: 0.02764
Feature: ngram_Flow Bytes/s_mean, Score: 0.02334
Feature: ngram_Total Backward Packets_mean, Score: 0.01281
Feature: ngram_Fwd Packet Length Min_std, Score: 0.01237
Feature: ngram_Flow Duration_std, Score: 0.01128
Feature: ngram_Total Length of Fwd Packets_std, Score: 0.01051
Feature: ngram_Fwd Packet Length Max_std, Score: 0.01011
Feature: ngram_Flow IAT Min_mean, Score: 0.00995
Feature: ngram_Flow Bytes/s_std, Score: 0.00932
Feature: ngram_Flow Packets/s_mean, Score: 0.00869
Feature: ngram_Flow IAT Max_std, Score: 0.00758
Feature: ngram_Flow Duration_mean, Score: 0.00672
Feature: ngram_Flow IAT Min_std, Score: 0.00621
Feature: ngram_Flow IAT Std_mean, Score: 0.00618
Feature: ngram_Flow Packets/s_std, Score: 0.00576
Feature: ngram_Total Fwd Packets_std, Score: 0.00538
Feature: ngram_Flow IAT Max_mean, Score: 0.00420
Feature: ngram_Flow IAT Std_std, Score: 0.00370
Feature: ngram_Flow IAT Mean_std, Score: 0.00335
Feature: ngram_Flow IAT Mean_mean, Score: 0.00310
###Markdown
Alternative features results
###Code
np.set_printoptions(suppress=True)
print(cfs)
print(np.shape(cfs))
cf = np.mean(cfs,axis=(0))
print(cf)
print(np.std(cfs,axis=(0)))
print('std. dev %\n', np.divide(np.std(cfs,axis=0),np.mean(cfs,axis=0))*100)
objectToFile(preds, "ddos2019_alternative_preds_reduced_"+label)
#Plot confusion matrix
import seaborn as sns
#labels = ['Benign','Malicious']
#Standard heatmap
cf_norm = cf/cf.sum(axis=1)[:,None]
cf_percentages = ["{0:.2%}".format(value) for value in cf_norm.flatten()]
cf_numbers = [abbrv_num(value) for value in cf.flatten()]
cf_labels = ['{v1}\n({v2})'.format(v1=v1, v2=v2) for v1,v2 in zip(cf_percentages,cf_numbers)]
cf_labels = np.asarray(cf_labels).reshape(cf.shape)
fig, ax = plt.subplots(figsize=(6, 6))
plt.rc('font', size=14)
column_labels = sorted(df_test[label].unique())
#column_labels[6] = 'Benign'
column_labels = ['Benign', 'Malicious']
sns.heatmap(cf_norm, annot=cf_labels, fmt='',cmap='Blues',cbar=False, vmin=0.0, vmax=1.0, ax=ax, xticklabels=column_labels, yticklabels=column_labels)
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.yticks(rotation='0', size='12')
plt.xticks(rotation='65', size='12')
plt.title("CICIDS-DDoS2019 mean binary classification matrix - Alternative")
plt.savefig('ddos2019_binaryclass_cf_alternative_reduced.png',bbox_inches='tight')
plt.show()
from sklearn.metrics import accuracy_score
from sklearn.metrics import recall_score
tn, fp, fn, tp = np.mean(cfs,axis=0).ravel()
print(tn,fp,fn,tp)
acc_scores = [accuracy_score(pred_tuple[0], pred_tuple[1]) for pred_tuple in preds]
rec_score = tp / (tp+fn)
spec_score = tn / (tn+fp)
print('Accuracy: ' + str(np.mean(acc_scores)), "\n")
print('Recall: ' + str(rec_score), "\n")
print('Specificity: ' + str(spec_score))
importance = rf_clf.feature_importances_
# summarize feature importance
for i,v in sorted(enumerate(importance),key=lambda x: x[1], reverse=True):
print('Feature: %s, Score: %.5f' % (features[i],v))
###Output
Feature: Bwd Packet Length Max, Score: 0.20926
Feature: Fwd Packet Length Min, Score: 0.15905
Feature: Total Length of Bwd Packets, Score: 0.15752
Feature: Total Length of Fwd Packets, Score: 0.11179
Feature: Bwd Packet Length Min, Score: 0.07860
Feature: Total Fwd Packets, Score: 0.07568
Feature: Fwd Packet Length Max, Score: 0.05490
Feature: Flow Bytes/s, Score: 0.03904
Feature: Flow Duration, Score: 0.02438
Feature: Total Backward Packets, Score: 0.02056
Feature: Flow IAT Max, Score: 0.01773
Feature: Flow IAT Std, Score: 0.01425
Feature: Flow IAT Mean, Score: 0.01309
Feature: Flow Packets/s, Score: 0.01269
Feature: Flow IAT Min, Score: 0.01146
###Markdown
Barplot of own feature sets
###Code
import matplotlib.patches as mpatches
#Scores
genset_acc = 0.892
genset_rec = 0.892
genset_spec = 0.994
trigramset_acc = 0.873
trigramset_rec = 0.872
trigramset_spec = 0.997
altset_acc = 0.962
altset_rec = 0.962
altset_spec = 0.972
#Colors
clr_acc = 'royalblue'
clr_rec = 'salmon'
clr_spec = 'lightgreen'
acc_patch = mpatches.Patch(color=clr_acc, label='accuracy')
rec_patch = mpatches.Patch(color=clr_rec, label='recall')
spec_patch = mpatches.Patch(color=clr_spec, label='specificity')
labels = ['General\n (12 features)', 'Alternative\n (15 features)', \
'Trigram\n (30 features)']
x = np.arange(len(labels))*10
width = 2.5 # the width of the bars
pad_width = 3
scores = [genset_acc,genset_rec,genset_spec,trigramset_acc,trigramset_rec,trigramset_spec,altset_acc,altset_rec,altset_spec]
fig, ax = plt.subplots(figsize=(7,6))
#Spawn bar(s) of group 1
plt.bar(x[0]-pad_width, height=scores[0], width=width, color=clr_acc)
plt.bar(x[0], height=scores[1], width=width, color=clr_rec)
plt.bar(x[0]+pad_width, height=scores[2], width=width, color=clr_spec)
#Spawn bar(s) of group 2
plt.bar(x[1]-pad_width, height=scores[3], width=width, color=clr_acc)
plt.bar(x[1], height=scores[4], width=width, color=clr_rec)
plt.bar(x[1]+pad_width, height=scores[5], width=width, color=clr_spec)
#Spawn bar(s) of group 3
plt.bar(x[2]-pad_width, height=scores[6], width=width, color=clr_acc)
plt.bar(x[2], height=scores[7], width=width, color=clr_rec)
plt.bar(x[2]+pad_width, height=scores[8], width=width, color=clr_spec)
#Hide the left, right and top spines
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.spines['left'].set_visible(True)
plt.tick_params(left = False)
#Set plot details
plt.rc('font', size=13)
plt.ylabel('Metric score')
plt.yticks()
#ax.set_yticklabels([])
plt.ylim([0.8, 1])
#ax.get_yaxis().set_visible(False)
plt.xticks(size='14')
plt.title("CIC-DDoS2019 feature sets comparison", fontweight='bold', pad=25)
ax.set_xticks(x)
ax.set_xticklabels(labels)
add_value_labels(ax)
#ax.legend(handles=[acc_patch,rec_patch,spec_patch], loc='lower right', borderaxespad=0.)
ax.set_axisbelow(True)
plt.grid(axis='y', color='grey')
fig.tight_layout()
plt.savefig('ddos2019_binaryclass_reduced_featuresets_bars.png',bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
sql-scavenger-hunt-day-5.ipynb | ###Markdown
If you haven't used BigQuery datasets on Kaggle previously, check out the Scavenger Hunt Handbook kernel to get started.
___
Previous days:
* [**Day 1:** SELECT, FROM & WHERE](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-1/)
* [**Day 2:** GROUP BY, HAVING & COUNT()](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-2/)
* [**Day 3:** ORDER BY & Dates](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-3/)
* [**Day 4:** WITH & AS](https://www.kaggle.com/rtatman/sql-scavenger-hunt-day-4/)
____
JOIN
___
Whew, we've come a long way from Day 1! By now, you have the tools to get many different configurations of information from a single table. But what if your database has more than one table and you want to look at information from multiple tables? That's where JOIN comes in! Today, we're going to learn about how to use JOIN to combine data from two tables. This will let us answer more types of questions. It's also one of the more complex parts of SQL. Don't worry, though, we're going to go through some examples together.

JOIN
___
Let's keep working with our imaginary Pets dataset, but this time let's add a second table. The first table, "Pets", has three columns, with information on the ID number of each pet, the pet's name and the type of animal it is. The new table, "Owners", has three columns, with the ID number of each owner, the name of the owner and the ID number of their pet. Each row in each table is associated with a single pet, and we refer to the same pets in both tables. We can tell this because there are two columns (ID in the "Pets" table and Pet_ID in the "Owners" table) that have the same information in them: the ID number of the pet. We can match rows that have the same value in these columns to get information that applies to a certain pet.

For example, we can see by looking at the Pets table that the pet that has the ID 1 is named Dr. Harris Bonkers. We can also tell by looking at the Owners table that the owner of the pet with the ID 1 is named Aubrey Little. We can use this information to figure out that Dr. Harris Bonkers is owned by Aubrey Little.

Fortunately, we don't have to do this by hand to figure out which owner's name goes with which pet name. We can use JOIN to do this for us! JOIN allows us to create a third, new, table that has information from both tables. For example, we might want to have a single table with just two columns: one with the name of the pet and one with the name of the owner. This would look something like this:

The syntax to create that table looks like this:

    SELECT p.Name AS Pet_Name, o.Name as Owner_Name
    FROM `bigquery-public-data.pet_records.pets` as p
    INNER JOIN `bigquery-public-data.pet_records.owners` as o
        ON p.ID = o.Pet_ID

Notice that since the ID column exists in both datasets, we have to clarify which one we want to use. When you're joining tables, it's a good habit to specify which table all of your columns come from. That way you don't have to pull up the schema every time you go back to read the query.

The type of JOIN we're using today is called an INNER JOIN. That just means that a row will only be put in the final output table if the value in the column you're using to combine them shows up in both the tables you're joining. For example, if Tom's ID code of 4 didn't exist in the `Pets` table, we would only get 3 rows back from this query. There are other types of JOIN, but an INNER JOIN won't give you an output that's larger than your input tables, so it's a good one to start with. 
> **What does "ON" do?** It says which column in each table to look at to combine the tables. Here we're using the "ID" column from the Pets table and the "Pet_ID" column from the Owners table.

Now that we've talked about the concept behind using JOIN, let's work through an example together.

Example: How many files are covered by each license?
____
Today we're going to be using the GitHub Repos dataset. GitHub is a place for people to store & collaborate on different versions of their computer code. A "repo" is a collection of code associated with a specific project. Most public code on GitHub is shared under a specific license, which determines how it can be used and by whom. For our example, we're going to look at how many different files have been released under each license. First, of course, we need to get our environment ready to go:
###Code
# import package with helper functions
import bq_helper
# create a helper object for this dataset
github = bq_helper.BigQueryHelper(active_project="bigquery-public-data",
dataset_name="github_repos")
###Output
_____no_output_____
###Markdown
Now we're ready to get started on our query. This one is going to be a bit of a beast, so stick with me! The only new syntax we'll see is around the JOIN clause; everything else is something we've already learned. :) First, I'm going to specify which columns I'd like to be returned in the final table that's returned to me. Here, I'm selecting the COUNT of the "path" column from the sample_files table and then calling it "number_of_files". I'm *also* specifying that I want to include the "license" column, even though there's no "license" column in the "sample_files" table. SELECT L.license, COUNT(sf.path) AS number_of_files FROM `bigquery-public-data.github_repos.sample_files` as sf Speaking of the JOIN clause, we still haven't actually told SQL we want to join anything! To do this, we need to specify what type of join we want (in this case an inner join) and which columns we want to JOIN ON. Here, I'm using ON to specify that I want to use the "repo_name" column from each table. INNER JOIN `bigquery-public-data.github_repos.licenses` as L ON sf.repo_name = L.repo_name And, finally, we have a GROUP BY and ORDER BY clause that apply to the final table that's been returned to us. We've seen these a couple of times at this point. :) GROUP BY license ORDER BY number_of_files DESC Alright, that was a lot, but you should have an idea what each part of this query is doing. :) Without any further ado, let's put it into action.
###Code
# You can use two dashes (--) to add comments in SQL
query = ("""
-- Select all the columns we want in our joined table
SELECT L.license, COUNT(sf.path) AS number_of_files
FROM `bigquery-public-data.github_repos.sample_files` as sf
-- Table to merge into sample_files
INNER JOIN `bigquery-public-data.github_repos.licenses` as L
ON sf.repo_name = L.repo_name -- what columns should we join on?
GROUP BY L.license
ORDER BY number_of_files DESC
""")
file_count_by_license = github.query_to_pandas_safe(query, max_gb_scanned=6)
###Output
_____no_output_____
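###Markdown
(A quick optional check, not in the original: bq_helper also has an `estimate_query_size` method that reports roughly how many GB a query would scan, which is handy for picking the `max_gb_scanned` limit used above.)
###Code
# Estimate the amount of data (in GB) the query above would scan, before running it.
print(github.estimate_query_size(query))
###Output
_____no_output_____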
###Markdown
Whew, that was a big query! But it gave us a nice tidy little table that nicely summarizes how many files have been committed under each license:
###Code
# print out all the returned results
print(file_count_by_license)
###Output
license number_of_files
0 gpl-2.0 22031724
1 mit 21186498
2 apache-2.0 7578582
3 gpl-3.0 5550163
4 bsd-3-clause 3319394
5 agpl-3.0 1435105
6 lgpl-2.1 962034
7 bsd-2-clause 779810
8 lgpl-3.0 684163
9 mpl-2.0 504080
10 cc0-1.0 437764
11 epl-1.0 389338
12 unlicense 209350
13 artistic-2.0 155854
14 isc 133570
###Markdown
And that's how to get started using JOIN in BigQuery! There are many other kinds of joins (you can [read about some here](https://cloud.google.com/bigquery/docs/reference/standard-sql/query-syntax#join-types)), so once you're very comfortable with INNER JOIN you can start exploring some of them. :) Scavenger hunt___Now it's your turn! Here is the question I would like you to get the data to answer. Just one today, since you've been working hard this week. :)* How many commits (recorded in the "sample_commits" table) have been made in repos written in the Python programming language? (I'm looking for the number of commits per repo for all the repos written in Python.) * You'll want to JOIN the sample_files and sample_commits tables to answer this. * **Hint:** You can figure out which files are written in Python by filtering results from the "sample_files" table using `WHERE path LIKE '%.py'`. This will return results where the "path" column ends in the text ".py", which is one way to identify which files have Python code. In order to answer this question, you can fork this notebook by hitting the blue "Fork Notebook" at the very top of this page (you may have to scroll up). "Forking" something is making a copy of it that you can edit on your own without changing the original.
###Code
# Your code goes here :)
query1 = """
with repodata as
(
SELECT sc.repo_name as reponame, sc.commit as commits, sf.path as path
FROM `bigquery-public-data.github_repos.sample_files` as sf inner join `bigquery-public-data.github_repos.sample_commits` as sc on sf.repo_name=sc.repo_name
WHERE sf.path like '%.py'
)
SELECT reponame as Repository_Name, count(commits) as No_of_Commits
FROM repodata
GROUP BY reponame
ORDER BY No_of_Commits DESC
"""
repo_name = github.query_to_pandas_safe(query1, max_gb_scanned=20)
print(repo_name.head())
import matplotlib.pyplot as plt
plt.barh(repo_name.Repository_Name,repo_name.No_of_Commits,log=True)
# Here log=True will set the axis to a logarithmic scale and hence we will be able to view all the data,
# as the scale for the data is not the same
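# (Added, optional) standard matplotlib labels to make the log-scaled chart easier to read
plt.xlabel("Number of commits (log scale)")
plt.ylabel("Repository")
plt.show()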
###Output
_____no_output_____ |
ai-platform-unified/notebooks/official/feature_store/gapic-feature-store.ipynb | ###Markdown
Run in Colab View on GitHub OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale.This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * How to import your features into Feature Store. * How to serve online prediction requests using the imported features. * How to access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage* Cloud BigtableLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Google Cloud Notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the Vertex SDK for Python.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip install {USER_FLAG} --upgrade git+https://github.com/googleapis/python-aiplatform.git@main-test
###Output
_____no_output_____
###Markdown
Restart the kernel After you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
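###Markdown
(Optional, not in the original notebook: if the detected project is not the one you want, you can point the Cloud SDK at it explicitly. This uses the `$`-interpolation of Python variables into shell commands mentioned above.)
###Code
# Optionally make sure gcloud uses the intended project for subsequent commands.
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____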
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for output You need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to create and specify the name of the table in which you want to store the output later. These will be used later in the notebook. **Make sure that the table name does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# In addition to the project ID and featurestore ID, the API endpoint also needs to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
###Markdown
Terminology and Concept Featurestore Data model Feature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, either a real or a virtual one.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityType. In the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create Featurestore The method to create a featurestore returns a [long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Calling `create_lro.result()` waits for the LRO to complete.
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
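###Markdown
(A small optional sketch: `create_lro` is a google-api-core `Operation`, so besides blocking on `.result()` you can poll it, for example with `.done()`, if you prefer not to block.)
###Code
# Non-blocking check of the long-running operation's status (sketch).
if create_lro.done():
    print("Featurestore creation finished.")
else:
    print("Featurestore creation still in progress...")
###Output
_____no_output_____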
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
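###Markdown
(The paragraph above also mentions ListFeaturestores; here is a minimal sketch that lists every featurestore under the same project and location. `parent=` is the flattened request field; if your client version differs, pass a `ListFeaturestoresRequest` instead.)
###Code
# List all featurestores in this project/location and print their resource names.
for fs in admin_client.list_featurestores(parent=BASE_RESOURCE_PATH):
    print(fs.name)
###Output
_____no_output_____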
###Markdown
Create Entity Type You can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
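###Markdown
(Optional sketch: you can confirm both entity types exist with ListEntityTypes, the analogue of ListFeaturestores at the entity-type level; again, `parent=` is assumed to be the flattened request field.)
###Code
# List the entity types under the featurestore created above.
for entity_type in admin_client.list_entity_types(
    parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
):
    print(entity_type.name)
###Output
_____no_output_____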
###Markdown
Create Feature You can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
###Markdown
Search created features While the [ListFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a single entity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestores and entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else. You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
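###Markdown
(For completeness, a sketch of the ListFeatures method contrasted with SearchFeatures above: it enumerates the features of one entity type rather than searching across featurestores.)
###Code
# List the features of the 'users' entity type and print their resource names and value types.
for feature in admin_client.list_features(
    parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users")
):
    print(feature.name, feature.value_type)
###Output
_____no_output_____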
###Markdown
Import Feature Values You need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and Layout As mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for Users When importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Import feature values for Movies Similarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1#featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive serving, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per request The ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
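###Markdown
(Optional sketch of how you might unpack that response, based on the header/entity_view layout described above: the header lists the feature IDs in order, and each entry of `entity_view.data` holds the corresponding value. Field names are taken from the v1beta1 protos; double-check them against your client version.)
###Code
# Re-issue the read and pair each feature ID from the header with its returned value.
resp = data_client.read_feature_values(
    featurestore_online_service_pb2.ReadFeatureValuesRequest(
        entity_type=admin_client.entity_type_path(
            PROJECT_ID, REGION, FEATURESTORE_ID, "users"
        ),
        entity_id="alice",
        feature_selector=feature_selector,
    )
)
feature_ids = [descriptor.id for descriptor in resp.header.feature_descriptors]
for feature_id, data in zip(feature_ids, resp.entity_view.data):
    print(feature_id, data.value)
###Output
_____no_output_____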
###Markdown
Read multiple entities per request To read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch Serving Batch Serving is used to fetch a large batch of feature values for high throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case **The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: you already imported into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training.

Table 1. Ground-truth Data

users | movies | timestamp
----- | ------ | ---------
alice | Cinema Paradiso | 2019-11-01T00:00:00Z
bob | The Shining | 2019-11-15T18:09:43Z
... | ... | ...

Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)

timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating
--------- | ----------------- | --- | ------ | ------------ | ------------------ | ------ | --------------
2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5
2019-11-01T00:00:00Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8
... | ... | ... | ... | ... | ... | ... | ...

Why timestamp? Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might be 54 years old, but now alice might be 56; featurestore returns `age=54` as alice's age, instead of `age=56`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature Values Assemble the request, which specifies the following info:* Where the label data is, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
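###Markdown
(Optional sketch: once the batch read finishes you can also peek at the generated training table directly from this notebook with the BigQuery client created earlier, instead of going through the console.)
###Code
# Preview a few rows of the exported training data (requires pandas and db-dtypes).
preview_query = "SELECT * FROM `{}.{}.{}` LIMIT 5".format(
    PROJECT_ID, DESTINATION_DATA_SET, DESTINATION_TABLE_NAME
)
print(client.query(preview_query).to_dataframe())
###Output
_____no_output_____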
###Markdown
After the LRO finishes, you should be able to see the result from the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning up To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial. You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale.This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sessions. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * How to import your features into Feature Store. * How to serve online prediction requests using the imported features. * How to access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage* Cloud BigtableLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Google Cloud Notebooks**, your environment already meetsall the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements.You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python developmentenvironment](https://cloud.google.com/python/setup) and the [Jupyterinstallation guide](https://jupyter.org/install) provide detailed instructionsfor meeting these requirements. The following steps provide a condensed set ofinstructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip3 install jupyter` on thecommand-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the Vertex SDK for Python.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade git+https://github.com/googleapis/python-aiplatform.git@main-test
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart kernel from *Kernel -> Restart Kernel*, or running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for outputYou need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to created and specify the name of the table you want to store the output later. These will be used later in the notebook.**Make sure that the table name does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# Other than project ID and featurestore ID and endpoints needs to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
###Markdown
Terminology and Concept Featurestore Data modelFeature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, real one or virtual one.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityTypeIn the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and like genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create FeaturestoreThe method to create a featurestore returns a[long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other APImethods too, such as updating or deleting a featurestore. Calling`create_fs_lro.result()` waits for the LRO to complete.
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
###Markdown
Create Entity TypeYou can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
###Markdown
Create FeatureYou can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
###Markdown
Search created featuresWhile the [ListFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a singleentity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestoresand entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else.You can query based on feature properties including feature ID, entity type ID,and feature description. You can also limit results by filtering on a specificfeaturestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
###Markdown
Import Feature ValuesYou need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and LayoutAs mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, sepecifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for UsersWhen importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Import feature values for MoviesSimilarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/vertex-ai/docs/reference/rpc/google.cloud.aiplatform.v1beta1featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive services, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per requestThe ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
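To make the header/entity_view pairing described above concrete, here is a small sketch (an addition, not from the original tutorial) that pairs each feature ID from the response header with the corresponding value in the entity view. It assumes the v1beta1 response shape (`header.feature_descriptors` and `entity_view.data`, aligned by position), as described in the text above.

```python
# Minimal sketch: pair feature IDs from the header with values from the entity view.
def summarize_read_response(response):
    feature_ids = [fd.id for fd in response.header.feature_descriptors]
    for feature_id, data in zip(feature_ids, response.entity_view.data):
        # `data.value` is a FeatureValue proto; printing it shows the typed value
        # and its generation time.
        print(f"{feature_id}: {data.value}")

read_response = data_client.read_feature_values(
    featurestore_online_service_pb2.ReadFeatureValuesRequest(
        entity_type=admin_client.entity_type_path(
            PROJECT_ID, REGION, FEATURESTORE_ID, "users"
        ),
        entity_id="alice",
        feature_selector=feature_selector,
    )
)
summarize_read_response(read_response)
```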
###Markdown
Read multiple entities per requestTo read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch ServingBatch Serving is used to fetch a large batch of feature values with high throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: which you already imported into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training.Table 1. Ground-truth Datausers | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | ----- 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5 2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8 ... | ... | ... | ... | ... | ... | ... | ... Why timestamp?Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might have been 55 years old, but today alice might be 56; the featurestore returns `age=55` as alice's age, instead of `age=56`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature ValuesAssemble the request, which specifies the following info:* Where is the label data, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
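As a concrete illustration of Table 1 (this sketch is an addition, not from the original tutorial), the read-instances file is simply a CSV whose columns are the entity type IDs plus a `timestamp` column. The header names used here mirror Table 1 and the sample `movie_prediction.csv`, so treat them as an assumption rather than a specification.

```python
# Illustrative only: write a local read-instances CSV shaped like Table 1.
# Column names ("users", "movies", "timestamp") follow the sample file used below.
import csv

rows = [
    {"users": "alice", "movies": "Cinema Paradiso", "timestamp": "2019-11-01T00:00:00Z"},
    {"users": "bob", "movies": "The Shining", "timestamp": "2019-11-15T18:09:43Z"},
]

with open("movie_prediction_sample.csv", "w", newline="") as f:
    csv_writer = csv.DictWriter(f, fieldnames=["users", "movies", "timestamp"])
    csv_writer.writeheader()
    csv_writer.writerows(rows)

# To use it, upload the file to Cloud Storage and point INPUT_CSV_FILE at its gs:// URI.
```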
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
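Once the batch read completes, you can also peek at the exported rows directly from Python instead of the console; the following is a minimal sketch (an addition to the tutorial) that uses the BigQuery client and destination names defined earlier in this notebook.

```python
# Minimal sketch: inspect a few rows of the exported training table.
# Uses `client`, PROJECT_ID, DESTINATION_DATA_SET and DESTINATION_TABLE_NAME from earlier cells.
preview_query = (
    f"SELECT * FROM `{PROJECT_ID}.{DESTINATION_DATA_SET}.{DESTINATION_TABLE_NAME}` LIMIT 5"
)
for row in client.query(preview_query).result():
    print(dict(row))
```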
###Markdown
After the LRO finishes, you should be able to see the result from the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloudproject](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____
###Markdown
OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale. This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [AI Platform (Unified)](https://cloud.google.com/ai-platform-unified/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sections. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * Import your features into Feature Store. * Serve online prediction requests using the imported features. * Access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* AI Platform (Unified)* Cloud Storage* Cloud BigtableLearn about [AI Platform (Unified) pricing](https://cloud.google.com/ai-platform-unified/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or AI Platform Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements. You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on the command-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the AI Platform SDK.
###Code
# Uninstall previous version of google-cloud-aiplatform SDK, if any.
!pip uninstall google-cloud-aiplatform -y
# Install the latest public release version
# !pip install -U google-cloud-aiplatform
# Install the testing version
!pip install git+https://github.com/googleapis/python-aiplatform.git@main-test
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the AI Platform (Unified) API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using AI Platform Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "AI Platform"into the filter box, and select **AI Platform Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for outputYou need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to create and specify the name of the table in which you want to store the output later. These will be used later in the notebook. **Make sure that the table does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# In addition to the project ID, the API endpoint and the input CSV file need to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents the parent resource path (project and location) under which featurestores are created.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
###Markdown
Terminology and Concept Featurestore Data modelFeature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, either real or virtual.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityType. In the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create FeaturestoreThe method to create a featurestore returns a [long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Calling `create_lro.result()` waits for the LRO to complete.
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/ai-platform-unified/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/ai-platform-unified/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
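The text above also mentions ListFeaturestores; as a small addition (not part of the original tutorial), here is a sketch of listing every featurestore under the project/location parent. It assumes the generated client exposes `list_featurestores` with a `parent` keyword, mirroring the other list and search methods used in this notebook.

```python
# Sketch: list all featurestores under the current project and region.
# Assumes the v1beta1 admin client exposes list_featurestores(parent=...).
for fs in admin_client.list_featurestores(parent=BASE_RESOURCE_PATH):
    print(fs.name)
```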
###Markdown
Create Entity TypeYou can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
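As an optional check (an addition to the tutorial), you can confirm that both entity types now exist by listing them under the featurestore; this assumes the client exposes `list_entity_types` with a `parent` keyword, analogous to the other list methods.

```python
# Sketch: list the entity types created under the featurestore.
# Assumes list_entity_types(parent=...) exists on the v1beta1 admin client.
featurestore_parent = admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
for entity_type in admin_client.list_entity_types(parent=featurestore_parent):
    print(entity_type.name, "-", entity_type.description)
```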
###Markdown
Create FeatureYou can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
###Markdown
Search created featuresWhile the [ListFeatures](https://cloud.google.com/ai-platform-unified/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a singleentity type, the [SearchFeatures](https://cloud.google.com/ai-platform-unified/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestoresand entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else.You can query based on feature properties including feature ID, entity type ID,and feature description. You can also limit results by filtering on a specificfeaturestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
###Markdown
Import Feature ValuesYou need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and LayoutAs mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for UsersWhen importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Import feature values for MoviesSimilarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/ai-platform-unified/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive services, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per requestThe ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
###Markdown
Read multiple entities per requestTo read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
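If you prefer structured output over the raw protos printed above, the stream can also be folded into a per-entity dictionary. The sketch below is an addition (not from the original tutorial) and assumes each streaming response carries either a populated `header` or an `entity_view`, as described earlier.

```python
# Sketch: collect streaming responses into {entity_id: {feature_id: FeatureValue}}.
# The first response carries only the header; later ones carry entity views.
results = {}
feature_ids = []
for response in data_client.streaming_read_feature_values(
    featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
        entity_type=admin_client.entity_type_path(
            PROJECT_ID, REGION, FEATURESTORE_ID, "users"
        ),
        entity_ids=["alice", "bob"],
        feature_selector=feature_selector,
    )
):
    if response.header.feature_descriptors:
        feature_ids = [fd.id for fd in response.header.feature_descriptors]
    else:
        view = response.entity_view
        results[view.entity_id] = dict(zip(feature_ids, view.data))

print(results)
```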
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch ServingBatch Serving is used to fetch a large batch of feature values with high throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: which you already imported into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training.Table 1. Ground-truth Datausers | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | ----- 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5 2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8 ... | ... | ... | ... | ... | ... | ... | ... Why timestamp?Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might have been 55 years old, but today alice might be 56; the featurestore returns `age=55` as alice's age, instead of `age=56`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature ValuesAssemble the request, which specifies the following info:* Where is the label data, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
###Markdown
After the LRO finishes, you should be able to see the result from the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloudproject](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial.You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____
###Markdown
OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale. This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sections. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * Import your features into Feature Store. * Serve online prediction requests using the imported features. * Access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage* Cloud BigtableLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Google Cloud Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements. You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the Vertex SDK for Python.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade git+https://github.com/googleapis/[email protected]
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for outputYou need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to create and specify the name of the table in which you want to store the output later. These will be used later in the notebook. **Make sure that the table does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# In addition to the project ID, the API endpoint and the input CSV file need to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents the parent resource path (project and location) under which featurestores are created.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
###Markdown
Terminology and Concept Featurestore Data modelFeature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, either real or virtual.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityType. In the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create FeaturestoreThe method to create a featurestore returns a [long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Calling `create_lro.result()` waits for the LRO to complete.
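To make the Featurestore -> EntityType -> Feature hierarchy concrete before creating anything, the sketch below (an addition, not from the original tutorial) prints the fully qualified resource names the admin client builds for each level. `featurestore_path` and `entity_type_path` are used later in this notebook; `feature_path` is assumed to follow the same generated-helper pattern, and the IDs are placeholders.

```python
# Sketch: show how the three hierarchy levels map to resource names.
# "movie_prediction", "users" and "age" are placeholder IDs for illustration.
print(admin_client.featurestore_path(PROJECT_ID, REGION, "movie_prediction"))
print(admin_client.entity_type_path(PROJECT_ID, REGION, "movie_prediction", "users"))
print(admin_client.feature_path(PROJECT_ID, REGION, "movie_prediction", "users", "age"))
```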
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
###Markdown
Create Entity TypeYou can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
###Markdown
Create FeatureYou can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
###Markdown
Search created featuresWhile the [ListFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a singleentity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestoresand entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else.You can query based on feature properties including feature ID, entity type ID,and feature description. You can also limit results by filtering on a specificfeaturestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
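Instead of dumping the full protos, you can also print a compact summary of each search result; this small sketch is an addition (not from the original tutorial) and assumes each returned Feature proto carries `name` and `description` fields, as seen in the create-feature requests above.

```python
# Sketch: print a one-line summary per search result.
for feature in admin_client.search_features(location=BASE_RESOURCE_PATH):
    print(f"{feature.name}  ({feature.description})")
```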
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
###Markdown
Import Feature ValuesYou need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and LayoutAs mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for UsersWhen importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
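###Markdown
If you want to verify how much data was ingested, the operation result reports per-import counts. The sketch below assumes the `ImportFeatureValuesResponse` exposes `imported_entity_count` and `imported_feature_value_count`, as in the v1beta1 proto.
###Code
# The LRO result is an ImportFeatureValuesResponse; calling result() again
# simply returns the cached response.
import_users_response = ingestion_lro.result()
print("Imported entities:", import_users_response.imported_entity_count)
print("Imported feature values:", import_users_response.imported_feature_value_count)
###Output
_____no_output_____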
###Markdown
Import feature values for MoviesSimilarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive serving, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per requestThe ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch features for the user entity whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
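###Markdown
If you want the values in a more convenient form, you can pair each returned value with its feature ID, since `header.feature_descriptors` and `entity_view.data` are aligned by position. The cell below is a sketch that re-issues the same read and assumes the v1beta1 `ReadFeatureValuesResponse` layout.
###Code
# Re-issue the read, keeping the response so values can be paired with feature IDs.
response = data_client.read_feature_values(
    featurestore_online_service_pb2.ReadFeatureValuesRequest(
        entity_type=admin_client.entity_type_path(
            PROJECT_ID, REGION, FEATURESTORE_ID, "users"
        ),
        entity_id="alice",
        feature_selector=feature_selector,
    )
)
# header.feature_descriptors[i] describes entity_view.data[i].
for descriptor, data in zip(response.header.feature_descriptors, response.entity_view.data):
    print(descriptor.id, "->", data.value)
###Output
_____no_output_____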
###Markdown
Read multiple entities per requestTo read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch ServingBatch Serving is used to fetch a large batch of feature values at high throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: you already imported into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training. Table 1. Ground-truth Datausers | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | ----- 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5 2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8 ... | ... | ... | ... | ... | ... | ... | ... Why timestamp?Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might be 54 years old, but now alice might be 56; featurestore returns `age=54` as alice's age, instead of `age=56`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature ValuesAssemble the request, which specifies the following info:* Where is the label data, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long-running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
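###Markdown
Besides checking the BigQuery console, you can preview the generated training table directly from the notebook. This is a sketch that reuses the BigQuery `client` and the `DESTINATION_*` variables defined earlier in this notebook.
###Code
# Preview a few rows of the generated training table.
preview_query = "SELECT * FROM `{}.{}.{}` LIMIT 5".format(
    PROJECT_ID, DESTINATION_DATA_SET, DESTINATION_TABLE_NAME
)
for row in client.query(preview_query).result():
    print(dict(row))
###Output
_____no_output_____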
###Markdown
After the LRO finishes, you should be able to see the result in the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale. This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sections. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * Import your features into Feature Store. * Serve online prediction requests using the imported features. * Access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage* Cloud BigtableLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Google Cloud Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements. You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip install jupyter` on the command-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the Vertex SDK for Python.
###Code
# Uninstall previous version of google-cloud-aiplatform SDK, if any.
!pip uninstall google-cloud-aiplatform -y
# Install the latest public release version
# !pip install -U google-cloud-aiplatform
# Install the testing version
!pip install git+https://github.com/googleapis/python-aiplatform.git@main-test
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebooks, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for outputYou need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to create and specify the name of the table in which you want to store the output. These will be used later in the notebook.**Make sure that the table name does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
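###Markdown
Optionally, confirm that the dataset exists and was created in the expected location; `get_dataset` raises `NotFound` otherwise. This is a small sanity check, not a required step.
###Code
# Verify the output dataset exists and is in the expected region.
created_dataset = client.get_dataset(dataset_id)
print(created_dataset.dataset_id, created_dataset.location)
###Output
_____no_output_____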
###Markdown
Import libraries and define constants
###Code
# In addition to the project ID, the API endpoint and input file below need to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
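###Markdown
As a quick sanity check of the admin client and endpoint, you can list any featurestores that already exist in this project and region; the list may well be empty. This is an optional sketch, not part of the original flow.
###Code
# List existing featurestores in this project/region (may be empty).
for fs in admin_client.list_featurestores(parent=BASE_RESOURCE_PATH):
    print(fs.name)
###Output
_____no_output_____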
###Markdown
Terminology and Concept Featurestore Data modelFeature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, real or virtual.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityTypeIn the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create FeaturestoreThe method to create a featurestore returns a [long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Calling `create_lro.result()` waits for the LRO to complete.
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
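###Markdown
The returned `Featurestore` message also carries the serving configuration you set at creation time; for example, you can print the provisioned node count. A minimal sketch:
###Code
# Print the fixed node count configured for online serving.
fs = admin_client.get_featurestore(
    name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
print("Online serving nodes:", fs.online_serving_config.fixed_node_count)
###Output
_____no_output_____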
###Markdown
Create Entity TypeYou can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
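###Markdown
To confirm that both entity types exist under the featurestore, you can list them with the ListEntityTypes API. A minimal sketch using the clients defined above:
###Code
# List the entity types under the featurestore.
for entity_type in admin_client.list_entity_types(
    parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
):
    print(entity_type.name)
###Output
_____no_output_____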
###Markdown
Create FeatureYou can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
###Markdown
Search created featuresWhile the [ListFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a single entity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestores and entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else. You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
###Markdown
Import Feature ValuesYou need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and LayoutAs mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for UsersWhen importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Import feature values for MoviesSimilarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive serving, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per requestThe ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch features for the user entity whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
###Markdown
Read multiple entities per requestTo read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
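###Markdown
If you prefer to work with the entity views as a list, you can skip the header-only first message. The sketch below re-issues the request (the stream above has already been consumed) and assumes header-only messages carry an empty `entity_view`.
###Code
# Re-issue the streaming read and collect only the entity views.
stream = data_client.streaming_read_feature_values(
    featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
        entity_type=admin_client.entity_type_path(
            PROJECT_ID, REGION, FEATURESTORE_ID, "users"
        ),
        entity_ids=["alice", "bob"],
        feature_selector=feature_selector,
    )
)
entity_views = [msg.entity_view for msg in stream if msg.entity_view.entity_id]
print(len(entity_views), "entity views returned")
###Output
_____no_output_____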
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch ServingBatch Serving is used to fetch a large batch of feature values at high throughput, typically for training a model or batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: you already imported into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training. Table 1. Ground-truth Datausers | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | ----- 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5 2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8 ... | ... | ... | ... | ... | ... | ... | ... Why timestamp?Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might be 54 years old, but now alice might be 56; featurestore returns `age=54` as alice's age, instead of `age=56`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature ValuesAssemble the request, which specifies the following info:* Where is the label data, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long-running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
###Markdown
After the LRO finishes, you should be able to see the result in the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____
###Markdown
Run in Colab View on GitHub OverviewThis Colab introduces Feature Store, a managed cloud service for machine learning engineers and data scientists to store, serve, manage and share machine learning features at a large scale. This Colab assumes that you understand basic Google Cloud concepts such as [Project](https://cloud.google.com/storage/docs/projects), [Storage](https://cloud.google.com/storage) and [Vertex AI](https://cloud.google.com/vertex-ai/docs). Some machine learning knowledge is also helpful but not required. DatasetThis Colab uses a movie recommendation dataset as an example throughout all the sections. The task is to train a model to predict if a user is going to watch a movie and serve this model online. ObjectiveIn this notebook, you will learn how to: * Import your features into Feature Store. * Serve online prediction requests using the imported features. * Access imported features in offline jobs, such as training jobs. Costs This tutorial uses billable components of Google Cloud:* Vertex AI* Cloud Storage* Cloud BigtableLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. Set up your local development environment**If you are using Colab or Google Cloud Notebooks**, your environment already meets all the requirements to run this notebook. You can skip this step. **Otherwise**, make sure your environment meets this notebook's requirements. You need the following:* The Google Cloud SDK* Git* Python 3* virtualenv* Jupyter notebook running in a virtual environment with Python 3The Google Cloud guide to [Setting up a Python development environment](https://cloud.google.com/python/setup) and the [Jupyter installation guide](https://jupyter.org/install) provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:1. [Install and initialize the Cloud SDK.](https://cloud.google.com/sdk/docs/)1. [Install Python 3.](https://cloud.google.com/python/setupinstalling_python)1. [Install virtualenv](https://cloud.google.com/python/setupinstalling_and_using_virtualenv) and create a virtual environment that uses Python 3. Activate the virtual environment.1. To install Jupyter, run `pip3 install jupyter` on the command-line in a terminal shell.1. To launch Jupyter, run `jupyter notebook` on the command-line in a terminal shell.1. Open this notebook in the Jupyter Notebook Dashboard. Install additional packagesFor this Colab, you need the Vertex SDK for Python.
###Code
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install {USER_FLAG} --upgrade git+https://github.com/googleapis/python-aiplatform.git@main-test
###Output
_____no_output_____
###Markdown
Restart the kernelAfter you install the SDK, you need to restart the notebook kernel so it can find the packages. You can restart the kernel from *Kernel -> Restart Kernel*, or by running the following:
###Code
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin Select a GPU runtime**Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select "Runtime --> Change runtime type > GPU"** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.1. [Make sure that billing is enabled for your project](https://cloud.google.com/billing/docs/how-to/modify-project).1. [Enable the Vertex AI API and Compute Engine API](https://console.cloud.google.com/flows/enableapi?apiid=aiplatform.googleapis.com,compute_component).1. If you are running this notebook locally, you will need to install the [Cloud SDK](https://cloud.google.com/sdk).1. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands. Set your project ID**If you don't know your project ID**, you may be able to get your project ID using `gcloud`.
###Code
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
###Output
_____no_output_____
###Markdown
Otherwise, set your project ID here.
###Code
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "python-docs-samples-tests" # @param {type:"string"}
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebooks**, your environment is alreadyauthenticated. Skip this step. **If you are using Colab**, run the cell below and follow the instructionswhen prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:1. In the Cloud Console, go to the [**Create service account key** page](https://console.cloud.google.com/apis/credentials/serviceaccountkey).2. Click **Create service account**.3. In the **Service account name** field, enter a name, and click **Create**.4. In the **Grant this service account access to project** section, click the **Role** drop-down list. Type "Vertex AI"into the filter box, and select **Vertex AI Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.5. Click *Create*. A JSON file that contains your key downloads to yourlocal environment.6. Enter the path to your service account key as the`GOOGLE_APPLICATION_CREDENTIALS` variable in the cell below and run the cell.
###Code
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Prepare for output Step 1. Create dataset for outputYou need a BigQuery dataset to host the output data in `us-central1`. Input the name of the dataset you want to create and specify the name of the table in which you want to store the output. These will be used later in the notebook.**Make sure that the table name does NOT already exist**.
###Code
from datetime import datetime
from google.cloud import bigquery
# Output dataset
DESTINATION_DATA_SET = "movie_predictions" # @param {type:"string"}
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
DESTINATION_DATA_SET = "{prefix}_{timestamp}".format(
prefix=DESTINATION_DATA_SET, timestamp=TIMESTAMP
)
# Output table. Make sure that the table does NOT already exist; the BatchReadFeatureValues API cannot overwrite an existing table
DESTINATION_TABLE_NAME = "training_data" # @param {type:"string"}
DESTINATION_PATTERN = "bq://{project}.{dataset}.{table}"
DESTINATION_TABLE_URI = DESTINATION_PATTERN.format(
project=PROJECT_ID, dataset=DESTINATION_DATA_SET, table=DESTINATION_TABLE_NAME
)
# Create dataset
REGION = "us-central1" # @param {type:"string"}
client = bigquery.Client()
dataset_id = "{}.{}".format(client.project, DESTINATION_DATA_SET)
dataset = bigquery.Dataset(dataset_id)
dataset.location = REGION
dataset = client.create_dataset(dataset, timeout=30)
print("Created dataset {}.{}".format(client.project, dataset.dataset_id))
###Output
_____no_output_____
###Markdown
Import libraries and define constants
###Code
# In addition to the project ID, the API endpoint and input file below need to be set
API_ENDPOINT = "us-central1-aiplatform.googleapis.com" # @param {type:"string"}
INPUT_CSV_FILE = "gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movie_prediction.csv"
from google.cloud.aiplatform_v1beta1 import (
FeaturestoreOnlineServingServiceClient, FeaturestoreServiceClient)
from google.cloud.aiplatform_v1beta1.types import FeatureSelector, IdMatcher
from google.cloud.aiplatform_v1beta1.types import \
entity_type as entity_type_pb2
from google.cloud.aiplatform_v1beta1.types import feature as feature_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore as featurestore_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_monitoring as featurestore_monitoring_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_online_service as featurestore_online_service_pb2
from google.cloud.aiplatform_v1beta1.types import \
featurestore_service as featurestore_service_pb2
from google.cloud.aiplatform_v1beta1.types import io as io_pb2
from google.protobuf.duration_pb2 import Duration
# Create admin_client for CRUD and data_client for reading feature values.
admin_client = FeaturestoreServiceClient(client_options={"api_endpoint": API_ENDPOINT})
data_client = FeaturestoreOnlineServingServiceClient(
client_options={"api_endpoint": API_ENDPOINT}
)
# Represents featurestore resource path.
BASE_RESOURCE_PATH = admin_client.common_location_path(PROJECT_ID, REGION)
###Output
_____no_output_____
###Markdown
Terminology and Concept Featurestore Data modelFeature Store organizes data with the following 3 important hierarchical concepts:```Featurestore -> EntityType -> Feature```* **Featurestore**: the place to store your features* **EntityType**: under a Featurestore, an *EntityType* describes an object to be modeled, real or virtual.* **Feature**: under an EntityType, a *feature* describes an attribute of the EntityTypeIn the movie prediction example, you will create a featurestore called *movie_prediction*. This store has 2 entity types: *Users* and *Movies*. The Users entity type has the age, gender, and liked genres features. The Movies entity type has the genres and average rating features. Create Featurestore and Define Schemas Create FeaturestoreThe method to create a featurestore returns a [long-running operation](https://google.aip.dev/151) (LRO). An LRO starts an asynchronous job. LROs are returned for other API methods too, such as updating or deleting a featurestore. Calling `create_lro.result()` waits for the LRO to complete.
###Code
FEATURESTORE_ID = "movie_prediction_{timestamp}".format(timestamp=TIMESTAMP)
create_lro = admin_client.create_featurestore(
featurestore_service_pb2.CreateFeaturestoreRequest(
parent=BASE_RESOURCE_PATH,
featurestore_id=FEATURESTORE_ID,
featurestore=featurestore_pb2.Featurestore(
display_name="Featurestore for movie prediction",
online_serving_config=featurestore_pb2.Featurestore.OnlineServingConfig(
fixed_node_count=3
),
),
)
)
# Wait for LRO to finish and get the LRO result.
print(create_lro.result())
###Output
_____no_output_____
###Markdown
You can use [GetFeaturestore](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.GetFeaturestore) or [ListFeaturestores](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeaturestores) to check if the Featurestore was successfully created. The following example gets the details of the Featurestore.
###Code
admin_client.get_featurestore(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID)
)
###Output
_____no_output_____
###Markdown
Create Entity TypeYou can specify a monitoring config which will by default be inherited by all Features under this EntityType.
###Code
# Create users entity type with monitoring enabled.
# All Features belonging to this EntityType will by default inherit the monitoring config.
users_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="users",
entity_type=entity_type_pb2.EntityType(
description="Users entity",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=86400), # 1 day
),
),
),
)
)
# Similarly, wait for EntityType creation operation.
print(users_entity_type_lro.result())
# Create movies entity type without a monitoring configuration.
movies_entity_type_lro = admin_client.create_entity_type(
featurestore_service_pb2.CreateEntityTypeRequest(
parent=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
entity_type_id="movies",
entity_type=entity_type_pb2.EntityType(description="Movies entity"),
)
)
# Similarly, wait for EntityType creation operation.
print(movies_entity_type_lro.result())
###Output
_____no_output_____
###Markdown
Create FeatureYou can also set a custom monitoring configuration at the Feature level, and view the properties and metrics in the console: sample [properties](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Properties.png), sample [metrics](https://storage.googleapis.com/cloud-samples-data/ai-platform-unified/datasets/featurestore/Feature%20Snapshot%20Distribution.png).
###Code
# Create features for the 'users' entity.
# 'age' Feature leaves the monitoring config unset, which means it'll inherit the config from EntityType.
# 'gender' Feature explicitly disables monitoring.
# 'liked_genres' Feature is a STRING_ARRAY type, so it is automatically excluded from monitoring.
# For Features with monitoring enabled, distribution statistics are updated periodically in the console.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "users"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.INT64,
description="User age",
),
feature_id="age",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="User gender",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
disabled=True,
),
),
),
feature_id="gender",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING_ARRAY,
description="An array of genres that this user liked",
),
feature_id="liked_genres",
),
],
).result()
# Create features for movies type.
# 'title' Feature enables monitoring.
admin_client.batch_create_features(
parent=admin_client.entity_type_path(PROJECT_ID, REGION, FEATURESTORE_ID, "movies"),
requests=[
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The title of the movie",
monitoring_config=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig(
snapshot_analysis=featurestore_monitoring_pb2.FeaturestoreMonitoringConfig.SnapshotAnalysis(
monitoring_interval=Duration(seconds=172800), # 2 days
),
),
),
feature_id="title",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.STRING,
description="The genres of the movie",
),
feature_id="genres",
),
featurestore_service_pb2.CreateFeatureRequest(
feature=feature_pb2.Feature(
value_type=feature_pb2.Feature.ValueType.DOUBLE,
description="The average rating for the movie, range is [1.0-5.0]",
),
feature_id="average_rating",
),
],
).result()
###Output
_____no_output_____
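###Markdown
You can also fetch a single feature definition to confirm its value type and description. The sketch below assumes the `feature_path` helper and the GetFeature API available on the admin client.
###Code
# Fetch the 'average_rating' feature of the 'movies' entity type.
average_rating_feature = admin_client.get_feature(
    name=admin_client.feature_path(
        PROJECT_ID, REGION, FEATURESTORE_ID, "movies", "average_rating"
    )
)
print(average_rating_feature.value_type, average_rating_feature.description)
###Output
_____no_output_____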
###Markdown
Search created featuresWhile the [ListFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.ListFeatures) method allows you to easily view all features of a single entity type, the [SearchFeatures](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1google.cloud.aiplatform.v1beta1.FeaturestoreService.SearchFeatures) method searches across all featurestores and entity types in a given location (such as `us-central1`). This can help you discover features that were created by someone else. You can query based on feature properties including feature ID, entity type ID, and feature description. You can also limit results by filtering on a specific featurestore, feature value type, and/or labels.
###Code
# Search for all features across all featurestores.
list(admin_client.search_features(location=BASE_RESOURCE_PATH))
###Output
_____no_output_____
###Markdown
Now, narrow down the search to features that are of type `DOUBLE`
###Code
# Search for all features with value type `DOUBLE`
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="value_type=DOUBLE"
)
)
)
###Output
_____no_output_____
###Markdown
Or, limit the search results to features with specific keywords in their ID and type.
###Code
# Filter on feature value type and keywords.
list(
admin_client.search_features(
featurestore_service_pb2.SearchFeaturesRequest(
location=BASE_RESOURCE_PATH, query="feature_id:title AND value_type=STRING"
)
)
)
###Output
_____no_output_____
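###Markdown
The query grammar above should also cover the feature description field mentioned earlier. The cell below is an untested sketch added here for illustration only; it assumes that `description:` supports keyword matching in the same way as `feature_id:` does in the previous example.
###Code
# Illustrative sketch (assumption): match features whose description mentions "rating".
list(
    admin_client.search_features(
        featurestore_service_pb2.SearchFeaturesRequest(
            location=BASE_RESOURCE_PATH, query="description:rating"
        )
    )
)
###Output
_____no_output_____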
###Markdown
Import Feature ValuesYou need to import feature values before you can use them for online/offline serving. In this step, you will learn how to import feature values by calling the ImportFeatureValues API using the Python SDK. Source Data Format and LayoutAs mentioned above, BigQuery table/Avro/CSV are supported. No matter what format you are using, each imported entity *must* have an ID; also, each entity can *optionally* have a timestamp, specifying when the feature values are generated. This Colab uses Avro as an input, located at this public [bucket](https://pantheon.corp.google.com/storage/browser/cloud-samples-data/ai-platform-unified/datasets/featurestore;tab=objects?project=storage-samples&prefix=&forceOnObjectsSortingFiltering=false). The Avro schemas are as follows:**For the Users entity**:```schema = { "type": "record", "name": "User", "fields": [ { "name":"user_id", "type":["null","string"] }, { "name":"age", "type":["null","long"] }, { "name":"gender", "type":["null","string"] }, { "name":"liked_genres", "type":{"type":"array","items":"string"} }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ] }```**For the Movies entity**```schema = { "type": "record", "name": "Movie", "fields": [ { "name":"movie_id", "type":["null","string"] }, { "name":"average_rating", "type":["null","double"] }, { "name":"title", "type":["null","string"] }, { "name":"genres", "type":["null","string"] }, { "name":"update_time", "type":["null",{"type":"long","logicalType":"timestamp-micros"}] }, ]}``` Import feature values for UsersWhen importing, specify the following in your request:* Data source format: BigQuery Table/Avro/CSV* Data source URL* Destination: featurestore/entity types/features to be imported
###Code
import_users_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
avro_source=io_pb2.AvroSource(
# Source
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/users.avro"
]
)
),
entity_id_field="user_id",
feature_specs=[
# Features
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="age"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="gender"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="liked_genres"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_users_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Import feature values for MoviesSimilarly, import feature values for 'movies' into the featurestore.
###Code
import_movie_request = featurestore_service_pb2.ImportFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "movies"
),
avro_source=io_pb2.AvroSource(
gcs_source=io_pb2.GcsSource(
uris=[
"gs://cloud-samples-data-us-central1/ai-platform-unified/datasets/featurestore/movies.avro"
]
)
),
entity_id_field="movie_id",
feature_specs=[
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="title"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(id="genres"),
featurestore_service_pb2.ImportFeatureValuesRequest.FeatureSpec(
id="average_rating"
),
],
feature_time_field="update_time",
worker_count=10,
)
# Start to import, will take a couple of minutes
ingestion_lro = admin_client.import_feature_values(import_movie_request)
# Polls for the LRO status and prints when the LRO has completed
ingestion_lro.result()
###Output
_____no_output_____
###Markdown
Online serving The [Online Serving APIs](https://cloud.google.com/vertex-ai/featurestore/docs/reference/rpc/google.cloud.aiplatform.v1beta1featurestoreonlineservingservice) let you serve feature values for small batches of entities. They are designed for latency-sensitive serving, such as online model prediction. For example, for a movie service, you might want to quickly show movies that the current user would most likely watch by using online predictions. Read one entity per requestThe ReadFeatureValues API is used to read feature values of one entity; hence its custom HTTP verb is `readFeatureValues`. By default, the API will return the latest value of each feature, meaning the feature values with the most recent timestamp. To read feature values, specify the entity ID and features to read. The response contains a `header` and an `entity_view`. Each row of data in the `entity_view` contains one feature value, in the same order of features as listed in the response header.
###Code
# Fetch the following 3 features.
feature_selector = FeatureSelector(
id_matcher=IdMatcher(ids=["age", "gender", "liked_genres"])
)
data_client.read_feature_values(
featurestore_online_service_pb2.ReadFeatureValuesRequest(
# Fetch from the following feature store/entity type
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
# Fetch the user features whose ID is "alice"
entity_id="alice",
feature_selector=feature_selector,
)
)
###Output
_____no_output_____
###Markdown
Read multiple entities per requestTo read feature values from multiple entities, use the StreamingReadFeatureValues API, which is almost identical to the previous ReadFeatureValues API. Note that fetching only a small number of entities is recommended when using this API due to its latency-sensitive nature.
###Code
# Read the same set of features as above, but for multiple entities.
response_stream = data_client.streaming_read_feature_values(
featurestore_online_service_pb2.StreamingReadFeatureValuesRequest(
entity_type=admin_client.entity_type_path(
PROJECT_ID, REGION, FEATURESTORE_ID, "users"
),
entity_ids=["alice", "bob"],
feature_selector=feature_selector,
)
)
# Iterate and process response. Note the first one is always the header only.
for response in response_stream:
print(response)
###Output
_____no_output_____
###Markdown
Now that you have learned how to fetch imported feature values for online serving, the next step is learning how to use imported feature values for offline use cases. Batch ServingBatch Serving is used to fetch a large batch of feature values at high throughput, typically for training a model or for batch prediction. In this section, you will learn how to prepare training examples by calling the BatchReadFeatureValues API. Use case**The task** is to prepare a training dataset to train a model, which predicts if a given user will watch a given movie. To achieve this, you need 2 sets of input:* Features: you already imported these into the featurestore.* Labels: the ground-truth data recording that user X has watched movie Y. To be more specific, the ground-truth observation is described in Table 1 and the desired training dataset is described in Table 2. Each row in Table 2 is a result of joining the imported feature values from Feature Store according to the entity IDs and timestamps in Table 1. In this example, the `age`, `gender` and `liked_genres` features from `users` and the `genres` and `average_rating` features from `movies` are chosen to train the model. Note that only positive examples are shown in these 2 tables, i.e., you can imagine there is a label column whose values are all `True`. The BatchReadFeatureValues API takes Table 1 as input, joins all required feature values from the featurestore, and returns Table 2 for training. Table 1. Ground-truth Datausers | movies | timestamp ----- | -------- | -------------------- alice | Cinema Paradiso | 2019-11-01T00:00:00Z bob | The Shining | 2019-11-15T18:09:43Z ... | ... | ... Table 2. Expected Training Data Generated by Batch Read API (Positive Samples)timestamp | entity_type_users | age | gender | liked_genres | entity_type_movies | genres | average_rating -------------------- | ----------------- | --------------- | ---------------- | -------------------- | -------- | --------- | ----- 2019-11-01T00:00:00Z | alice | 55 | F | [Drama, Comedy] | Cinema Paradiso | Romance | 4.5 2019-11-15T18:09:43Z | bob | 35 | M | [Action, Crime] | The Shining | Horror | 4.8 ... | ... | ... | ... | ... | ... | ... | ... Why timestamp?Note that there is a `timestamp` column in Table 2. This indicates the time when the ground-truth was observed. This is to avoid data inconsistency. For example, the 1st row of Table 2 indicates that user `alice` watched movie `Cinema Paradiso` on `2019-11-01T00:00:00Z`. The featurestore keeps feature values for all timestamps but fetches feature values *only* at the given timestamp during batch serving. On 2019-11-01 alice might have been 55 years old, but today alice might be 57; the featurestore returns `age=55` as alice's age, instead of `age=57`, because that is the value of the feature at the observation time. Similarly, other features might be time-variant as well, such as liked_genres. Batch Read Feature ValuesAssemble the request, which specifies the following info:* Where the label data is, i.e., Table 1.* Which features are read, i.e., the column names in Table 2. The output is stored in a BigQuery table.
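Before assembling the request, it may help to see what the read-instance file behind `INPUT_CSV_FILE` could look like. The cell below is purely illustrative and is not part of the original notebook; it assumes the layout of one column per entity type plus a `timestamp` column, mirroring Table 1.
###Code
# Illustrative only: build a tiny read-instance table in the spirit of Table 1.
# Assumption (not from this notebook): one column per entity type ("users", "movies")
# plus a "timestamp" column with RFC 3339 timestamps.
import pandas as pd

read_instances = pd.DataFrame(
    {
        "users": ["alice", "bob"],
        "movies": ["Cinema Paradiso", "The Shining"],
        "timestamp": ["2019-11-01T00:00:00Z", "2019-11-15T18:09:43Z"],
    }
)
print(read_instances.to_csv(index=False))
###Output
_____no_output_____
###Markdown
Now assemble and run the actual batch read request: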
###Code
batch_serving_request = featurestore_service_pb2.BatchReadFeatureValuesRequest(
# featurestore info
featurestore=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
# URL for the label data, i.e., Table 1.
csv_read_instances=io_pb2.CsvSource(
gcs_source=io_pb2.GcsSource(uris=[INPUT_CSV_FILE])
),
destination=featurestore_service_pb2.FeatureValueDestination(
bigquery_destination=io_pb2.BigQueryDestination(
# Output to BigQuery table created earlier
output_uri=DESTINATION_TABLE_URI
)
),
entity_type_specs=[
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'age', 'gender' and 'liked_genres' features from the 'users' entity
entity_type_id="users",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(
ids=[
# features, use "*" if you want to select all features within this entity type
"age",
"gender",
"liked_genres",
]
)
),
),
featurestore_service_pb2.BatchReadFeatureValuesRequest.EntityTypeSpec(
# Read the 'average_rating' and 'genres' feature values of the 'movies' entity
entity_type_id="movies",
feature_selector=FeatureSelector(
id_matcher=IdMatcher(ids=["average_rating", "genres"])
),
),
],
)
# Execute the batch read
batch_serving_lro = admin_client.batch_read_feature_values(batch_serving_request)
# This long running operation will poll until the batch read finishes.
batch_serving_lro.result()
###Output
_____no_output_____
###Markdown
After the LRO finishes, you should be able to see the result from the [BigQuery console](https://console.cloud.google.com/bigquery), in the dataset created earlier. Cleaning upTo clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projectsshutting_down_projects) you used for the tutorial. You can also keep the project but delete the featurestore:
###Code
admin_client.delete_featurestore(
request=featurestore_service_pb2.DeleteFeaturestoreRequest(
name=admin_client.featurestore_path(PROJECT_ID, REGION, FEATURESTORE_ID),
force=True,
)
).result()
client.delete_dataset(
DESTINATION_DATA_SET, delete_contents=True, not_found_ok=True
) # Make an API request.
print("Deleted dataset '{}'.".format(DESTINATION_DATA_SET))
###Output
_____no_output_____ |
junk/hw7.ipynb | ###Markdown
CPSC 330 hw7
###Code
import numpy as np
import pandas as pd
### BEGIN SOLUTION
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OrdinalEncoder, OneHotEncoder
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import r2_score
### END SOLUTION
###Output
_____no_output_____
###Markdown
Instructionsrubric={points:5}Follow the [homework submission instructions](https://github.students.cs.ubc.ca/cpsc330-2019w-t2/home/blob/master/docs/homework_instructions.md). Exercise 1: time series predictionIn this exercise we'll be looking at a [dataset of avocado prices](https://www.kaggle.com/neuromusic/avocado-prices). You should start by downloading the dataset. As usual, please do not commit it to your repos.
###Code
df = pd.read_csv("avocado.csv", parse_dates=["Date"], index_col=0)
df.head()
df.shape
df["Date"].min()
df["Date"].max()
###Output
_____no_output_____
###Markdown
It looks like the data ranges from the start of 2015 to March 2018 (~2 years ago), for a total of 3.25 years or so. Let's split the data so that we have 6 months of test data.
###Code
split_date = '20170925'
df_train = df[df["Date"] <= split_date]
df_test = df[df["Date"] > split_date]
assert len(df_train) + len(df_test) == len(df)
###Output
_____no_output_____
###Markdown
1(a)rubric={points:3}In the Rain in Australia dataset from Lecture 16, we had different measurements for each Location. What about this dataset: for which categorical feature(s), if any, do we have separate measurements? Justify your answer by referencing the dataset. BEGIN SOLUTION
###Code
df.sort_values(by="Date").head()
###Output
_____no_output_____
###Markdown
From the above, we definitely see measurements on the same day in different regions. Let's now group by region.
###Code
df.sort_values(by=["region", "Date"]).head()
###Output
_____no_output_____
###Markdown
From the above we see that, even in Albany, we have two measurements on the same date. This seems to be due to the type of avocado.
###Code
df.sort_values(by=["region", "type", "Date"]).head()
###Output
_____no_output_____
###Markdown
Great, now we have a sequence of dates with a single row per date. So, the answer is that we have a separate timeseries for each combination of `region` and `type`. END SOLUTION 1(b)rubric={points:3}In the Rain in Australia dataset, the measurements were generally equally spaced but with some exceptions. How about with this dataset? Justify your answer by referencing the dataset. BEGIN SOLUTION I think it's not unreasonable to do this on `df` rather than `df_train`, but either way is fine.
###Code
for name, group in df.groupby(['region', 'type']):
print("%-40s %s" % (name, group["Date"].sort_values().diff().min()))
for name, group in df.groupby(['region', 'type']):
print("%-40s %s" % (name, group["Date"].sort_values().diff().max()))
###Output
('Albany', 'conventional') 7 days 00:00:00
('Albany', 'organic') 7 days 00:00:00
('Atlanta', 'conventional') 7 days 00:00:00
('Atlanta', 'organic') 7 days 00:00:00
('BaltimoreWashington', 'conventional') 7 days 00:00:00
('BaltimoreWashington', 'organic') 7 days 00:00:00
('Boise', 'conventional') 7 days 00:00:00
('Boise', 'organic') 7 days 00:00:00
('Boston', 'conventional') 7 days 00:00:00
('Boston', 'organic') 7 days 00:00:00
('BuffaloRochester', 'conventional') 7 days 00:00:00
('BuffaloRochester', 'organic') 7 days 00:00:00
('California', 'conventional') 7 days 00:00:00
('California', 'organic') 7 days 00:00:00
('Charlotte', 'conventional') 7 days 00:00:00
('Charlotte', 'organic') 7 days 00:00:00
('Chicago', 'conventional') 7 days 00:00:00
('Chicago', 'organic') 7 days 00:00:00
('CincinnatiDayton', 'conventional') 7 days 00:00:00
('CincinnatiDayton', 'organic') 7 days 00:00:00
('Columbus', 'conventional') 7 days 00:00:00
('Columbus', 'organic') 7 days 00:00:00
('DallasFtWorth', 'conventional') 7 days 00:00:00
('DallasFtWorth', 'organic') 7 days 00:00:00
('Denver', 'conventional') 7 days 00:00:00
('Denver', 'organic') 7 days 00:00:00
('Detroit', 'conventional') 7 days 00:00:00
('Detroit', 'organic') 7 days 00:00:00
('GrandRapids', 'conventional') 7 days 00:00:00
('GrandRapids', 'organic') 7 days 00:00:00
('GreatLakes', 'conventional') 7 days 00:00:00
('GreatLakes', 'organic') 7 days 00:00:00
('HarrisburgScranton', 'conventional') 7 days 00:00:00
('HarrisburgScranton', 'organic') 7 days 00:00:00
('HartfordSpringfield', 'conventional') 7 days 00:00:00
('HartfordSpringfield', 'organic') 7 days 00:00:00
('Houston', 'conventional') 7 days 00:00:00
('Houston', 'organic') 7 days 00:00:00
('Indianapolis', 'conventional') 7 days 00:00:00
('Indianapolis', 'organic') 7 days 00:00:00
('Jacksonville', 'conventional') 7 days 00:00:00
('Jacksonville', 'organic') 7 days 00:00:00
('LasVegas', 'conventional') 7 days 00:00:00
('LasVegas', 'organic') 7 days 00:00:00
('LosAngeles', 'conventional') 7 days 00:00:00
('LosAngeles', 'organic') 7 days 00:00:00
('Louisville', 'conventional') 7 days 00:00:00
('Louisville', 'organic') 7 days 00:00:00
('MiamiFtLauderdale', 'conventional') 7 days 00:00:00
('MiamiFtLauderdale', 'organic') 7 days 00:00:00
('Midsouth', 'conventional') 7 days 00:00:00
('Midsouth', 'organic') 7 days 00:00:00
('Nashville', 'conventional') 7 days 00:00:00
('Nashville', 'organic') 7 days 00:00:00
('NewOrleansMobile', 'conventional') 7 days 00:00:00
('NewOrleansMobile', 'organic') 7 days 00:00:00
('NewYork', 'conventional') 7 days 00:00:00
('NewYork', 'organic') 7 days 00:00:00
('Northeast', 'conventional') 7 days 00:00:00
('Northeast', 'organic') 7 days 00:00:00
('NorthernNewEngland', 'conventional') 7 days 00:00:00
('NorthernNewEngland', 'organic') 7 days 00:00:00
('Orlando', 'conventional') 7 days 00:00:00
('Orlando', 'organic') 7 days 00:00:00
('Philadelphia', 'conventional') 7 days 00:00:00
('Philadelphia', 'organic') 7 days 00:00:00
('PhoenixTucson', 'conventional') 7 days 00:00:00
('PhoenixTucson', 'organic') 7 days 00:00:00
('Pittsburgh', 'conventional') 7 days 00:00:00
('Pittsburgh', 'organic') 7 days 00:00:00
('Plains', 'conventional') 7 days 00:00:00
('Plains', 'organic') 7 days 00:00:00
('Portland', 'conventional') 7 days 00:00:00
('Portland', 'organic') 7 days 00:00:00
('RaleighGreensboro', 'conventional') 7 days 00:00:00
('RaleighGreensboro', 'organic') 7 days 00:00:00
('RichmondNorfolk', 'conventional') 7 days 00:00:00
('RichmondNorfolk', 'organic') 7 days 00:00:00
('Roanoke', 'conventional') 7 days 00:00:00
('Roanoke', 'organic') 7 days 00:00:00
('Sacramento', 'conventional') 7 days 00:00:00
('Sacramento', 'organic') 7 days 00:00:00
('SanDiego', 'conventional') 7 days 00:00:00
('SanDiego', 'organic') 7 days 00:00:00
('SanFrancisco', 'conventional') 7 days 00:00:00
('SanFrancisco', 'organic') 7 days 00:00:00
('Seattle', 'conventional') 7 days 00:00:00
('Seattle', 'organic') 7 days 00:00:00
('SouthCarolina', 'conventional') 7 days 00:00:00
('SouthCarolina', 'organic') 7 days 00:00:00
('SouthCentral', 'conventional') 7 days 00:00:00
('SouthCentral', 'organic') 7 days 00:00:00
('Southeast', 'conventional') 7 days 00:00:00
('Southeast', 'organic') 7 days 00:00:00
('Spokane', 'conventional') 7 days 00:00:00
('Spokane', 'organic') 7 days 00:00:00
('StLouis', 'conventional') 7 days 00:00:00
('StLouis', 'organic') 7 days 00:00:00
('Syracuse', 'conventional') 7 days 00:00:00
('Syracuse', 'organic') 7 days 00:00:00
('Tampa', 'conventional') 7 days 00:00:00
('Tampa', 'organic') 7 days 00:00:00
('TotalUS', 'conventional') 7 days 00:00:00
('TotalUS', 'organic') 7 days 00:00:00
('West', 'conventional') 7 days 00:00:00
('West', 'organic') 7 days 00:00:00
('WestTexNewMexico', 'conventional') 7 days 00:00:00
('WestTexNewMexico', 'organic') 21 days 00:00:00
###Markdown
It looks almost perfect - just organic avocados in WestTexNewMexico seem to be missing a couple of measurements.
###Code
name
group["Date"].sort_values().diff().value_counts()
###Output
_____no_output_____
###Markdown
So, in one case there's a 2-week jump, and in one case there's a 3-week jump.
###Code
group["Date"].sort_values().reset_index(drop=True).diff().sort_values()
###Output
_____no_output_____
###Markdown
We can see the anomalies occur at index 48 and 127. (Note: I had to `reset_index` because the index was not unique to each row.)
###Code
group["Date"].sort_values().reset_index(drop=True)[45:50]
###Output
_____no_output_____
###Markdown
We can spot the first anomaly: a 2-week jump from Nov 29, 2015 to Dec 13, 2015.
###Code
group["Date"].sort_values().reset_index(drop=True)[125:130]
###Output
_____no_output_____
###Markdown
And we can spot the second anomaly: a 3-week jump from June 11, 2017 to July 2, 2017. END SOLUTION 1(c)rubric={points:1}In the Rain in Australia dataset, each location was a different place in Australia. For this dataset, look at the names of the regions. Do you think the regions are also all distinct, or are there overlapping regions? Justify your answer by referencing the data. BEGIN SOLUTION
###Code
df["region"].unique()
###Output
_____no_output_____
###Markdown
There seems to be a hierarchical structure here: `TotalUS` is split into bigger regions like `West`, `Southeast`, `Northeast`, `Midsouth`; and `California` is split into cities like `Sacramento`, `SanDiego`, `LosAngeles`. It's a bit hard to figure out what's going on.
###Code
df.query("region == 'TotalUS' and type == 'conventional' and Date == '20150104'")["Total Volume"].values[0]
df.query("region != 'TotalUS' and type == 'conventional' and Date == '20150104'")["Total Volume"].sum()
###Output
_____no_output_____
###Markdown
Since the individual regions sum up to more than the total US, it seems that some of the other regions are double-counted, which is consistent with a hierarchical structure. For example, Los Angeles is probably double-counted because it's within `LosAngeles` but also within `California`. What a mess! END SOLUTION We will use the entire dataset despite any location-based weirdness uncovered in the previous part. We will be trying to forecast the avocado price, which is the `AveragePrice` column. The function below is adapted from Lecture 16, with some improvements.
###Code
def create_lag_feature(df, orig_feature, lag, groupby, new_feature_name=None, clip=False):
"""
Creates a new feature that's a lagged version of an existing one.
NOTE: assumes df is already sorted by the time columns and has unique indices.
Parameters
----------
df : pandas.core.frame.DataFrame
The dataset.
orig_feature : str
The column name of the feature we're copying
lag : int
The lag; negative lag means values from the past, positive lag means values from the future
groupby : list
Column(s) to group by in case df contains multiple time series
new_feature_name : str
Override the default name of the newly created column
clip : bool
If True, remove rows with a NaN values for the new feature
Returns
-------
pandas.core.frame.DataFrame
A new dataframe with the additional column added.
"""
if new_feature_name is None:
if lag < 0:
new_feature_name = "%s_lag%d" % (orig_feature, -lag)
else:
new_feature_name = "%s_ahead%d" % (orig_feature, lag)
new_df = df.assign(**{new_feature_name : np.nan})
for name, group in new_df.groupby(groupby):
if lag < 0: # take values from the past
new_df.loc[group.index[-lag:],new_feature_name] = group.iloc[:lag][orig_feature].values
else: # take values from the future
new_df.loc[group.index[:-lag], new_feature_name] = group.iloc[lag:][orig_feature].values
if clip:
new_df = new_df.dropna(subset=[new_feature_name])
return new_df
###Output
_____no_output_____
###Markdown
We first sort our dataframe properly:
###Code
df_sort = df.sort_values(by=["region", "type", "Date"]).reset_index(drop=True)
df_sort
###Output
_____no_output_____
###Markdown
We then call `create_lag_feature`. This creates a new column in the dataset `AveragePriceNextWeek`, which is the following week's `AveragePrice`. We have set `clip=True` which means it will remove rows where the target would be missing.
###Code
df_hastarget = create_lag_feature(df_sort, "AveragePrice", +1, ["region", "type"], "AveragePriceNextWeek", clip=True)
df_hastarget
###Output
_____no_output_____
###Markdown
I will now split the data:
###Code
df_train = df_hastarget[df_hastarget["Date"] <= split_date]
df_test = df_hastarget[df_hastarget["Date"] > split_date]
###Output
_____no_output_____
###Markdown
1(d)rubric={points:1}Why was it reasonable for me to do this operation _before_ splitting the data, despite the fact that this usually constitutes a violation of the Golden Rule? BEGIN SOLUTIONBecause we were only looking at the dates and creating the future feature. The difference is that the very last time point in our training set now contains the average price from the first time point in our test set. This is a realistic scenario if we were actually using this model to forecast, so it's not a major concern. END SOLUTION 1(e)rubric={points:1}Next we will want to build some models to forecast the average avocado price a week in advance. Before we start with any ML, let's try a baseline: just predicting the previous week's `AveragePrice`. What $R^2$ do you get with this approach? BEGIN SOLUTION
###Code
r2_score(df_train["AveragePriceNextWeek"], df_train["AveragePrice"])
r2_score(df_test["AveragePriceNextWeek"], df_test["AveragePrice"])
###Output
_____no_output_____
###Markdown
Interesting that this is a less effective prediction strategy in the later part of the dataset. I guess that means the price was fluctuating more in late 2017 / early 2018? END SOLUTION 1(f)rubric={points:10}Build some models to forecast the average avocado price. Experiment with a few approaches for encoding the date. Justify the decisions you make. Which approach worked best? Report your test score and briefly discuss your results. Benchmark: you should be able to achieve $R^2$ of at least 0.79 on the test set. I got to 0.80, but not beyond that. Let me know if you do better! Note: because we only have 2 splits here, we need to be a bit wary of overfitting on the test set. Try not to test on it a ridiculous number of times. If you are interested in some proper ways of dealing with this, see for example sklearn's [TimeSeriesSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html), which is like cross-validation for time series data (a small illustrative sketch follows below). BEGIN SOLUTION
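As a quick aside before the solution: the cell below is an illustrative sketch (not part of the graded solution) of how sklearn's `TimeSeriesSplit` produces multiple chronological train/validation splits instead of a single one; it only demonstrates the split indices on a toy array.
###Code
# Illustrative only: expanding-window splits with TimeSeriesSplit.
# Each successive fold trains on a longer prefix and validates on the block that follows it.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5)
for fold, (train_idx, valid_idx) in enumerate(tscv.split(np.arange(20))):
    print(f"fold {fold}: train up to index {train_idx[-1]}, validate on {valid_idx}")
###Output
_____no_output_____
###Markdown
Back to the solution, starting with a look at the training data: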
###Code
df_train.head()
(df_train.loc[:, "Small Bags": "XLarge Bags"].sum(axis=1) - df_train["Total Bags"]).abs().max()
###Output
_____no_output_____
###Markdown
It seems that `Total Bags` is (approximately) the sum of the other 3 bag features, so I will drop `Total Bags`.
###Code
(df_train.loc[:, "4046": "4770"].sum(axis=1) - df_train["Total Volume"]).abs().max()
###Output
_____no_output_____
###Markdown
It seems that `Total Volume` is _not_ the sum of the 3 avocado types, so I will keep all 4 columns.
###Code
df_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 15441 entries, 0 to 18222
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 15441 non-null datetime64[ns]
1 AveragePrice 15441 non-null float64
2 Total Volume 15441 non-null float64
3 4046 15441 non-null float64
4 4225 15441 non-null float64
5 4770 15441 non-null float64
6 Total Bags 15441 non-null float64
7 Small Bags 15441 non-null float64
8 Large Bags 15441 non-null float64
9 XLarge Bags 15441 non-null float64
10 type 15441 non-null object
11 year 15441 non-null int64
12 region 15441 non-null object
13 AveragePriceNextWeek 15441 non-null float64
dtypes: datetime64[ns](1), float64(10), int64(1), object(2)
memory usage: 1.8+ MB
###Markdown
It seems there are no null values, so I will not do any imputation. I will plot a single time series for exploration purposes:
###Code
df_train.query("region == 'TotalUS'").set_index("Date").groupby("type")["AveragePrice"].plot(legend=True);
df_train.query("region == 'TotalUS' and type == 'conventional'").plot(x="Date", y="Total Volume");
###Output
_____no_output_____
###Markdown
We see some seasonality in the total volume, but not much in the average price - interesting. I will not scale the `AveragePrice` because I am not scaling `AveragePriceNextWeek` either, and it may be helpful to keep them the same. Alternatively, it may have been effective to predict the _change_ in price instead of next week's price.
###Code
numeric_features = ["Total Volume", "4046", "4225", "4770", "Small Bags", "Large Bags", "XLarge Bags", "year"]
categorical_features = ["type", "region"]
keep_features = ["AveragePrice"]
drop_features = ["Date", "Total Bags"]
target_feature = "AveragePriceNextWeek"
###Output
_____no_output_____
###Markdown
Next, I grab the `preprocess_features` function from Lecture 16, with a minor modification to allow un-transformed features via `keep_features`:
###Code
def preprocess_features(df_train, df_test,
numeric_features,
categorical_features,
keep_features,
drop_features,
target_feature):
all_features = numeric_features + categorical_features + keep_features + drop_features + [target_feature]
if set(df_train.columns) != set(all_features):
print("Missing columns", set(df_train.columns) - set(all_features))
print("Extra columns", set(all_features) - set(df_train.columns))
raise Exception("Columns do not match")
# Put the columns in the order we want
df_train = df_train[all_features]
df_test = df_test[all_features]
numeric_transformer = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline([
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(sparse=False, drop='first'))
])
preprocessor = ColumnTransformer([
('numeric', numeric_transformer, numeric_features),
('categorical', categorical_transformer, categorical_features)
], remainder='passthrough')
preprocessor.fit(df_train);
if len(categorical_features) > 0:
ohe = preprocessor.named_transformers_['categorical'].named_steps['onehot']
ohe_feature_names = list(ohe.get_feature_names(categorical_features))
new_columns = numeric_features + ohe_feature_names + keep_features + drop_features + [target_feature]
else:
new_columns = all_features
X_train_enc = pd.DataFrame(preprocessor.transform(df_train), index=df_train.index, columns=new_columns)
X_test_enc = pd.DataFrame(preprocessor.transform(df_test), index=df_test.index, columns=new_columns)
X_train_enc = X_train_enc.drop(columns=drop_features + [target_feature])
X_test_enc = X_test_enc.drop( columns=drop_features + [target_feature])
y_train = df_train[target_feature]
y_test = df_test[ target_feature]
return X_train_enc, y_train, X_test_enc, y_test
df_train_enc, y_train, df_test_enc, y_test = preprocess_features(df_train, df_test,
numeric_features,
categorical_features,
keep_features,
drop_features,
target_feature)
df_train_enc.head()
lr = Ridge()
lr.fit(df_train_enc, y_train);
lr.score(df_train_enc, y_train)
lr.score(df_test_enc, y_test)
lr_coef = pd.DataFrame(data=np.squeeze(lr.coef_), index=df_train_enc.columns, columns=["Coef"])
lr_coef.sort_values(by="Coef", ascending=False)
###Output
_____no_output_____
###Markdown
This is not a very impressive showing. We're doing almost the same as the baseline. Let's see if encoding the date helps at all. We'll try to OHE the month.
###Code
df_train_month = df_train.assign(Month=df_train["Date"].apply(lambda x: x.month))
df_test_month = df_test.assign( Month=df_test[ "Date"].apply(lambda x: x.month))
df_train_month_enc, y_train, df_test_month_enc, y_test = preprocess_features(df_train_month, df_test_month,
numeric_features,
categorical_features + ["Month"],
keep_features,
drop_features,
target_feature)
df_train_month_enc.head()
lr = Ridge()
lr.fit(df_train_month_enc, y_train);
lr.score(df_train_month_enc, y_train)
lr.score(df_test_month_enc, y_test)
###Output
_____no_output_____
###Markdown
A tiny bit better.
###Code
pd.DataFrame(data=np.squeeze(lr.coef_), index=df_train_month_enc.columns, columns=["Coef"]).sort_values(by="Coef", ascending=False)
###Output
_____no_output_____
###Markdown
Let's add some lag features. I'm arbitrarily deciding on 4 lags for `AveragePrice` (the most important feature).
###Code
def add_lags(df):
df = create_lag_feature(df, "AveragePrice", -1, ["region", "type"])
df = create_lag_feature(df, "AveragePrice", -2, ["region", "type"])
df = create_lag_feature(df, "AveragePrice", -3, ["region", "type"])
df = create_lag_feature(df, "AveragePrice", -4, ["region", "type"])
return df
df_train_month_lag = add_lags(df_train_month)
df_test_month_lag = add_lags(df_test_month)
df_train_month_lag
df_train_month_lag_enc, y_train, df_test_month_lag_enc, y_test = preprocess_features(df_train_month_lag, df_test_month_lag,
numeric_features + ["AveragePrice_lag1", "AveragePrice_lag2", "AveragePrice_lag3", "AveragePrice_lag4"],
categorical_features + ["Month"],
keep_features,
drop_features,
target_feature)
lr = Ridge()
lr.fit(df_train_month_lag_enc, y_train);
lr.score(df_train_month_lag_enc, y_train)
lr.score(df_test_month_lag_enc, y_test)
###Output
_____no_output_____
###Markdown
This did not seem to help.
###Code
pd.DataFrame(data=np.squeeze(lr.coef_), index=df_train_month_lag_enc.columns, columns=["Coef"]).sort_values(by="Coef", ascending=False)
###Output
_____no_output_____
###Markdown
We can also try a random forest:
###Code
rf = RandomForestRegressor()
rf.fit(df_train_month_lag_enc, y_train);
rf.score(df_train_month_lag_enc, y_train)
rf.score(df_test_month_lag_enc, y_test)
###Output
_____no_output_____
###Markdown
For the random forest it may be helpful to model the difference between this week's price and next week's price. The linear model does not care about this because it just corresponds to changing the coefficient corresponding to `AveragePrice` by 1, but for the random forest it may help:
###Code
rf = RandomForestRegressor()
rf.fit(df_train_month_lag_enc, y_train - df_train_month_lag_enc["AveragePrice"]);
r2_score(y_train, rf.predict(df_train_month_lag_enc) + df_train_month_lag_enc["AveragePrice"])
r2_score(y_test, rf.predict(df_test_month_lag_enc) + df_test_month_lag_enc["AveragePrice"])
###Output
_____no_output_____
###Markdown
This massively overfits when we do this shifting. Let's try a simpler model...
###Code
rf = RandomForestRegressor(max_depth=8)
rf.fit(df_train_month_lag_enc, y_train - df_train_month_lag_enc["AveragePrice"]);
r2_score(y_train, rf.predict(df_train_month_lag_enc) + df_train_month_lag_enc["AveragePrice"])
r2_score(y_test, rf.predict(df_test_month_lag_enc) + df_test_month_lag_enc["AveragePrice"])
###Output
_____no_output_____
###Markdown
Doesn't really help. Also, we can just confirm that this shifting has no effect on the linear model (well, a small effect because it's `Ridge` instead of `LinearRegression`, but small):
###Code
lr = Ridge()
lr.fit(df_train_month_lag_enc, y_train - df_train_month_lag_enc["AveragePrice"]);
r2_score(y_train, lr.predict(df_train_month_lag_enc) + df_train_month_lag_enc["AveragePrice"])
r2_score(y_test, lr.predict(df_test_month_lag_enc) + df_test_month_lag_enc["AveragePrice"])
###Output
_____no_output_____
###Markdown
Indeed, this is essentially the same score we had before. Overall, adding the month helped, but adding the lagged price was surprisingly unhelpful. Perhaps lagged versions of other features would have been better, or other representations of the time of year, or dealing with the regions and avocado types a bit more carefully. END SOLUTION 1(g)rubric={points:3}We talked a little bit about _seasonality_, which is the idea of a periodic component to the time series. For example, in Lecture 16 we attempted to capture this by encoding the month. Something we didn't discuss is _trends_, which are long-term variations in the quantity of interest. Aside from the effects of climate change, the amount of rain in Australia is likely to vary during the year but less likely to have long-term trends over the years. Avocado prices, on the other hand, could easily exhibit trends: for example avocados may just cost more in 2020 than they did in 2015. Briefly discuss in ~1 paragraph: to what extent, if any, was your model above able to account for seasonality? What about trends? BEGIN SOLUTIONI tried to take seasonality into account by having the month as an OHE variable. As far as trends are concerned, the year is also a numeric variable in the model, so it could learn that the price in 2017 is higher than in 2015, say. However, there are very few years in the training set (2015, 16, 17), so that is not a lot of data to learn from. Perhaps including the number of months since the start of the dataset, or something like that, would enable the model to do a bit better with trends (see the short sketch after the next cell). Nonetheless, extrapolating is very hard so we can't necessarily trust our models' handling of trends.
###Code
pd.DataFrame(data=np.squeeze(lr.coef_), index=df_train_month_lag_enc.columns, columns=["Coef"]).loc["year"]
###Output
_____no_output_____
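###Markdown
To make the trend suggestion above concrete, here is a minimal sketch (not part of the graded solution) of the "months since the start of the dataset" idea; `months_since_start` and `add_months_since_start` are names introduced here purely for illustration. Such a feature gives a linear model a single number that grows steadily over time, which is an easier handle on trend than a raw year column with only three distinct values.
###Code
# Illustrative only: add a simple trend feature counting calendar months since the
# first training date; this column could then be appended to numeric_features.
start = df_train_month["Date"].min()
def add_months_since_start(df, start=start):
    # Whole number of calendar months between each row's Date and the first training date.
    return df.assign(
        months_since_start=(df["Date"].dt.year - start.year) * 12
        + (df["Date"].dt.month - start.month)
    )
df_train_month_trend = add_months_since_start(df_train_month)
df_test_month_trend = add_months_since_start(df_test_month)
df_train_month_trend[["Date", "months_since_start"]].head()
###Output
_____no_output_____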
###Markdown
CPSC 330 hw7
###Code
import numpy as np
import pandas as pd
### BEGIN SOLUTION
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OrdinalEncoder, OneHotEncoder
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.metrics import r2_score
### END SOLUTION
###Output
_____no_output_____
###Markdown
Instructionsrubric={points:5}Follow the [homework submission instructions](https://github.students.cs.ubc.ca/cpsc330-2019w-t2/home/blob/master/docs/homework_instructions.md). Exercise 1: time series predictionIn this exercise we'll be looking at a [dataset of avocado prices](https://www.kaggle.com/neuromusic/avocado-prices). You should start by downloading the dataset. As usual, please do not commit it to your repos.
###Code
df = pd.read_csv("avocado.csv", parse_dates=["Date"], index_col=0)
df.head()
df.shape
df["Date"].min()
df["Date"].max()
###Output
_____no_output_____
###Markdown
It looks like the data ranges from the start of 2015 to March 2018 (~2 years ago), for a total of 3.25 years or so. Let's split the data so that we have a 6 months of test data.
###Code
split_date = '20170925'
df_train = df[df["Date"] <= split_date]
df_test = df[df["Date"] > split_date]
assert len(df_train) + len(df_test) == len(df)
###Output
_____no_output_____
###Markdown
1(a)rubric={points:3}In the Rain is Australia dataset from Lecture 16, we had different measurements for each Location. What about this dataset: for which categorical feature(s), if any, do we have separate measurements? Justify your answer by referencing the dataset. BEGIN SOLUTION
###Code
df.sort_values(by="Date").head()
###Output
_____no_output_____
###Markdown
From the above, we definitely see measurements on the same day at different regresion. Let's now group by region.
###Code
df.sort_values(by=["region", "Date"]).head()
###Output
_____no_output_____
###Markdown
From the above we see that, even in Albany, we have two measurements on the same date. This seems to be due to the type of avocado.
###Code
df.sort_values(by=["region", "type", "Date"]).head()
###Output
_____no_output_____
###Markdown
Great, now we have a sequence of dates with a single row per date. So, the answer is that we have a separate timeseries for each combination of `region` and `type`. END SOLUTION 1(b)rubric={points:3}In the Rain in Australia dataset, the measurements were generally equally spaced but with some exceptions. How about with this dataset? Justify your answer by referencing the dataset. BEGIN SOLUTION I think it's not unreasonable to do this on `df` rather than `df_train`, but either way is fine.
###Code
for name, group in df.groupby(['region', 'type']):
print("%-40s %s" % (name, group["Date"].sort_values().diff().min()))
for name, group in df.groupby(['region', 'type']):
print("%-40s %s" % (name, group["Date"].sort_values().diff().max()))
###Output
('Albany', 'conventional') 7 days 00:00:00
('Albany', 'organic') 7 days 00:00:00
('Atlanta', 'conventional') 7 days 00:00:00
('Atlanta', 'organic') 7 days 00:00:00
('BaltimoreWashington', 'conventional') 7 days 00:00:00
('BaltimoreWashington', 'organic') 7 days 00:00:00
('Boise', 'conventional') 7 days 00:00:00
('Boise', 'organic') 7 days 00:00:00
('Boston', 'conventional') 7 days 00:00:00
('Boston', 'organic') 7 days 00:00:00
('BuffaloRochester', 'conventional') 7 days 00:00:00
('BuffaloRochester', 'organic') 7 days 00:00:00
('California', 'conventional') 7 days 00:00:00
('California', 'organic') 7 days 00:00:00
('Charlotte', 'conventional') 7 days 00:00:00
('Charlotte', 'organic') 7 days 00:00:00
('Chicago', 'conventional') 7 days 00:00:00
('Chicago', 'organic') 7 days 00:00:00
('CincinnatiDayton', 'conventional') 7 days 00:00:00
('CincinnatiDayton', 'organic') 7 days 00:00:00
('Columbus', 'conventional') 7 days 00:00:00
('Columbus', 'organic') 7 days 00:00:00
('DallasFtWorth', 'conventional') 7 days 00:00:00
('DallasFtWorth', 'organic') 7 days 00:00:00
('Denver', 'conventional') 7 days 00:00:00
('Denver', 'organic') 7 days 00:00:00
('Detroit', 'conventional') 7 days 00:00:00
('Detroit', 'organic') 7 days 00:00:00
('GrandRapids', 'conventional') 7 days 00:00:00
('GrandRapids', 'organic') 7 days 00:00:00
('GreatLakes', 'conventional') 7 days 00:00:00
('GreatLakes', 'organic') 7 days 00:00:00
('HarrisburgScranton', 'conventional') 7 days 00:00:00
('HarrisburgScranton', 'organic') 7 days 00:00:00
('HartfordSpringfield', 'conventional') 7 days 00:00:00
('HartfordSpringfield', 'organic') 7 days 00:00:00
('Houston', 'conventional') 7 days 00:00:00
('Houston', 'organic') 7 days 00:00:00
('Indianapolis', 'conventional') 7 days 00:00:00
('Indianapolis', 'organic') 7 days 00:00:00
('Jacksonville', 'conventional') 7 days 00:00:00
('Jacksonville', 'organic') 7 days 00:00:00
('LasVegas', 'conventional') 7 days 00:00:00
('LasVegas', 'organic') 7 days 00:00:00
('LosAngeles', 'conventional') 7 days 00:00:00
('LosAngeles', 'organic') 7 days 00:00:00
('Louisville', 'conventional') 7 days 00:00:00
('Louisville', 'organic') 7 days 00:00:00
('MiamiFtLauderdale', 'conventional') 7 days 00:00:00
('MiamiFtLauderdale', 'organic') 7 days 00:00:00
('Midsouth', 'conventional') 7 days 00:00:00
('Midsouth', 'organic') 7 days 00:00:00
('Nashville', 'conventional') 7 days 00:00:00
('Nashville', 'organic') 7 days 00:00:00
('NewOrleansMobile', 'conventional') 7 days 00:00:00
('NewOrleansMobile', 'organic') 7 days 00:00:00
('NewYork', 'conventional') 7 days 00:00:00
('NewYork', 'organic') 7 days 00:00:00
('Northeast', 'conventional') 7 days 00:00:00
('Northeast', 'organic') 7 days 00:00:00
('NorthernNewEngland', 'conventional') 7 days 00:00:00
('NorthernNewEngland', 'organic') 7 days 00:00:00
('Orlando', 'conventional') 7 days 00:00:00
('Orlando', 'organic') 7 days 00:00:00
('Philadelphia', 'conventional') 7 days 00:00:00
('Philadelphia', 'organic') 7 days 00:00:00
('PhoenixTucson', 'conventional') 7 days 00:00:00
('PhoenixTucson', 'organic') 7 days 00:00:00
('Pittsburgh', 'conventional') 7 days 00:00:00
('Pittsburgh', 'organic') 7 days 00:00:00
('Plains', 'conventional') 7 days 00:00:00
('Plains', 'organic') 7 days 00:00:00
('Portland', 'conventional') 7 days 00:00:00
('Portland', 'organic') 7 days 00:00:00
('RaleighGreensboro', 'conventional') 7 days 00:00:00
('RaleighGreensboro', 'organic') 7 days 00:00:00
('RichmondNorfolk', 'conventional') 7 days 00:00:00
('RichmondNorfolk', 'organic') 7 days 00:00:00
('Roanoke', 'conventional') 7 days 00:00:00
('Roanoke', 'organic') 7 days 00:00:00
('Sacramento', 'conventional') 7 days 00:00:00
('Sacramento', 'organic') 7 days 00:00:00
('SanDiego', 'conventional') 7 days 00:00:00
('SanDiego', 'organic') 7 days 00:00:00
('SanFrancisco', 'conventional') 7 days 00:00:00
('SanFrancisco', 'organic') 7 days 00:00:00
('Seattle', 'conventional') 7 days 00:00:00
('Seattle', 'organic') 7 days 00:00:00
('SouthCarolina', 'conventional') 7 days 00:00:00
('SouthCarolina', 'organic') 7 days 00:00:00
('SouthCentral', 'conventional') 7 days 00:00:00
('SouthCentral', 'organic') 7 days 00:00:00
('Southeast', 'conventional') 7 days 00:00:00
('Southeast', 'organic') 7 days 00:00:00
('Spokane', 'conventional') 7 days 00:00:00
('Spokane', 'organic') 7 days 00:00:00
('StLouis', 'conventional') 7 days 00:00:00
('StLouis', 'organic') 7 days 00:00:00
('Syracuse', 'conventional') 7 days 00:00:00
('Syracuse', 'organic') 7 days 00:00:00
('Tampa', 'conventional') 7 days 00:00:00
('Tampa', 'organic') 7 days 00:00:00
('TotalUS', 'conventional') 7 days 00:00:00
('TotalUS', 'organic') 7 days 00:00:00
('West', 'conventional') 7 days 00:00:00
('West', 'organic') 7 days 00:00:00
('WestTexNewMexico', 'conventional') 7 days 00:00:00
('WestTexNewMexico', 'organic') 21 days 00:00:00
###Markdown
It looks almost perfect - just organic avocados in WestTexNewMexico seems to be missing a couple measurements.
###Code
name
group["Date"].sort_values().diff().value_counts()
###Output
_____no_output_____
###Markdown
So, in one case there's a 2-week jump, and in one cast there's a 3-week jump.
###Code
group["Date"].sort_values().reset_index(drop=True).diff().sort_values()
###Output
_____no_output_____
###Markdown
We can see the anomalies occur at index 48 and 127. (Note: I had to `reset_index` because the index was not unique to each row.)
###Code
group["Date"].sort_values().reset_index(drop=True)[45:50]
###Output
_____no_output_____
###Markdown
We can spot the first anomaly: a 2-week jump from Nov 29, 2015 to Dec 13, 2015.
###Code
group["Date"].sort_values().reset_index(drop=True)[125:130]
###Output
_____no_output_____
###Markdown
And we can spot the second anomaly: a 3-week jump from June 11, 2017 to July 2, 2017. END SOLUTION 1(c)rubric={points:1}In the Rain is Australia dataset, each location was a different place in Australia. For this dataset, look at the names of the regions. Do you think the regions are also all distinct, or are there overlapping regions? Justify your answer by referencing the data. BEGIN SOLUTION
###Code
df["region"].unique()
###Output
_____no_output_____
###Markdown
There seems to be a hierarchical structure here: `TotalUS` is split into bigger regions like `West`, `Southeast`, `Northeast`, `Midsouth`; and `California` is split into cities like `Sacramento`, `SanDiego`, `LosAngeles`. It's a bit hard to figure out what's going on.
###Code
df.query("region == 'TotalUS' and type == 'conventional' and Date == '20150104'")["Total Volume"].values[0]
df.query("region != 'TotalUS' and type == 'conventional' and Date == '20150104'")["Total Volume"].sum()
###Output
_____no_output_____
###Markdown
Since the individual regions sum up to more than the total US, it seems that some of the other regions are double-counted, which is consistent with a hierarchical structure. For example, Los Angeles is probalby double counted because it's within `LosAngeles` but also within `California`. What a mess! END SOLUTION We will use the entire dataset despite any location-based weirdness uncovered in the previous part.We will be trying to forecast the avocado price, which is the `AveragePrice` column. The function below is adapted from Lecture 16, with some improvements.
###Code
def create_lag_feature(df, orig_feature, lag, groupby, new_feature_name=None, clip=False):
"""
Creates a new feature that's a lagged version of an existing one.
NOTE: assumes df is already sorted by the time columns and has unique indices.
Parameters
----------
df : pandas.core.frame.DataFrame
The dataset.
orig_feature : str
The column name of the feature we're copying
lag : int
The lag; negative lag means values from the past, positive lag means values from the future
groupby : list
Column(s) to group by in case df contains multiple time series
new_feature_name : str
Override the default name of the newly created column
clip : bool
If True, remove rows with a NaN values for the new feature
Returns
-------
pandas.core.frame.DataFrame
A new dataframe with the additional column added.
"""
if new_feature_name is None:
if lag < 0:
new_feature_name = "%s_lag%d" % (orig_feature, -lag)
else:
new_feature_name = "%s_ahead%d" % (orig_feature, lag)
new_df = df.assign(**{new_feature_name : np.nan})
for name, group in new_df.groupby(groupby):
if lag < 0: # take values from the past
new_df.loc[group.index[-lag:],new_feature_name] = group.iloc[:lag][orig_feature].values
else: # take values from the future
new_df.loc[group.index[:-lag], new_feature_name] = group.iloc[lag:][orig_feature].values
if clip:
new_df = new_df.dropna(subset=[new_feature_name])
return new_df
###Output
_____no_output_____
###Markdown
We first sort our dataframe properly:
###Code
df_sort = df.sort_values(by=["region", "type", "Date"]).reset_index(drop=True)
df_sort
###Output
_____no_output_____
###Markdown
We then call `create_lag_feature`. This creates a new column in the dataset `AveragePriceNextWeek`, which is the following week's `AveragePrice`. We have set `clip=True` which means it will remove rows where the target would be missing.
###Code
df_hastarget = create_lag_feature(df_sort, "AveragePrice", +1, ["region", "type"], "AveragePriceNextWeek", clip=True)
df_hastarget
###Output
_____no_output_____
###Markdown
I will now split the data:
###Code
df_train = df_hastarget[df_hastarget["Date"] <= split_date]
df_test = df_hastarget[df_hastarget["Date"] > split_date]
###Output
_____no_output_____
###Markdown
1(d)rubric={points:1}Why was it reasonable for me to do this operation _before_ splitting the data, despite the fact that this usually constitutes a violation of the Golden Rule? BEGIN SOLUTIONBecause we were only looking at the dates and creating the future feature. The difference is that the very last time point in our training set now contains the average price from the first time point in our test set. This is a realistic scenario if we wre actually using this model to forecast, so it's not a major concern. END SOLUTION 1(e)rubric={points:1}Next we will want to build some models to forecast the average avocado price a week in advance. Before we start with any ML, let's try a baseline: just predicting the previous week's `AveragePrice`. What $R^2$ do you get with this approach? BEGIN SOLUTION
###Code
r2_score(df_train["AveragePriceNextWeek"], df_train["AveragePrice"])
r2_score(df_test["AveragePriceNextWeek"], df_test["AveragePrice"])
###Output
_____no_output_____
###Markdown
Interesting that this is a less effective prediction strategy in the later part of the dataset. I guess that means the price was fluctuating more in late 2017 / early 2018? END SOLUTION 1(f)rubric={points:10}Build some models to forecast the average avocado price. Experiment with a few approachs for encoding the date. Justify the decisions you make. Which approach worked best? Report your test score and briefly discuss your results.Benchmark: you should be able to achieve $R^2$ of at least 0.79 on the test set. I got to 0.80, but not beyond that. Let me know if you do better!Note: because we only have 2 splits here, we need to be a bit wary of overfitting on the test set. Try not to test on it a ridiculous number of times. If you are interested in some proper ways of dealing with this, see for example sklearn's [TimeSeriesSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html), which is like cross-validation for time series data. BEGIN SOLUTION
###Code
df_train.head()
(df_train.loc[:, "Small Bags": "XLarge Bags"].sum(axis=1) - df_train["Total Bags"]).abs().max()
###Output
_____no_output_____
###Markdown
It seems that `Total Bags` is (approximately) the sum of the other 3 bag features, so I will drop `Total Bags`.
###Code
(df_train.loc[:, "4046": "4770"].sum(axis=1) - df_train["Total Volume"]).abs().max()
###Output
_____no_output_____
###Markdown
It seems that `Total Volume` is _not_ the sum of the 3 avocado types, so I will keep all 4 columns.
###Code
df_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 15441 entries, 0 to 18222
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 15441 non-null datetime64[ns]
1 AveragePrice 15441 non-null float64
2 Total Volume 15441 non-null float64
3 4046 15441 non-null float64
4 4225 15441 non-null float64
5 4770 15441 non-null float64
6 Total Bags 15441 non-null float64
7 Small Bags 15441 non-null float64
8 Large Bags 15441 non-null float64
9 XLarge Bags 15441 non-null float64
10 type 15441 non-null object
11 year 15441 non-null int64
12 region 15441 non-null object
13 AveragePriceNextWeek 15441 non-null float64
dtypes: datetime64[ns](1), float64(10), int64(1), object(2)
memory usage: 1.8+ MB
###Markdown
It seems there are no null values, so I will not do any imputation. Will plot a single time series for exploration purposes:
###Code
df_train.query("region == 'TotalUS'").set_index("Date").groupby("type")["AveragePrice"].plot(legend=True);
df_train.query("region == 'TotalUS' and type == 'conventional'").plot(x="Date", y="Total Volume");
###Output
_____no_output_____
###Markdown
We see some seasonality in the total volume, but not much in the average price - interesting. I will not scale the `AveragePrice` because I am not scaling `AveragePriceNextWeek` either, and it may be helpful to keep them the same. Alternatively, it may have been effective to predict the _change_ in price instead of next week's price (a sketch of that variant follows).
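For reference, a hedged sketch of how the target could be recast as a week-over-week change. This is not used below; it relies on the `df_hastarget` dataframe created earlier in this notebook.
```python
# Alternative target (not used below): the week-over-week change in price.
df_delta = df_hastarget.assign(
    PriceChange=df_hastarget["AveragePriceNextWeek"] - df_hastarget["AveragePrice"]
)
# A model would then predict PriceChange; the price forecast is
# AveragePrice + predicted PriceChange.
```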
###Code
numeric_features = ["Total Volume", "4046", "4225", "4770", "Small Bags", "Large Bags", "XLarge Bags", "year"]
categorical_features = ["type", "region"]
keep_features = ["AveragePrice"]
drop_features = ["Date", "Total Bags"]
target_feature = "AveragePriceNextWeek"
###Output
_____no_output_____
###Markdown
Next, I grab the `preprocess_features` function from Lecture 16, with a minor modification to allow un-transformed features via `keep_features`:
###Code
def preprocess_features(df_train, df_test,
numeric_features,
categorical_features,
keep_features,
drop_features,
target_feature):
all_features = numeric_features + categorical_features + keep_features + drop_features + [target_feature]
if set(df_train.columns) != set(all_features):
print("Missing columns", set(df_train.columns) - set(all_features))
print("Extra columns", set(all_features) - set(df_train.columns))
raise Exception("Columns do not match")
# Put the columns in the order we want
df_train = df_train[all_features]
df_test = df_test[all_features]
numeric_transformer = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline([
('imputer', SimpleImputer(strategy='most_frequent')),
('onehot', OneHotEncoder(sparse=False, drop='first'))
])
preprocessor = ColumnTransformer([
('numeric', numeric_transformer, numeric_features),
('categorical', categorical_transformer, categorical_features)
], remainder='passthrough')
preprocessor.fit(df_train);
if len(categorical_features) > 0:
ohe = preprocessor.named_transformers_['categorical'].named_steps['onehot']
ohe_feature_names = list(ohe.get_feature_names(categorical_features))
new_columns = numeric_features + ohe_feature_names + keep_features + drop_features + [target_feature]
else:
new_columns = all_features
X_train_enc = pd.DataFrame(preprocessor.transform(df_train), index=df_train.index, columns=new_columns)
X_test_enc = pd.DataFrame(preprocessor.transform(df_test), index=df_test.index, columns=new_columns)
X_train_enc = X_train_enc.drop(columns=drop_features + [target_feature])
X_test_enc = X_test_enc.drop( columns=drop_features + [target_feature])
y_train = df_train[target_feature]
y_test = df_test[ target_feature]
return X_train_enc, y_train, X_test_enc, y_test
df_train_enc, y_train, df_test_enc, y_test = preprocess_features(df_train, df_test,
numeric_features,
categorical_features,
keep_features,
drop_features,
target_feature)
df_train_enc.head()
lr = Ridge()
lr.fit(df_train_enc, y_train);
lr.score(df_train_enc, y_train)
lr.score(df_test_enc, y_test)
lr_coef = pd.DataFrame(data=np.squeeze(lr.coef_), index=df_train_enc.columns, columns=["Coef"])
lr_coef.sort_values(by="Coef", ascending=False)
###Output
_____no_output_____
###Markdown
This is not a very impressive showing. We're doing almost the same as the baseline. Let's see if encoding the date helps at all. We'll try to OHE the month.
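Another encoding one could experiment with (not used in this solution) is a cyclical sin/cos transform of the month, which keeps December and January close together. A sketch, assuming a dataframe with a datetime `Date` column such as `df_train`:
```python
import numpy as np

def add_cyclic_month(df):
    """Add sin/cos month features so that month 12 and month 1 end up close.
    Assumes df has a datetime64 'Date' column, as df_train does here."""
    month = df["Date"].dt.month
    return df.assign(
        month_sin=np.sin(2 * np.pi * month / 12),
        month_cos=np.cos(2 * np.pi * month / 12),
    )
```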
###Code
df_train_month = df_train.assign(Month=df_train["Date"].apply(lambda x: x.month))
df_test_month = df_test.assign( Month=df_test[ "Date"].apply(lambda x: x.month))
df_train_month_enc, y_train, df_test_month_enc, y_test = preprocess_features(df_train_month, df_test_month,
numeric_features,
categorical_features + ["Month"],
keep_features,
drop_features,
target_feature)
df_train_month_enc.head()
lr = Ridge()
lr.fit(df_train_month_enc, y_train);
lr.score(df_train_month_enc, y_train)
lr.score(df_test_month_enc, y_test)
###Output
_____no_output_____ |
Chapter02/Exercise2.17/Exercise 2.17.ipynb | ###Markdown
Implementing the Lesk algorithm from scratch using string similarity and text vectorization
###Code
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
from nltk import word_tokenize
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.datasets import fetch_20newsgroups
import numpy as np
def get_tf_idf_vectors(corpus):
tfidf_vectorizer = TfidfVectorizer()
tfidf_results = tfidf_vectorizer.fit_transform(corpus).todense()
return tfidf_results
def to_lower_case(corpus):
lowercase_corpus = [x.lower() for x in corpus]
return lowercase_corpus
def find_sentence_defnition(sent_vector,defnition_vectors):
"""
This method will find cosine similarity of sentence with
the possible definitions and return the one with highest similarity score
along with the similarity score.
"""
result_dict = {}
for defnition_id,def_vector in defnition_vectors.items():
sim = cosine_similarity(sent_vector,def_vector)
result_dict[defnition_id] = sim[0][0]
defnition = sorted(result_dict.items(), key=lambda x: x[1], reverse=True)[0]
return defnition[0],defnition[1]
corpus = ["On the banks of river Ganga, there lies the scent of spirituality",
"An institute where people can store extra cash or money.",
"The land alongside or sloping down to a river or lake"
"What you do defines you",
"Your deeds define you",
"Once upon a time there lived a king.",
"Who is your queen?",
"He is desperate",
"Is he not desperate?"]
lower_case_corpus = to_lower_case(corpus)
corpus_tf_idf = get_tf_idf_vectors(lower_case_corpus)
sent_vector = corpus_tf_idf[0]
defnition_vectors = {'def1':corpus_tf_idf[1],'def2':corpus_tf_idf[2]}
defnition_id, score = find_sentence_defnition(sent_vector,defnition_vectors)
print("The defnition of word {} is {} with similarity of {}".format('bank',defnition_id,score))
###Output
The defnition of word bank is def2 with similarity of 0.14419130686278897
|
Guides/python/excelToPandas.ipynb | ###Markdown
Import Excel or CSV To PandasThis file covers the process of importing excel and csv files into a pandas dataframe. Note: the methods for importing excel and csv files are almost identical; the major difference is the function used. This notebook serves as a tutorial for both.__Importing Excel (xlsx):__ The function used is [read_excel](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html). __Importing comma separated values (csv):__ The function used is [read_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html). Step 1Let's start by importing pandas and os. We will be using pandas to create a dataframe from our data, and os to get file paths.
###Code
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
Step 2Now let's create a variable, filePath, that is a string containing the full path to the file we want to import. The code below looks in the current working directory for a file name supplied by the user. This isn't necessary, and is just included for convenience. Alternatively, the user can put a full path into the filePath variable.
###Code
dirPath = os.path.realpath('.')
fileName = 'assets/coolingExample.xlsx'
filePath = os.path.join(dirPath, fileName)
###Output
_____no_output_____
###Markdown
Step 3Great! Now let's read the data into a dataframe called df. This will allow our data to be accessed by the strings in the header row.
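As noted in the introduction, the csv route is nearly identical. A sketch, assuming a hypothetical file `assets/coolingExample.csv` with the same layout:
```python
# Sketch only: the csv equivalent of the excel import below.
# 'assets/coolingExample.csv' is a hypothetical file name.
df_csv = pd.read_csv(os.path.join(dirPath, 'assets/coolingExample.csv'), header=0)
df_csv.head()
```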
###Code
df = pd.read_excel(filePath,header=0)
df.head()
###Output
_____no_output_____
###Markdown
Our data is now accessible by a key value. The keys are the column headers in the dataframe. In this example, those are 'Time - Dev2/ai0' and 'Temperature - Dev2/ai0'. For example, let's access the data in the first column.
###Code
df[df.columns[0]]
###Output
_____no_output_____
###Markdown
What would happen if we tried to access the data with an invalid key, say 1 for example? Let's try it to find out. Note: I enclose this code in a try/except statement in order to prevent a huge error traceback from being generated.
###Code
try:
df[1]
except KeyError:
print("KeyError: 1 - not a valid key")
###Output
KeyError: 1 - not a valid key
###Markdown
So let's say you have a large dataframe with unknown columns. There is a simple way to index them without having prior knowledge of what the dataframe columns are: the `columns` attribute in pandas, used in the loop below.
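As an aside (not in the original guide), purely positional access is also possible with `iloc`, which takes row and column positions instead of labels:
```python
# Grab the first column by position, without knowing its name
first_col = df.iloc[:, 0]
print(first_col.head())
```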
###Code
cols = df.columns
for col in cols:
print(df[col])
###Output
0 11:17:30
1 11:17:30
2 11:17:30
3 11:17:30
4 11:17:30
5 11:17:30
6 11:17:30
7 11:17:30
8 11:17:30
9 11:17:30
10 11:17:31
11 11:17:31
12 11:17:31
13 11:17:31
14 11:17:31
15 11:17:31
16 11:17:31
17 11:17:31
18 11:17:31
19 11:17:31
20 11:17:32
21 11:17:32
22 11:17:32
23 11:17:32
24 11:17:32
25 11:17:32
26 11:17:32
27 11:17:32
28 11:17:32
29 11:17:32
...
2439 11:21:33
2440 11:21:34
2441 11:21:34
2442 11:21:34
2443 11:21:34
2444 11:21:34
2445 11:21:34
2446 11:21:34
2447 11:21:34
2448 11:21:34
2449 11:21:34
2450 11:21:35
2451 11:21:35
2452 11:21:35
2453 11:21:35
2454 11:21:35
2455 11:21:35
2456 11:21:35
2457 11:21:35
2458 11:21:35
2459 11:21:35
2460 11:21:36
2461 11:21:36
2462 11:21:36
2463 11:21:36
2464 11:21:36
2465 11:21:36
2466 11:21:36
2467 11:21:36
2468 11:21:36
Name: Time - Dev2/ai0, dtype: object
0 85.4
1 85.6
2 84.9
3 85.8
4 85.2
5 85.1
6 86.1
7 85.1
8 85.0
9 85.8
10 85.0
11 85.6
12 85.1
13 85.2
14 85.1
15 85.1
16 85.8
17 85.1
18 85.6
19 85.1
20 86.1
21 86.4
22 85.8
23 86.6
24 86.1
25 85.8
26 85.9
27 86.1
28 85.5
29 85.8
...
2439 4.2
2440 3.1
2441 3.8
2442 5.1
2443 4.4
2444 4.3
2445 4.7
2446 4.3
2447 4.4
2448 4.4
2449 4.4
2450 4.0
2451 2.7
2452 4.6
2453 4.8
2454 3.5
2455 4.2
2456 3.2
2457 3.7
2458 3.8
2459 3.5
2460 3.4
2461 3.9
2462 3.4
2463 4.0
2464 4.1
2465 3.5
2466 3.5
2467 3.1
2468 3.9
Name: Temperature - Dev2/ai0, dtype: float64
###Markdown
Data Manipulation _(Plots)_Now that we have the data easily accessible in Python, let's look at how to plot it. Pandas allows you to use matplotlib to plot, but it is done using methods built into pandas. Although the methods to create and manipulate plots are built into pandas, we still have to import matplotlib to save and show the plots.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
In order to demonstrate the plotting capabilities of pandas dataframes, let's use the example data that we imported earlier. The dataframe contains only the two columns that were in the file: time and temperature. Because of this simplicity, we can trust pandas to properly interpret the first column as time and the second column as the measurement (temperature). Thus we can plot with the simple command `df.plot()`:
###Code
plt.figure(1)
ax = df.plot()
plt.show()
###Output
_____no_output_____
###Markdown
While this simplification is nice, it is generally better to specify what data you want to plot, particularly if you are automating the plotting of a large set of dataframes. To do this, specify the x and y columns of your dataframe as you would in a standard matplotlib plot call; since this plotting function is a method of the dataframe, you need only specify the column names. For example:
###Code
plt.figure(2)
ax = df.plot(cols[0],cols[1])
plt.show()
###Output
_____no_output_____
###Markdown
Now that we have the basics down, let's spice up the plot a little bit.
###Code
plt.figure(3)
ax = df.plot(cols[0],cols[1])
ax.set_title('This is a Title')
ax.set_ylabel('Temperature (deg F)')
ax.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Data Manipulation _(Timestamps)_One thing you probably noticed in these plots is that the time axis isn't all that useful. It would be better to change the timestamps to a more useful form, like seconds since the start. Let's go through the process of making that conversion. First, let's see what a timestamp currently looks like.
###Code
df[cols[0]][0]
###Output
_____no_output_____
###Markdown
Good news! Since Python interpreted the timestamps as datetime objects, we can use datetime methods to determine the time in seconds. The one caveat is that we can only determine a time _difference_, not an absolute time. For more on this, read [this Stack Overflow question](http://stackoverflow.com/questions/7852855/how-to-convert-a-python-datetime-object-to-seconds). The first thing we have to do is convert these datetime.time objects into datetime.datetime objects using `datetime.combine`. Note: importing datetime is a little odd: `datetime` is both a module and a class.
###Code
from datetime import datetime, date
startTime = df[cols[0]][0]
timeArray = []
for i in range(0,len(df[cols[0]])):
timeArray.append((datetime.combine(date.today(), df[cols[0]][i]) - datetime.combine(date.today(), startTime)).total_seconds())
###Output
_____no_output_____
###Markdown
Note: There is probably a better way of doing this (i.e. without a loop); one vectorized alternative is sketched below.
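A minimal vectorized sketch, assuming (as above) that the column holds `datetime.time` values:
```python
# Parse the times as full datetimes (today's date is implied), then take the
# difference from the first entry in seconds, with no explicit loop.
times = pd.to_datetime(df[cols[0]].astype(str))
seconds_since_start = (times - times.iloc[0]).dt.total_seconds()
```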
###Code
plt.figure(4)
plt.plot(timeArray, df[cols[1]], 'b')
plt.title('This is a graph with a better time axis')
plt.ylabel('Temperature (deg F)')
plt.xlabel('Time (s)')
plt.grid()
plt.show()
###Output
_____no_output_____ |
reports/.ipynb_checkpoints/Final Report-checkpoint.ipynb | ###Markdown
**Final Report** Project Overview To understand whether student performance in the final grade is affected by students' previous grades and their demographic, social and school-related information, I fit linear regression models. Instead of fitting a single linear regression with no regularization, I apply different regularized linear regression methods and compare their results to the unregularized one. These methods include Lasso (L1 regularization), Ridge (L2 regularization) and Elastic Net (L1 + L2 regularization). The comparison will suggest the best model to help answer my question above. Data Description Data used in this project is taken from the [Student Performance Data Set](http://archive.ics.uci.edu/ml/datasets/Student+Performance). It contains student achievement in secondary education at two Portuguese schools, along with the students' grades, demographic, social and school-related features. The data was collected using school reports and questionnaires. The final dataset (after cleaning) has 32 features, with 382 observations in total. None of those observations has missing values. For more details about the features, please refer to this [README](https://github.com/hadinh1306/feat-select_student-performance/tree/master/data/raw_data). To answer my question, I choose `G3` - the final grade - as my response variable, and the other 31 variables as my explanatory variables. Since I decided to use Scikit-learn to fit models to my data and Scikit-learn regression models only understand numeric data, I transform text values of all categorical variables into numeric ones. See the source code for cleaning the data [here](https://github.com/hadinh1306/feat-select_student-performance/blob/master/src/clean.ipynb). Exploratory Data Analysis Since there are 31 explanatory variables, I do not plot the relationship of each variable to `G3`. Instead, I choose a few selected variables that intuitively make sense to have an effect on my response variable. **Effect of first and second period grades on final grade** First of all, I believe that previous scores may have an effect on the final score. Below is the pair plot visualizing the effects the first and second period grades have on the final grade. From this graph, I can infer that students with consistently high grades in the previous 2 tests also tend to get a high score in the final test. **Effect of gender on final grade** Secondly, I visualize the difference in final grade between male and female students. Where I come from - Vietnam - people believe that female students are more hardworking than male students and thus might score higher, but at a certain age or in certain fields male students start to score higher in exams. The graph below shows that male students from the two Portuguese schools score slightly higher in their final compared to their female counterparts. Through my feature and model selection process, I will find out whether gender actually has any relation to the final grade. **Effect of weekly study time on final grade** Intuitively speaking, students who spend more time studying would score higher in their tests. That is true in this case, as shown by the graph below. Interestingly, students who spend less than 2 hours and students who spend 2 to 5 hours studying per week show very little difference in final grade. Average grades of those who spend 5 to 10 hours increase by 1 to 2 points. However, spending more than 10 hours studying per week only increases the final grade by around 0.5 points.
**Effect of decision to go to higher education on final grade** Intuitively, students who intend to go on to higher education would score higher than those who have no such intention. This is supported by the graph below. Students with the intention to pursue higher education have an average final grade around 2.5 points higher than those without. Feature and Model Selection I first split the data into 3 subsets - training, validation and test sets. My plan is to train different models on the training set, compare them on the validation set, and apply the best model to the test set. There are 4 models I use in the feature selection process: an unregularized linear model, Ridge (L2-regularized), Lasso (L1-regularized), and Elastic Net (L1 + L2 regularized). Before officially fitting each model to my training set, I use `GridSearchCV` to find the best hyperparameters (`alpha` for all models, plus `l1_ratio` for Elastic Net). Using the best hyperparameters for each model, I calculate some *scores* (documented in a table below) to compare these models. An illustrative sketch of this tuning step follows.
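A minimal sketch of the tuning step described above. The demo data generated here is a placeholder and is not the student dataset; the real feature matrix and target are prepared in the cleaning notebook linked earlier.
```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.model_selection import GridSearchCV

# Placeholder data standing in for the prepared training set
X_demo, y_demo = make_regression(n_samples=200, n_features=10, noise=5, random_state=0)

alpha_grid = {"alpha": [0.001, 0.01, 0.1, 1, 10]}
models = [
    ("Ridge", Ridge(), alpha_grid),
    ("Lasso", Lasso(max_iter=5000), alpha_grid),
    ("ElasticNet", ElasticNet(max_iter=5000), {**alpha_grid, "l1_ratio": [0.1, 0.5, 0.9]}),
]
for name, model, grid in models:
    search = GridSearchCV(model, grid, cv=5)
    search.fit(X_demo, y_demo)
    print(name, search.best_params_)
```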
###Code
import pandas as pd
summary = pd.read_csv("../data/analysis/model_score.csv")
summary
###Output
_____no_output_____ |
03 Types, type conversions and floating point arithmetic.ipynb | ###Markdown
geklont von: https://github.com/CambridgeEngineering/PartIA-Computing-Michaelmas EinführungWe have thus far avoided discussing directly *types*. The '*type*' is the type of object that a variable is associated with. This affects how a computer stores the object in memory, and how operations, such as multiplication and division, are performed.In *statically typed* languages, like C and C++, types come up from the very beginning because you usually need to specify types explicitly in your programs. Python is a *dynamically typed* language, which means types are deduced when a program is run. This is why we have been able to postpone the discussion until now.It is important to have a basic understanding of types, and how types can affect how your programs behave. One can go very deep into this topic, especially for numerical computations, but we will cover the general concept from a high level, show some examples, and highlight some potential pitfalls for engineering computations. This is a dry topic - it contains important background information that you need to know for later, so hang in there. The below account highlights what can go wrong without an awareness of types and how computers process numbers.Wir haben es bisher vermieden, direkt über * Typen * zu diskutieren. Der '* type *' ist der Objekttyp, dem eine Variable zugeordnet ist. Dies wirkt sich darauf aus, wie ein Computer das Objekt im Speicher speichert und wie Operationen wie Multiplikation und Division ausgeführt werden.In * statisch typisierten * Sprachen, wie C und C ++, kommen Typen von Anfang an daher, weilIn der Regel müssen Sie Typen explizit in Ihren Programmen angeben. Python ist eine * dynamisch typisierte * Sprache, was bedeutet, dass Typen abgeleitet werden, wenn ein Programm ausgeführt wird. Deshalb konnten wir die Diskussion bisher verschieben.Es ist wichtig, ein grundlegendes Verständnis von Typen zu haben und wie Typen das Verhalten Ihrer Programme beeinflussen können. Man kann sehr tief in dieses Thema einsteigen, insbesondere für numerische Berechnungen, aber wir werden das allgemeine Konzept von einem hohen Niveau abdecken.Zeigen Sie einige Beispiele und heben Sie einige potenzielle Fallstricke für Konstruktionsberechnungen hervor.Dies ist ein trockenes Thema - es enthält wichtige Hintergrundinformationen, die Sie für später wissen müssen, also bleiben Sie dran. Das folgende Konto zeigt auf, was schief gehen kann, ohne dass man sich der Typen bewusst ist und wie Computer Zahlen verarbeiten. Patriot Missile Misserfolg und die Ariane-5-ExplosionThere have been numerous accidents due to programs not correctly handling types, type conversions and floating point arithmetic. Here are two examples: 1. In 1991, a US Patriot missile failed to intercept an Iraqi Scud missile at Dhahran in Saudi Arabi, leading to a loss of life. The subsequent investigation found that the Patriot missile failed to intercept the Scud missile due to a software flaw. The software developers did not account for the effects of 'floating point arithmetic'. This led to a small error in computing the time, which in turn caused the Patriot to miss the incoming Scud missile. Es gab zahlreiche Unfälle, weil Programme Typen, Typkonvertierungen und Fließkomma-Arithmetik nicht korrekt handhabten. Hier sind zwei Beispiele:1. Im Jahr 1991 konnte eine US-amerikanische Patriot-Rakete keine irakische Scud-Rakete in Dhahran in Saudi-Arabi abfangen, was dazu führte ein Verlust von Leben. 
Die anschließende Untersuchung ergab, dass die Patriot-Rakete die Scud-Rakete aufgrund eines Softwarefehlers nicht abfangen konnte. Die Softwareentwickler haben die Auswirkungen von Fließkomma nicht berücksichtigt Arithmetik'. Dies führte zu einem kleinen Fehler in der Zeitberechnung, wodurch der Patriot den ankommenden Scud verfehlte Rakete. We will reproduce the precise mistake the developers of the Patriot Missile software made. See https://en.wikipedia.org/wiki/MIM-104_PatriotFailure_at_Dhahran for more background on the interception failure. 1. Poor programming related to how computers store numbers led in 1996 to a European Space Agency *Ariane 5* unmanned rocket exploding shortly after lift-off. The rocket payload, worth US\$500 M, was destroyed. You can find background at https://en.wikipedia.org/wiki/Cluster_(spacecraft)Launch_failure. We will reproduce their mistake, and show how a few lines code would have saved over US\$500 M. Wir werden den genauen Fehler reproduzieren, den die Entwickler der Patriot Missile-Software gemacht haben. Sehen https://en.wikipedia.org/wiki/MIM-104_PatriotFailure_at_Dhahran für mehr Hintergrundinformationen zum Abhören Fehler. 1. Schlechte Programmierung in Bezug darauf, wie die Zahl der Computergeschäfte 1996 zu einer europäischen Weltraumorganisation führte * Ariane 5 * Unbemannte Rakete explodiert kurz nach dem Abheben. Die Raketennutzlast im Wert von 500 Millionen US-Dollar wurde zerstört. Sie können Hintergrundinformationen finden Sie unter https://en.wikipedia.org/wiki/Cluster_(spacecraft)Launch_failure. Wir werden ihren Fehler reproduzieren und zeigen, wie ein paar Zeilen Code über 500 US $ gespart hätten. Background: bits and bytesAn important part of understanding types is appreciating how computer storage works. Computer memory is made up of *bits*, and each bit can take on one of two values - 0 or 1. A bit is the smallest building block of memory.Bits are very fine-grained, so for many computer architectures the smallest 'block' we can normally work with is a *byte*. One byte is made up of 8 bits. This why when we talk about bits, e.g. a 64-bit operating system, the number of bits will almost always be a multiple of 8 (one byte).The 'bigger' a thing we want to store, the more bytes we need. This is important for engineering computations since the the number of bytes used to store a number determines the accuracy with which the number can be stored,and how big or small the number can be. The more bytes the greater the accuracy, but the price to be paid is higher memory usage. Also, it can be more expensive to perform operations like multiplication and division when using more bytes.Ein wichtiger Teil des Verständnisses von Typen ist es, die Funktionsweise des Computerspeichers zu schätzen. Der Computerspeicher besteht aus * Bits *, und jedes Bit kann eine von zwei annehmenWerte - 0 oder 1. Ein Bit ist der kleinste Baustein des Speichers.Die Bits sind sehr feinkörnig, daher ist für viele Computerarchitekturen der kleinste "Block", mit dem wir normalerweise arbeiten können, ein * Byte *. Ein Byte besteht aus 8 Bits. Deshalb, wenn wir über Bits sprechen, z. Bei einem 64-Bit-Betriebssystem ist die Anzahl der Bits fast immer ein Vielfaches von 8 (ein Byte).Je größer, was wir speichern möchten, desto mehr Bytes benötigen wir. Dies ist wichtig für Konstruktionsberechnungen, da die Anzahl der zum Speichern einer Anzahl verwendeten Bytes die Genauigkeit bestimmt, mit der die Anzahl gespeichert werden kann.und wie groß oder klein die Zahl sein kann. 
Je mehr Bytes, desto höher die Genauigkeit, aber der zu zahlende Preis ist eine höhere Speicherauslastung. Es kann auch teurer sein, Operationen wie Multiplikation und Division auszuführen, wenn mehr Bytes verwendet werden. Objectives- Introduce primitive data types (booleans, strings and numerical types)- Type inspection- Basic type conversion- Introduction to pitfalls of floating point arithmetic Ziele- Einführung primitiver Datentypen (Booleans, Strings und numerische Typen)- Typprüfung- Grundtypumwandlung- Einführung in die Fallstricke der Fließkomma-Arithmetik What is type?All variables have a 'type', which indicates what the variable is, e.g. a number, a string of characters, etc. In 'statically typed' languages we usually need to be explicit in declaring the type of a variable in a program. In a dynamically typed language, such as Python, variables still have types but the interpreter can determine types dynamically.Type is important because it determines how a variable is stored, how it behaves when we perform operations on it, and how it interacts with other variables. For example, multiplication of two real numbers is different from multiplication of two complex numbers.Alle Variablen haben einen 'Typ', der angibt, was die Variable ist, z. eine Zahl, eine Zeichenfolge usw. In "statisch typisierten" Sprachen müssen wir normalerweise den Typ einer Variablen in einem Programm explizit angeben. In einer dynamisch typisierten Sprache wie Python haben Variablen immer noch Typen, der Interpreter kann jedoch Typen dynamisch bestimmen.Der Typ ist wichtig, da er bestimmt, wie eine Variable gespeichert wird, wie sie sich beim Ausführen von Operationen verhält und wie sie mit anderen Variablen interagiert. Beispielsweise unterscheidet sich die Multiplikation von zwei reellen Zahlen von der Multiplikation von zwei komplexen Zahlen. Introspection Before getting into types, we look at how we can check the type in Python. A powerful feature of Python is *introspection*. This means that we can probe a program to ask about the type of a variable. To check the type of a variable we use the function `type`:SelbstbeobachtungBevor wir mit den Typen beginnen, schauen wir uns an, wie wir den Typ in Python prüfen können. Eine leistungsstarke Funktion von Python ist * Introspection *. Dies bedeutet, dass wir ein Programm untersuchen können, um nach dem Typ einer Variablen zu fragen. Überprüfenden Typ einer Variablen verwenden wir die Funktion `type`:
###Code
x = True
print(type(x))
a = 1
print(type(a))
a = 1.0
print(type(a))
###Output
<class 'bool'>
<class 'int'>
<class 'float'>
###Markdown
Note that `a = 1` and `a = 1.0` are different types! This distinction is very important for numerical computations.More on this further down.Use `type` freely when exploring and testing, to develop an understanding for what your program is doing.Beachten Sie, dass "a = 1" und "a = 1.0" verschiedene Typen sind! Diese Unterscheidung ist für numerische Berechnungen sehr wichtig.Mehr dazu weiter unten.Verwenden Sie "type" beim Erkunden und Testen, um ein Verständnis dafür zu entwickeln, was Ihr Programm tut. BooleansYou have already seen the 'Boolean' type that can take on one of two values - true or false. This is the simplest typeSie haben bereits den Typ 'Boolean' gesehen, der einen von zwei Werten annehmen kann - wahr oder falsch. Dies ist der einfachste Typ.
###Code
a = True
b = False
test = a or b # test will be True if a or b are True
print(test, type(test))
###Output
True <class 'bool'>
###Markdown
In principle, we could represent a boolean with just one bit (0 or 1 switch).Im Prinzip könnten wir einen Boolean mit nur einem Bit (0 oder 1 Schalter) darstellen. StringsA string is a collection of characters. We have been using strings in previous activities for printing informative messages. In Python we create a string using single or double quotes (the choice is personal preference), e.g.Eine Zeichenfolge ist eine Sammlung von Zeichen. Wir haben in früheren Aktivitäten Zeichenfolgen zum Drucken von Informationsnachrichten verwendet. In Python erstellen wir eine Zeichenfolge mit einfachen oder doppelten Anführungszeichen (die Wahl liegt nach persönlichen Vorlieben), z. my_string = 'This is a string.' or my_string = "This is a string." Below we assign a string to a variable, display the string, and then check its type:Im Folgenden weisen wir einer Variablen eine Zeichenfolge zu, zeigen die Zeichenfolge an und prüfen dann ihren Typ:
###Code
my_string = "This is a string."
print(my_string)
print(type(my_string))
###Output
This is a string.
<class 'str'>
###Markdown
We can perform many different operations on strings. We can extract a particular character as a new string:Wir können viele verschiedene Operationen an Zeichenketten ausführen. Wir können ein bestimmtes Zeichen als neue Zeichenfolge extrahieren:
###Code
# Get 3rd character (Python counts from zero)
s2 = my_string[2]
print(s2)
print(type(s2))
###Output
i
<class 'str'>
###Markdown
or extract a range of characters:oder extrahiere eine Reihe von Zeichen:
###Code
# Get first six characters, print and check type
s3 = my_string[0:6]
print(s3)
print(type(s3))
# Get last four characters and print
s4 = my_string[-4:]
print(s4)
###Output
This i
<class 'str'>
ing.
###Markdown
We can add strings together:Wir können Strings zusammen hinzufügen:
###Code
introduction = "My name is:"
name = "Joe"
personal_introduction = introduction + " " + name
print(personal_introduction)
###Output
My name is: Joe
###Markdown
We can also check the length (number of characters) of a string using `len`:Wir können die Länge (Anzahl der Zeichen) eines Strings auch mit `len` überprüfen:
###Code
print(len(personal_introduction))
###Output
15
###Markdown
There are *many* more operations that can be performed on strings. We will see more in later activities.Es gibt * viele * weitere Operationen, die für Zeichenfolgen ausgeführt werden können. Wir werden mehr in späteren Aktivitäten sehen. Numeric typesNumeric types are important in many computing applications, and particularly in scientific and engineering programs. Python 3 has three native numerical types:- integers (`int`)- floating point numbers (`float`)- complex numbers (`complex`)This is typical for most programming languages, although there can be some subtle differences.Numerische Typen sind in vielen Computeranwendungen und insbesondere in wissenschaftlichen und technischen Programmen von Bedeutung. Python 3 hat drei native numerische Typen:- ganze Zahlen ("int")- Fließkommazahlen ("Float")- komplexe Zahlen ("Komplex")Dies ist typisch für die meisten Programmiersprachen, es kann jedoch geringfügige Unterschiede geben. IntegersIntegers (`int`) are whole numbers, and can be postive or negative. Integers should be used when a value can only take on a whole number, e.g. the year, or the number of students following this course. Python infers the type of a number from the way we input it. It will infer an `int` if we assign a number with no decimal place:Ganzzahlen ("int") sind ganze Zahlen und können positiv oder negativ sein. Ganzzahlen sollten verwendet werden, wenn ein Wert nur eine ganze Zahl annehmen kann, z. das Jahr oder die Anzahl der Studenten, die an diesem Kurs teilnehmen. Python leitet den Typ einer Zahl von der Art ab, wie wir sie eingeben. Es wird ein "int" abgeleitet, wenn wir eine Zahl ohne Dezimalstelle zuweisen:
###Code
a = 2
print(type(a))
###Output
<class 'int'>
###Markdown
If we add a decimal point, the variable type becomes a `float` (more on this later)Wenn wir einen Dezimalpunkt hinzufügen, wird der Variablentyp zu einem Float (mehr dazu später).
###Code
a = 2.0
print(type(a))
###Output
<class 'float'>
###Markdown
Integer operations that result in an integer, such as multiplying or adding two integers, are performed exactly (there is no error). This does however depend on a variable having enough memory (sufficient bytes) to represent the result.Ganzzahloperationen, die zu einer Ganzzahl führen, z. B. Multiplizieren oder Hinzufügen von zwei Ganzzahlen, werden exakt ausgeführt (es liegt kein Fehler vor). Dies hängt jedoch davon ab, dass eine Variable über genügend Speicher (genug Bytes) verfügt, um das Ergebnis darzustellen. Integer storage and overflowIn most languages, a fixed number of bits are used to store a given type of integer. In C and C++ a standard integer (`int`) is usually stored using 32 bits (it is possible to declare shorter and longer integer types). The largest integer that can be stored using 32 bits is $2^{31} - 1 = 2,147,483,647$.We explain later where this comes from. The message for now is that for a fixed number of bits, there is a bound on the largest number that can be represented/stored. Integer overflowInteger overflow is when an operation creates an integer that is too big to be represented by the given integer type. For example, attempting to assign $2^{31} + 1$ to a 32-bit integer will cause an overflow and potentially unpredictable program response. This would usually be a *bug*.The Ariane 5 rocket explosion in 1996 was caused by integer overflow. The rocket navigation software was taken from the older, slower Ariane 4 rocket. The program assigned the rocket speed to a 16-bit integer (the largest number a 16-bit integer can store is $2^{15} - 1 = 32767$), but the Ariane 5 could travel faster than the older generation of rocket and the speed value exceeded $32767$. The resulting integer overflow led to failure of the rocket's navigation system andexplosion of the rocket; a very costly rocket and a very expensive payload were destroyed.We will reproduce the error that caused this failure when we look at *type conversions*.Python avoids integer overflows by dynamically changing the number of bits used to represent an integer. You can inspect the number of bits required to store an integer in binary (not including the bit for the sign) using the function [bit_length](https://docs.python.org/3/library/stdtypes.htmlint.bit_length):Ganzzahlspeicher und ÜberlaufIn den meisten Sprachen wird eine bestimmte Anzahl von Bits verwendet, um einen bestimmten Integer-Typ zu speichern. In C und C ++ wird eine Standard-Integer-Zahl (Int) normalerweise mit 32 Bit gespeichert (kürere und längere Integer-Typen können deklariert werden). Die größte Ganzzahl, die unter Verwendung von 32 Bits gespeichert werden kann, ist 231 - 1 = 2.147.483.647. Wir erklären später, woher das kommt. Die Nachricht ist vorerst, dass für eine feste Anzahl von Bits die größte Anzahl, die dargestellt / gespeichert werden kann, begrenzt ist.GanzzahlüberlaufGanzzahlüberlauf ist, wenn eine Operation eine Ganzzahl erstellt, die zu groß ist, um von dem angegebenen Ganzzahlentyp dargestellt zu werden. Wenn Sie beispielsweise versuchen, einer 32-Bit-Ganzzahl 231 + 1 zuzuweisen, wird dies zu einem Überlauf und möglicherweise zu einer unvorhersehbaren Programmreaktion führen. Dies wäre normalerweise ein Fehler.Die Ariane-5-Raketenexplosion im Jahr 1996 wurde durch einen ganzzahligen Überlauf verursacht. Die Raketennavigationssoftware wurde von der älteren, langsameren Ariane-4-Rakete übernommen. 
Das Programm hat die Raketengeschwindigkeit einer 16-Bit-Ganzzahl zugewiesen (die größte Zahl, die eine 16-Bit-Ganzzahl speichern kann, ist 215-1 = 32767), aber die Ariane 5 könnte sich schneller bewegen als die ältere Raketengeneration und der Geschwindigkeitswert überschritt 32767 . Der resultierende ganzzahlige Überlauf führte zum Versagen des Navigationssystems der Rakete und zur Explosion der Rakete; Eine sehr teure Rakete und eine sehr teure Nutzlast wurden zerstört. Wir werden den Fehler reproduzieren, der zu diesem Fehler geführt hat, wenn wir Typkonvertierungen betrachten.Python vermeidet ganzzahlige Überläufe, indem es die Anzahl der zur Darstellung einer ganzen Zahl verwendeten Bits dynamisch ändert. Sie können die Anzahl der Bits, die zum Speichern einer Ganzzahl in binär erforderlich sind (ohne das Bit für das Vorzeichen), mit der Funktion bit_length überprüfen:
###Code
a = 8
print(type(a))
print(a.bit_length())
###Output
<class 'int'>
4
###Markdown
We see that 4 bits are necessary to represent the number 8. If we increase the size of the number dramatically by raising it to the power of 12:Wir sehen, dass 4 Bits notwendig sind, um die Zahl 8 darzustellen. Wenn wir die Größe der Zahl dramatisch erhöhen, indem wir sie auf 12 erhöhen:
###Code
b = a**12
print(b)
type(b)
print(b.bit_length())
###Output
68719476736
37
###Markdown
We see that 37 bits are required to represent the number. If the `int` type was limited to 32 bits for storing the value, this operation would have caused an overflow.Wir sehen, dass 37 Bits erforderlich sind, um die Zahl darzustellen. Wenn der "int" -Typ zum Speichern des Werts auf 32 Bit begrenzt ist, hätte dieser Vorgang einen Überlauf verursacht. Gangnam StyleIn 2014, Google switched from 32-bit integers to 64-bit integers to count views when the video "Gangnam Style" was viewed more than 2,147,483,647 times, which is the limit of 32-bit integers (see https://plus.google.com/+YouTube/posts/BUXfdWqu86Q).Gangnam StyleIm Jahr 2014 wechselte Google von 32-Bit-Ganzzahlen auf 64-Bit-Ganzzahlen, um die Ansichten zu zählen, wenn das Video "Gangnam Style" mehr als 2.147.483.647 Mal angesehen wurde. Dies ist die Grenze für 32-Bit-Ganzzahlen Boeing 787 Dreamliner bugDue to an integer overflow bug, the electricity generators on a Boeing 787 will shut down if the plane ispowered continuously for 248 days, due to an overflow. The 'quick fix' was to make sure that generator control units do not operate for more than 248 days.See Boeing 787 Dreamliner FehlerAufgrund eines ganzzahligen Überlauffehlers werden die Stromgeneratoren einer Boeing 787 heruntergefahren, wenn das Flugzeug vorhanden ist248 Tage ununterbrochen mit Strom versorgt, aufgrund eines Überlaufs. Die "schnelle Lösung" bestand darin, dies sicherzustellenGeneratorsteuergeräte funktionieren nicht länger als 248 Tage.Sehenhttps://www.theguardian.com/business/2015/may/01/us-aviation-authority-boeing-787-dreamliner-bug-could-cause-loss-of-control and https://s3.amazonaws.com/public-inspection.federalregister.gov/2015-10066.pdf for background. Floating point storageMost engineering calculations involve numbers that cannot be represented as integers. Numbers that have a decimal point are stored using the `float` type. Computers store floating point numbers by storing the sign, the significand (also known as the mantissa) and the exponent, e.g.: for $10.45$Die meisten Konstruktionsberechnungen enthalten Zahlen, die nicht als Ganzzahlen dargestellt werden können. Zahlen mit Dezimalpunkt werden mit dem Float-Typ gespeichert. Computer speichern Gleitkommazahlen, indem sie das Vorzeichen, den Signifikand (auch bekannt als Mantisse) und den Exponenten speichern, z. B. für 10.45$$10.45 = \underbrace{+}_{\text{sign}} \underbrace{1045}_{\text{significand}} \times \underbrace{10^{-2}}_{\text{exponent} = -2}$$Python uses 64 bits to store a `float` (in C and C++ this is known as a `double`). The sign requires one bit, and there are standards that specify how many bits should be used for the significand and how many for the exponent.Since a finite number of bits are used to store a number, the precision with which numbers can be represented is limited. As a guide, using 64 bits a floating point number is precise to 15 to 17 significant figures.More on this, and why the Patriot missile failed, later.Python verwendet 64 Bits, um einen "Float" zu speichern (in C und C ++ wird dies als "Double" bezeichnet). Das Vorzeichen erfordert ein Bit, und es gibt Standards, die angeben, wie viele Bits für den Signifikanz und wie viele für den Exponenten verwendet werden sollen.Da eine endliche Anzahl von Bits zum Speichern einer Zahl verwendet wird, ist die Genauigkeit, mit der Zahlen dargestellt werden können, begrenzt. Bei der Verwendung von 64 Bits ist eine Fließkommazahl auf 15 bis 17 signifikante Stellen genau.Mehr dazu und warum die Patriot-Rakete später versagte. 
FloatsWe can declare a float by adding a decimal point:Wir können einen Float deklarieren, indem Sie einen Dezimalpunkt hinzufügen:
###Code
a = 2.0
print(a)
print(type(a))
b = 3.
print(b)
print(type(b))
###Output
2.0
<class 'float'>
3.0
<class 'float'>
###Markdown
or by using `e` or `E` (the choice between `e` and `E` is just a matter of taste):oder mit "e" oder "E" (die Wahl zwischen "e" und "E" ist nur eine Frage des Geschmacks):
###Code
a = 2e0
print(a, type(a))
b = 2e3
print(b, type(b))
c = 2.1E3
print(c, type(c))
###Output
2.0 <class 'float'>
2000.0 <class 'float'>
2100.0 <class 'float'>
###Markdown
Complex numbersA complex number is a more elaborate float with two parts - the real and imaginary components. We can declare a complex number in Python by adding `j` or `J` after the complex part of the number:Eine komplexe Zahl ist ein aufwendigerer Float mit zwei Teilen - den realen und den imaginären Komponenten. Wir können eine komplexe Zahl in Python deklarieren, indem Sie nach dem komplexen Teil der Zahl "j" oder "J" hinzufügen:
###Code
a = 2j
print(a, type(a))
b = 4 - 3j
print(b, type(b))
###Output
2j <class 'complex'>
(4-3j) <class 'complex'>
###Markdown
The usual addition, subtraction, multiplication and division operations can all be performed on complex numbers. The real and imaginary parts can be extracted:Die üblichen Additions-, Subtraktions-, Multiplikations- und Divisionsoperationen können alle mit komplexen Zahlen durchgeführt werden. Die Real- und Imaginärteile können extrahiert werden:
###Code
print(b.imag)
print(b.real)
###Output
-3.0
4.0
###Markdown
and the complex conjugate can be taken:und das komplexe Konjugtion kann genommen werden:
###Code
print(b.conjugate())
###Output
(4+3j)
###Markdown
We can compute the modulus of a complex number using `abs`:Wir können den Modulus einer komplexen Zahl mit "abs" berechnen:
###Code
print(abs(b))
###Output
5.0
###Markdown
More generally, `abs` returns the absolute value, e.g.:Allgemeiner gibt "abs" den absoluten Wert zurück, z.
###Code
a = -21.6
a = abs(a)
print(a)
###Output
21.6
###Markdown
Type conversions (casting)We can often change between types. This is called *type conversion* or *type casting*. In some cases it happens implicitly, and in other cases we can instruct our program to change the type.If we add two integers, the results will be an integer:Wir können oft zwischen den Typen wechseln. Dies wird als * Typumwandlung * oder * Typgießen * bezeichnet. In einigen Fällen geschieht dies implizit, und in anderen Fällen können wir unser Programm anweisen, den Typ zu ändern.Wenn wir zwei Ganzzahlen hinzufügen, werden die Ergebnisse eine Ganzzahl sein:
###Code
a = 4
b = 15
c = a + b
print(c, type(c))
###Output
19 <class 'int'>
###Markdown
However, if we add an `int` and a `float`, the result will be a float:Wenn wir jedoch ein "int" und ein "float" hinzufügen, wird das Ergebnis ein float sein:
###Code
a = 4
b = 15.0 # Adding the '.0' tells Python that it is a float
c = a + b
print(c, type(c))
###Output
19.0 <class 'float'>
###Markdown
If we divide two integers, the result will be a `float`:Wenn wir zwei Ganzzahlen teilen, ist das Ergebnis ein "Float":
###Code
a = 16
b = 4
c = a/b
print(c, type(c))
b = 2
###Output
4.0 <class 'float'>
###Markdown
When dividing two integers, we can do 'integer division' using `//`, e.g.Wenn Sie zwei Ganzzahlen teilen, können Sie eine Ganzzahldivision mit "//" ausführen, z.
###Code
a = 16
b = 3
c = a//b
print(c, type(c))
###Output
5 <class 'int'>
###Markdown
in which case the result is an `int`.In general, operations that mix an `int` and `float` will generate a `float`, and operations that mix an `int` or a `float` with `complex` will return a `complex` type. If in doubt, use `type` to experiment and check. In diesem Fall ist das Ergebnis ein "int".Im Allgemeinen erzeugen Operationen, die ein "int" und "float" mischen, ein "float", und Operationen, die ein "int" oder ein "float" mit "complex" mischen, geben einen "komplexen" Typ zurück. Wenn Sie Zweifel haben, verwenden Sie 'type' zum Experimentieren und Überprüfen. Explicit type conversionWe can explicitly change the type (perform a cast), e.g. cast from an `int` to a `float`:Wir können den Typ explizit ändern (Cast durchführen), z. Besetzung von "int" in "float":
###Code
a = 1
print(a, type(a))
a = float(a) # This converts the int associated with 'a' to a float, and assigns the result to the variable 'a'
print(a, type(a))
###Output
1 <class 'int'>
1.0 <class 'float'>
###Markdown
Going the other way,Den anderen Weg gehen,
###Code
y = 1.99
print(y, type(y))
z = int(y)
print(z, type(z))
###Output
1.99 <class 'float'>
1 <class 'int'>
###Markdown
Note that rounding is applied when converting from a `float` to an `int`; the values after the decimal point are discarded. This type of rounding is called 'round towards zero' or 'truncation'.A common task is converting numerical types to-and-from strings. We might read a number from a file as a string, or a user might input a value which Python reads in as a string. Converting a float to a string:Beachten Sie, dass beim Konvertieren von "float" in "int" eine Rundung angewendet wird. Die Werte nach dem Komma werden verworfen. Diese Art der Rundung wird als "Rundung gegen Null" oder "Verkürzung" bezeichnet.Eine übliche Aufgabe ist das Konvertieren von numerischen Typen in und aus Strings. Wir lesen möglicherweise eine Zahl aus einer Datei als Zeichenfolge oder ein Benutzer gibt einen Wert ein, den Python als Zeichenfolge einliest. Einen Float in einen String konvertieren:
###Code
a = 1.023
b = str(a)
print(b, type(b))
###Output
1.023 <class 'str'>
###Markdown
and in the other direction:und in die andere Richtung:
###Code
a = "15.07"
b = "18.07"
print(a + b)
print(float(a) + float(b))
###Output
15.0718.07
33.14
###Markdown
If we tried
```python
print(int(a) + int(b))
```
we would get an error, because these strings cannot be converted directly to `int`.
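As a small illustrative aside, the failure can be caught explicitly (using the `a` and `b` defined above):
```python
# int() cannot parse strings containing a decimal point, so a ValueError is raised
try:
    print(int(a) + int(b))
except ValueError as err:
    print("Conversion failed:", err)
```
It does work for strings that contain only digits: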
###Code
a = "15"
b = "18"
print(int(a) + int(b))
###Output
33
###Markdown
since these strings can be correctly cast to integers.da diese Zeichenfolgen korrekt in Ganzzahlen umgewandelt werden können. Ariane 5 rocket explosion and type conversionThe Ariane 5 rocket explosion was caused by an integer overflow. The speed of the rocket was stored as a 64-bit float, and this was converted in the navigation software to a 16-bit integer. However, the value of the float was greater than $32767$, the largest number a 16-bit integer can represent, and this led to an overflow that in turn caused the navigation system to fail and the rocket to explode.We can demonstrate what happened in the rocket program. We consider a speed of $40000.54$ (units are not relevant to what is being demonstrated), stored as a `float` (64 bits):Die Ariane-5-Raketenexplosion wurde durch einen ganzzahligen Überlauf verursacht. Die Geschwindigkeit der Rakete wurde als 64-Bit-Float gespeichert und in der Navigationssoftware in eine 16-Bit-Ganzzahl umgewandelt. Der Wert des Floats war jedoch höher als $ 32767 $. Die größte Zahl, die eine 16-Bit-Ganzzahl darstellen kann, führte zu einem Überlauf, der wiederum dazu führte, dass das Navigationssystem ausfiel und die Rakete explodierte.Wir können demonstrieren, was im Raketenprogramm passiert ist. Wir betrachten eine Geschwindigkeit von 40000,54 $ (Einheiten sind nicht relevant für das, was demonstriert wird), gespeichert als "Float" (64 Bit):
###Code
speed_float = 40000.54
###Output
_____no_output_____
###Markdown
If we first convert the float to a 32-bit `int` (we use NumPy to get integers with a fixed number of bits, more on NumPy in a later notebook):Wenn wir den Float zuerst in ein 32-Bit-Int (konvertieren) konvertieren (wir verwenden NumPy, um Ganzzahlen mit einer festen Anzahl von Bits zu erhalten, mehr zu NumPy in einem späteren Notizbuch):
###Code
import numpy as np
speed_int = np.int32(speed_float) # Convert the speed to a 32-bit int
print(speed_int)
###Output
40000
###Markdown
The conversion behaves as we would expect. Now, if we convert the speed from the `float` to a 16-bit integer:Die Konvertierung verhält sich wie erwartet. Wenn wir nun die Geschwindigkeit vom "float" in eine 16-Bit-Ganzzahl konvertieren:
###Code
speed_int = np.int16(speed_float)
print(speed_int)
###Output
-25536
###Markdown
We see clearly the result of an integer overflow since the 16-bit integer has too few bits to represent the number 40000.The Ariane 5 failure would have been averted with pre-launch testing and the following few lines:Wir sehen deutlich das Ergebnis eines Ganzzahlüberlaufs, da die 16-Bit-Ganzzahl zu wenige Bits aufweist, um die Zahl darzustellen40000.Der Ausfall der Ariane 5 wäre durch Tests vor dem Start und den folgenden wenigen Zeilen verhindert worden:
###Code
if abs(speed_float) > np.iinfo(np.int16).max:
print("***Error, cannot assign speed to 16-bit int. Will cause overflow.")
# Call command here to exit program
else:
speed_int = np.int16(speed_float)
###Output
***Error, cannot assign speed to 16-bit int. Will cause overflow.
###Markdown
These few lines and careful testing would have saved the $500M payload and the cost of the rocket.The Ariane 5 incident is an example not only of a poor piece of programming, but also very poor testing and software engineering. Careful pre-launch testing of the software would have detected this problem. The program should have checked the value of the velocity before performing the conversion, and triggered an error message that the type conversion would cause an overflow.Diese wenigen Leitungen und sorgfältige Tests hätten die Nutzlast von 500 Millionen US-Dollar und die Kosten der Rakete eingespart.Der Vorfall der Ariane 5 ist nicht nur ein Beispiel für eine schlechte Programmierung, sondern auch für sehr schlechte Test- und Softwareentwicklung. Ein sorgfältiges Testen der Software vor dem Start hätte dieses Problem erkannt. Das Programm sollte vor der Konvertierung den Wert der Geschwindigkeit überprüft und eine Fehlermeldung ausgegeben haben, dass die Typkonvertierung einen Überlauf verursachen würde. Binary representation and floating point arithmetic Binary (base 2) representationComputers store data using 'bits', and a bit is a switch that can have a value of 0 or 1. This means that computers store numbers in binary (base 2), whereas we almost always work with decimal numbers (base 10).For example, the binary number $110$ is equal toComputer speichern Daten mit 'Bits', und ein Bit ist ein Schalter, der den Wert 0 oder 1 annehmen kann. Dies bedeutet, dass Computer Zahlen binär (Basis 2) speichern, während wir fast immer mit Dezimalzahlen (Basis 10) arbeiten.Zum Beispiel ist die Binärzahl $ 110 $ gleich $ 0$0 \times 2^{0} + 1 \times 2^{1} + 1 \times 2^{2} = 6$(read $110$ right-to-left).Below is a table with decimal (base 10) and the corresponding binary (base 2) representation of some numbers. Nachfolgend finden Sie eine Tabelle mit Dezimalzahlen (Basis 10) und der entsprechenden binären (Basis 2) Darstellung einiger Zahlen.See if you want to learn more.|Decimal | Binary || ------ |-------- ||0 | 0 | |1 | 1 | |2 | 10 ||3 | 11 ||4 | 100 ||5 | 101 ||6 | 110 ||7 | 111 ||8 | 1000 ||9 | 1001 ||10 | 1010 ||11 | 1011 ||12 | 1100 ||13 | 1101 ||14 | 1110 ||15 | 1111 |To represent any integer, all we need are enough bits to store the binary representation. If we have $n$ bits, the largest number we can store is $2^{n -1} - 1$ (the power is $n - 1$ because we use one bit to store the sign of the integer).We can display the binary representation of an integer in Python using the function `bin`:Um eine ganze Zahl darzustellen, brauchen wir nur genug Bits, um die binäre Darstellung zu speichern. Wenn wir $ n $ -Bits haben, ist die größte Anzahl, die wir speichern können, $ 2 ^ {n -1} - 1 $ (die Potenz ist $ n - 1 $, da wir das Vorzeichen der Ganzzahl mit einem Bit speichern).Wir können die binäre Darstellung einer Ganzzahl in Python mit der Funktion `bin` anzeigen:
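As a small aside (not in the original notebook), the reverse conversion is also built in: `int` parses a binary string when given base 2.
```python
# Convert binary strings back to integers (base 2)
print(int('110', 2))        # 6
print(int('0b1101110', 2))  # 110; the 0b prefix is accepted when a base is given
```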
###Code
print(bin(2))
print(bin(6))
print(bin(110))
###Output
0b10
0b110
0b1101110
###Markdown
The prefix `0b` is to denote that the representation is binary.Das Präfix '0b' gibt an, dass die Darstellung binär ist. Floating point numbersWe introduced the representationWir haben die Darstellung eingeführt$$10.45 = \underbrace{+}_{\text{sign}} \underbrace{1045}_{\text{significand}} \times \underbrace{10^{-2}}_{\text{exponent}}$$earlier. However, this was a little misleading because computers do not use base 10to store the significand and the exponent, but base 2. When using the familiar base 10, we cannot represent $1/3$ exactly as a decimal. If we liked using base 3 (ternary numeral system) for our mental arithmetic (which we really don't), we could represent $1/3$ exactly. However, fractions that are simple to represent exactly in base 10 might not be representable in another base.A consequence is that fractions that are simple in base 10 cannot necessarily be represented exactly by computers using binary.A classic example is $1/10 = 0.1$. This simple number cannot be represented exactly inbinary. On the contrary, $1/2 = 0.5$ can be represented exactly. To explore this, let's assign the number 0.1 to the variable `x` and print the result:vorhin. Dies war jedoch etwas irreführend, da Computer nicht die Basis 10 verwendenum den signifikanten und den Exponenten zu speichern, aber Basis 2.Bei Verwendung der bekannten Basis 10 können wir $ 1/3 $ nicht exakt als Dezimalzahl darstellen. Wenn wir Basis 3 (Ternäres Zahlensystem) für unsere mentale Arithmetik verwenden wollten (was wir wirklich nicht tun), könnten wir 1/3 $ genau darstellen. Brüche, die in der Basis 10 einfach dargestellt werden können, sind jedoch möglicherweise nicht in einer anderen Basis darstellbar.Dies hat zur Folge, dass Bruchteile, die in der Basis 10 einfach sind, nicht unbedingt von binären Computern korrekt dargestellt werden können.Ein klassisches Beispiel ist $ 1/10 = 0,1 $. Diese einfache Nummer kann nicht exakt in dargestellt werdenbinär. Im Gegensatz dazu kann $ 1/2 = 0,5 $ genau dargestellt werden. Um dies herauszufinden, weisen wir der Variablen 'x' die Nummer 0,1 zu und drucken das Ergebnis:
###Code
x = 0.1
print(x)
###Output
0.1
###Markdown
This looks fine, but the `print` statement is hiding some details. Asking the `print` statement to use 30 characters, we see that `x` is not exactly 0.1:
###Code
print('{0:.30f}'.format(x))
###Output
0.100000000000000005551115123126
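Another way to inspect the stored value, independent of `print` formatting, is the standard-library `decimal` module or `float.hex` (a quick sketch):
```python
from decimal import Decimal

# Decimal(0.1) shows the exact value the binary float actually holds
print(Decimal(0.1))
# The underlying bits, written in hexadecimal scientific notation
print((0.1).hex())
print((0.5).hex())
```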
###Markdown
The difference between 0.1 and the binary representation is the *roundoff error* (we'll look at print formatting syntax in a later activity). From the above, we can see that the representation is accurate to about 17 significant figures. Checking for 0.5, we see that it appears to be represented exactly:
###Code
print('{0:.30f}'.format(0.5))
###Output
0.500000000000000000000000000000
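Because some decimal fractions are stored inexactly and others exactly, comparing floats with `==` can be surprising; a short sketch of the usual workaround using `math.isclose`:
```python
import math

# The sum is not exactly 0.3, so exact equality fails
print(0.1 + 0.2 == 0.3)               # False
print('{0:.30f}'.format(0.1 + 0.2))
# Compare with a tolerance instead of exact equality
print(math.isclose(0.1 + 0.2, 0.3))   # True
```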
###Markdown
The round-off error for the 0.1 case is small, and in many cases will not present a problem. However, sometimes round-off errors can accumulate and destroy accuracy. Example: inexact representation It is trivial that $$x = 11x - 10x$$ If $x = 0.1$, we can write $$x = 11x - 1$$ Now, starting with $x = 0.1$, we evaluate the right-hand side to get a 'new' $x$, and use this new $x$ to then evaluate the right-hand side again. The arithmetic is trivial: $x$ should remain equal to $0.1$. We test this in a program that repeats this process 20 times:
###Code
x = 0.1
for i in range(20):
x = x*11 - 1
print(x)
###Output
0.10000000000000009
0.10000000000000098
0.10000000000001075
0.10000000000011822
0.10000000000130038
0.1000000000143042
0.10000000015734622
0.10000000173080847
0.10000001903889322
0.10000020942782539
0.10000230370607932
0.10002534076687253
0.10027874843559781
0.1030662327915759
0.13372856070733485
0.4710141677806834
4.181155845587517
44.992714301462684
493.9198573160895
5432.118430476985
###Markdown
The solution blows up and deviates widely from $x = 0.1$. Round-off errors are amplified at each step, leading to a completely wrong answer. The computer representation of $0.1$ is not exact, and every time we multiply $0.1$ by $11$, we increase the error by around a factor of 10 (we can see above that we lose a digit of accuracy in each step). You can observe the same issue using spreadsheet programs. If we use $x = 0.5$, which can be represented exactly in binary:
###Code
x = 0.5
for i in range(20):
x = x*11 - 5
print(x)
###Output
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
0.5
###Markdown
The result is exact in this case. By default, Python uses 64 bits to store a float. We can use the module NumPy to create a float that uses only 32 bits. Testing this for the $x = 0.1$ case:
###Code
x = np.float32(0.1)
for i in range(20):
x = x*11 - 1
print(x)
###Output
0.10000001639127731
0.10000018030405045
0.1000019833445549
0.10002181679010391
0.10023998469114304
0.1026398316025734
0.12903814762830734
0.41941962391138077
3.6136158630251884
38.74977449327707
425.2475194260478
4676.722713686526
51442.949850551784
565871.4483560696
6224584.931916766
68470433.25108442
753174764.7619286
8284922411.381214
91134146524.19336
1002475611765.127
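The precision of each float width can be quantified with NumPy's `finfo`; a brief sketch printing the machine epsilon (the gap between 1.0 and the next representable value) for 16, 32 and 64 bit floats:
```python
import numpy as np

# Machine epsilon for each floating point width
for t in (np.float16, np.float32, np.float64):
    print(t.__name__, np.finfo(t).eps)
```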
###Markdown
The error blows up faster in this case compared to the 64 bit case - using 32 bits leads to a poorer approximation of $0.1$ than when using 64 bits.*Note:* Some languages have special tools for performing decimal (base 10) arithmetic (e.g., https://docs.python.org/3/library/decimal.html). This would, for example, allow $0.1$ to be represented exactly. However, decimal is not the 'natural' arithmetic of computers so operations in decimal could be expected to be much slower. Patriot Missile Failure The inexact representation of $0.1$ was the cause of the software error in the Patriot missile system (see preamble to this notebook). The missile system tracked time from boot (system start) using an integer counter that was incremented every $1/10$ of a second. To get the time in seconds, the missile software multiplied the counter by the float representation of $0.1$. The control software used 24 bits to store floats. The round-off error due to the inexact representation of $0.1$ led to an error of $0.32$ s after 100 hours of operation (time since boot), which due to the high velocity of the missile was enough to cause failure to intercept the incoming Scud. We don't have 24-bit floats in Python, but we can test with 16, 32 and 64 bit floats. We first compute what the system counter (an integer) would be after 100 hours:
###Code
# Compute internal system counter after 100 hours (counter increments every 1/10 s)
num_hours = 100
num_seconds = num_hours*60*60
system_counter = num_seconds*10 # system clock counter
###Output
_____no_output_____
###Markdown
Converting the system counter to seconds using different representations of 0.1:
###Code
# Test with 16 bit float
dt = np.float16(0.1)
time = dt*system_counter
print("Time error after 100 hours using 16 bit float (s):", abs(time - num_seconds))
# Test with 32 bit float
dt = np.float32(0.1)
time = dt*system_counter
print("Time error after 100 hours using 32 bit float (s):", abs(time - num_seconds))
# Test with 64 bit float
dt = np.float64(0.1)
time = dt*system_counter
print("Time error after 100 hours using 64 bit float (s):", abs(time - num_seconds))
###Output
Time error after 100 hours using 16 bit float (s): 87.890625
Time error after 100 hours using 32 bit float (s): 0.005364418029785156
Time error after 100 hours using 64 bit float (s): 0.0
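As a rough sketch of the original 24-bit fixed-point behaviour, we can chop the binary expansion of 0.1 after 23 fractional bits (a simplifying assumption about how the register truncated the value) and reuse `system_counter` from above; this lands close to the roughly 0.32 s error quoted earlier:
```python
# Approximate the Patriot clock: chop 0.1 after 23 fractional bits
# (a simplification; the real register format differed in detail)
chopped_dt = int(0.1 * 2**23) / 2**23
error_per_tick = 0.1 - chopped_dt
print("Error in dt (s):", error_per_tick)
print("Approximate time error after 100 hours (s):", error_per_tick * system_counter)
```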
|
temas/I.computo_cientifico/1.7.Reescribir_funciones_a_C++_Rcpp.ipynb | ###Markdown
**Notes for the docker container:** Docker command to run this notebook locally: note: replace `` with the directory path you want to map to `/datos` inside the docker container.```docker run --rm -v :/datos --name jupyterlab_r_kernel_local -p 8888:8888 -d palmoreck/jupyterlab_r_kernel:1.1.0```password for jupyterlab: `qwerty`Stop the docker container:```docker stop jupyterlab_r_kernel_local``` Documentation for the docker image `palmoreck/jupyterlab_r_kernel:1.1.0` at [this link](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/r_kernel). --- This notebook uses methods covered in [1.5.Integracion_numerica](https://github.com/ITAM-DS/analisis-numerico-computo-cientifico/blob/master/temas/I.computo_cientifico/1.5.Integracion_numerica.ipynb) **Installing Rcpp and microbenchmark:**
###Code
install.packages("microbenchmark",lib="/usr/local/lib/R/site-library/",
repos="https://cran.itam.mx/",verbose=TRUE)
###Output
system (cmd0): /usr/lib/R/bin/R CMD INSTALL
foundpkgs: microbenchmark, /tmp/Rtmpu90LwL/downloaded_packages/microbenchmark_1.4-7.tar.gz
files: /tmp/Rtmpu90LwL/downloaded_packages/microbenchmark_1.4-7.tar.gz
1): succeeded '/usr/lib/R/bin/R CMD INSTALL -l '/usr/local/lib/R/site-library' /tmp/Rtmpu90LwL/downloaded_packages/microbenchmark_1.4-7.tar.gz'
###Markdown
Rcpp Documentation for Rcpp:* [rcpp by Dirk Eddelbuettel](http://dirk.eddelbuettel.com/code/rcpp.html)* [rcpp.org](http://www.rcpp.org/) **Rcpp** makes it easy to connect `C++` to `R` through the Rcpp `API`.**Why use Rcpp?**Although `C` or `C++` require more lines of code, they are orders of magnitude faster than R. We trade R's advantages, such as speed of programming, for speed of execution.**When might we use Rcpp?*** In loops that cannot be vectorized easily, for example loops in which one iteration depends on the previous one.* If a function has to be called millions of times.* If, after profiling and optimizing the code, we still do not reach our target run time. **Why don't we use `C`?**It is possible to call `C` functions from `R`, but it means more work for the programmer. For example, according to H. Wickham:*...R's C API. Unfortunately this API is not well documented. I'd recommend starting with my notes at [R's C interface](http://adv-r.had.co.nz/C-interface.html). After that, read "[The R API](http://cran.rstudio.com/doc/manuals/r-devel/R-exts.htmlThe-R-API)" in "Writing R Extensions". A number of exported functions are not documented, so you'll also need to read the [R source code](https://github.com/wch/r-source) to figure out the details.*So, as a first approach to compiling code from `R`, it is preferable to follow H. Wickham's recommendation and use the `Rcpp` API. **We also use the [microbenchmark](https://www.rdocumentation.org/packages/microbenchmark/versions/1.4-7/topics/microbenchmark) package to measure execution times precisely:**A *microbenchmark* is a measurement of the performance of a small block of code. The `R` package of the same name returns the time measured in *milliseconds* (ms), *microseconds* ($\mu s$) or *nanoseconds* (ns) for the given block of code, and repeats this measurement a defined number of times. Differences between repeated runs of the *microbenchmark* function can be due to reasons as simple as other tasks running on your computer. **In what follows, the rectangle method is used to approximate the definite integral of a function.**
###Code
library(Rcpp)
library(microbenchmark)
###Output
_____no_output_____
###Markdown
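For reference, every implementation below computes the midpoint (rectangle) approximation $$\int_a^b f(x)\,dx \approx \hat{h}\sum_{i=0}^{n-1} f(x_i), \qquad x_i = a + \left(i + \tfrac{1}{2}\right)\hat{h}, \qquad \hat{h} = \frac{b-a}{n},$$ which is the node formula stated in the docstrings.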
The rectangle rule written in `R`, using [vapply](https://www.rdocumentation.org/packages/functools/versions/0.2.0/topics/Vapply) (`vapply` is faster than `sapply` because the type of `output` it returns is specified in advance), is the following:
###Code
Rcf1<-function(f,a,b,n){
#Compute numerical approximation using rectangle or mid-point method in
#an interval.
#Nodes are generated via formula: x_i = a+(i+1/2)h_hat for i=0,1,...,n-1 and h_hat=(b-a)/n
#Args:
# f (function): function of integrand
# a (int): left point of interval
# b (int): right point of interval
# n (int): number of subintervals
#Returns:
# Rcf (float)
h_hat<-(b-a)/n
sum_res<-0
x<-vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1))
for(j in 1:n){
sum_res<-sum_res+f(x[j])
}
h_hat*sum_res
}
f<-function(x)exp(-x^2)
###Output
_____no_output_____
###Markdown
We will run this basic `Rcf1` implementation to measure its execution time:
###Code
n<-10**6
aprox<-Rcf1(f,0,1,n)
aprox
###Output
_____no_output_____
###Markdown
**Remember** to check the relative error:
###Code
err_relativo<-function(aprox,obj)abs(aprox-obj)/abs(obj)
obj<-integrate(Vectorize(f),0,1) #the integrate documentation
#recommends using Vectorize
err_relativo(aprox,obj$value)
system.time(Rcf1(f,0,1,n))
###Output
_____no_output_____
###Markdown
An implementation that uses `R`'s `sum` function is the following:
###Code
Rcf2<-function(f,a,b,n){
#Compute numerical approximation using rectangle or mid-point method in
#an interval.
#Nodes are generated via formula: x_i = a+(i+1/2)h_hat for i=0,1,...,n-1 and h_hat=(b-a)/n
#Args:
# f (function): function of integrand
# a (int): left point of interval
# b (int): right point of interval
# n (int): number of subintervals
#Returns:
# Rcf (float)
h_hat<-(b-a)/n
x<-vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1))
h_hat*sum(f(x))
}
aprox<-Rcf2(f,0,1,n)
aprox
err_relativo(aprox,obj$value)
system.time(Rcf2(f,0,1,n))
###Output
_____no_output_____
###Markdown
and the computation time was reduced. Towards compilation with Rcpp In `Rcpp` there is the function [cppFunction](https://www.rdocumentation.org/packages/Rcpp/versions/1.0.3/topics/cppFunction), which receives code written in `C++` and defines a function that can be used from `R`. Before using that function, let's rewrite the rectangle rule so that it does not use `vapply`:
###Code
Rcf3<-function(f,a,b,n){
#Compute numerical approximation using rectangle or mid-point method in
#an interval.
#Nodes are generated via formula: x_i = a+(i+1/2)h_hat for i=0,1,...,n-1 and h_hat=(b-a)/n
#Args:
# f (function): function of integrand
# a (int): left point of interval
# b (int): right point of interval
# n (int): number of subintervals
#Returns:
# Rcf (float)
h_hat<-(b-a)/n
sum_res<-0
for(i in 0:(n-1)){
x<-a+(i+1/2.0)*h_hat
sum_res<-sum_res+f(x)
}
h_hat*sum_res
}
n<-10**6
aprox<-Rcf3(f,0,1,n)
aprox
err_relativo(aprox,obj$value)
system.time(Rcf3(f,0,1,n))
###Output
_____no_output_____
###Markdown
Now we define the `source code` written in `C++` that will be the first parameter passed to `cppFunction`:
###Code
f_str<-'double Rcf_Rcpp(double a, double b, int n){
double h_hat;
double sum_res=0;
int i;
double x;
h_hat=(b-a)/n;
for(i=0;i<=n-1;i++){
x = a+(i+1/2.0)*h_hat;
sum_res+=exp(-pow(x,2));
}
return h_hat*sum_res;
}'
cppFunction(f_str)
###Output
_____no_output_____
###Markdown
If we want more information about the execution of the previous line, we can use:
###Code
cppFunction(f_str, verbose=TRUE, rebuild=TRUE) #we also use rebuild=TRUE
#so that the code is compiled again,
#linked with the C++ library,
#and everything cppFunction does
#behind the scenes is repeated
###Output
Generated code for function definition:
--------------------------------------------------------
#include <Rcpp.h>
using namespace Rcpp;
// [[Rcpp::export]]
double Rcf_Rcpp(double a, double b, int n){
double h_hat;
double sum_res=0;
int i;
double x;
h_hat=(b-a)/n;
for(i=0;i<=n-1;i++){
x = a+(i+1/2.0)*h_hat;
sum_res+=exp(-pow(x,2));
}
return h_hat*sum_res;
}
Generated extern "C" functions
--------------------------------------------------------
#include <Rcpp.h>
// Rcf_Rcpp
double Rcf_Rcpp(double a, double b, int n);
RcppExport SEXP sourceCpp_1_Rcf_Rcpp(SEXP aSEXP, SEXP bSEXP, SEXP nSEXP) {
BEGIN_RCPP
Rcpp::RObject rcpp_result_gen;
Rcpp::RNGScope rcpp_rngScope_gen;
Rcpp::traits::input_parameter< double >::type a(aSEXP);
Rcpp::traits::input_parameter< double >::type b(bSEXP);
Rcpp::traits::input_parameter< int >::type n(nSEXP);
rcpp_result_gen = Rcpp::wrap(Rcf_Rcpp(a, b, n));
return rcpp_result_gen;
END_RCPP
}
Generated R functions
-------------------------------------------------------
`.sourceCpp_1_DLLInfo` <- dyn.load('/tmp/Rtmpu90LwL/sourceCpp-x86_64-pc-linux-gnu-1.0.3/sourcecpp_12515b9b19/sourceCpp_3.so')
Rcf_Rcpp <- Rcpp:::sourceCppFunction(function(a, b, n) {}, FALSE, `.sourceCpp_1_DLLInfo`, 'sourceCpp_1_Rcf_Rcpp')
rm(`.sourceCpp_1_DLLInfo`)
Building shared library
--------------------------------------------------------
DIR: /tmp/Rtmpu90LwL/sourceCpp-x86_64-pc-linux-gnu-1.0.3/sourcecpp_12515b9b19
/usr/lib/R/bin/R CMD SHLIB -o 'sourceCpp_3.so' --preclean 'file1239a2facb.cpp'
###Markdown
**Comments:*** When the `cppFunction` line is executed, `Rcpp` compiles the `C++` code and builds an `R` function that connects to the compiled `C++` function (this is called a `wrapper`). * Looking at the output above, you will see that there is a block of `C` code and a `SEXP` data type which, according to H. Wickham:*...functions that talk to R must use the SEXP type for both inputs and outputs. SEXP, short for S expression, is the C struct used to represent every type of object in R. A C function typically starts by converting SEXPs to atomic C objects, and ends by converting C objects back to a SEXP. (The R API is designed so that these conversions often don't require copying.)* Let's check the execution time of this function:
###Code
aprox_rcpp<-Rcf_Rcpp(0,1,n)
err_relativo(aprox_rcpp,obj$value)
system.time(Rcf_Rcpp(0,1,n))
###Output
_____no_output_____
###Markdown
And using `microbenchmark`:
###Code
mbk<-microbenchmark(
Rcf1(f,0,1,n),
Rcf2(f,0,1,n),
Rcf3(f,0,1,n),
Rcf_Rcpp(0,1,n),
times=10
)
print(mbk)
###Output
Unit: milliseconds
expr min lq mean median uq
Rcf1(f, 0, 1, n) 1134.9580 1143.60462 1286.19387 1207.79665 1217.92329
Rcf2(f, 0, 1, n) 668.5134 687.85826 746.14824 731.34272 751.19857
Rcf3(f, 0, 1, n) 524.9488 536.67870 545.12018 539.86892 552.13084
Rcf_Rcpp(0, 1, n) 16.5403 17.25606 18.64566 17.97957 19.13085
max neval
1723.17939 10
947.80422 10
577.35055 10
24.64291 10
###Markdown
We can see that the compiled function `Rcf_Rcpp` is two orders of magnitude faster than `Rcf1` and one order of magnitude faster than `Rcf2` and `Rcf3`. **NumericVector** `Rcpp` provides classes that correspond to the `R` data types for vectors. Among them are `NumericVector`, `IntegerVector`, `CharacterVector` and `LogicalVector`, which correspond to vectors of type `numeric`, `integer`, `character` and `logical`. For example, for `NumericVector` we have:
###Code
f_str <-'NumericVector el(NumericVector x){
return exp(log(x));
}'
cppFunction(f_str)
print(el(seq(0,1,by=.1)))
###Output
[1] 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
###Markdown
For the rectangle integration rule, we could think of an `R` implementation like the following:
###Code
Rcf3b<-function(f,a,b,n){
#Compute numerical approximation using rectangle or mid-point method in
#an interval.
#Nodes are generated via formula: x_i = a+(i+1/2)h_hat for i=0,1,...,n-1 and h_hat=(b-a)/n
#Args:
# f (function): function of integrand
# a (int): left point of interval
# b (int): right point of interval
# n (int): number of subintervals
#Returns:
# Rcf (float)
h_hat<-(b-a)/n
fx<-f(vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1))) #evaluate f
h_hat*sum(fx)
}
aprox<-Rcf3b(f,0,1,n)
err_relativo(aprox,obj$value)
system.time(Rcf3b(f,0,1,n))
###Output
_____no_output_____
###Markdown
And to give a `NumericVector` example for this rule, we can first compute the nodes and evaluate `f` at them:
###Code
a<-0
b<-1
h_hat<-(b-a)/n
fx<-f(vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1)))
print(tail(fx))
f_str<-'
double Rcf_Rcpp2(NumericVector f_x,double h_hat){
double sum_res=0;
int i;
int n = f_x.size();
for(i=0;i<=n-1;i++){
sum_res+=f_x[i];
}
return h_hat*sum_res;
}'
cppFunction(f_str,rebuild=TRUE)
system.time(Rcf_Rcpp2(fx,h_hat))
###Output
_____no_output_____
###Markdown
**We check** the relative error:
###Code
aprox_rcpp2<-Rcf_Rcpp2(fx,h_hat)
err_relativo(aprox_rcpp2,obj$value)
###Output
_____no_output_____
###Markdown
And we compare with `microbenchmark`:
###Code
mbk<-microbenchmark(
Rcf1(f,0,1,n),
Rcf2(f,0,1,n),
Rcf3(f,0,1,n),
Rcf3b(f,0,1,n),
Rcf_Rcpp(0,1,n),
Rcf_Rcpp2(fx,h_hat),
times=10
)
print(mbk)
###Output
Unit: milliseconds
expr min lq mean median
Rcf1(f, 0, 1, n) 1192.918665 1228.678904 1315.967484 1264.626690
Rcf2(f, 0, 1, n) 708.988752 721.018438 838.991386 791.609931
Rcf3(f, 0, 1, n) 533.528447 557.654910 642.007659 599.152741
Rcf3b(f, 0, 1, n) 688.495578 723.240941 840.585161 743.023979
Rcf_Rcpp(0, 1, n) 16.944433 17.587898 21.350258 21.209751
Rcf_Rcpp2(fx, h_hat) 1.047825 1.074875 1.348535 1.126084
uq max neval
1414.261255 1489.786964 10
855.935157 1190.395839 10
690.248859 850.867292 10
942.462597 1213.450117 10
24.679616 29.429521 10
1.200255 3.288781 10
###Markdown
**Comments:** * Note that the `.size()` method is used, which returns an `integer`.* We are not measuring under equal conditions, since the other functions were also building the nodes... for example, `Rcf_Rcpp2` runs extremely fast, while the following is not as fast:
###Code
system.time(fx<-f(vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1))))
###Output
_____no_output_____
###Markdown
So we should have measured like this:
###Code
mbk<-microbenchmark(
Rcf1(f,0,1,n),
Rcf2(f,0,1,n),
Rcf3(f,0,1,n),
Rcf3b(f,0,1,n),
Rcf_Rcpp(0,1,n),
f(vapply(0:(n-1),function(j)a+(j+1/2.0)*h_hat,numeric(1))),
times=10
)
print(mbk)
###Output
Unit: milliseconds
expr min
Rcf1(f, 0, 1, n) 1118.9874
Rcf2(f, 0, 1, n) 661.8217
Rcf3(f, 0, 1, n) 518.4573
Rcf3b(f, 0, 1, n) 657.6112
Rcf_Rcpp(0, 1, n) 16.7331
f(vapply(0:(n - 1), function(j) a + (j + 1/2) * h_hat, numeric(1))) 708.2641
lq mean median uq max neval
1148.5022 1221.05506 1177.96496 1294.26572 1396.64626 10
669.4143 712.31013 681.45650 695.29276 1009.91352 10
531.4959 596.47586 557.93353 652.30480 778.27262 10
683.1871 735.76753 686.61225 727.13774 1014.09308 10
17.0036 18.01895 18.27387 18.49182 19.71365 10
744.7894 824.46560 758.49632 923.38135 1010.55658 10
###Markdown
* Vectors of type `NumericVector` can also be returned, for example to create the nodes:
###Code
f_str<-'NumericVector Nodos(double a, double b, int n){
double h_hat=(b-a)/n;
int i;
NumericVector x(n);
for(i=0;i<n;i++)
x[i]=a+(i+1/2.0)*h_hat;
return x;
}'
cppFunction(f_str,rebuild=TRUE)
print(Nodos(0,1,2))
###Output
[1] 0.25 0.75
###Markdown
**In `Rcpp` it is also possible to call functions defined in the global environment, for example:**
###Code
f_str='RObject fun(double x){
Environment env = Environment::global_env();
Function f=env["f"];
return f(x);
}
'
cppFunction(f_str,rebuild=TRUE)
fun(1)
f(1)
fun
###Output
_____no_output_____
###Markdown
**.Call is a base function for calling `C` functions from `R`:** *There are two ways to call C functions from R: .C() and .Call(). .C() is a quick and dirty way to call an C function that doesn't know anything about R because .C() automatically converts between R vectors and the corresponding C types. .Call() is more flexible, but more work: your C function needs to use the R API to convert its inputs to standard C data types.* **H. Wickham**.
###Code
f
###Output
_____no_output_____
|
notebooks/keras-transfer-learning-tutorial.ipynb | ###Markdown
Deep Learning from Pre-Trained Models with Keras IntroductionImageNet, an image recognition benchmark dataset*, helped trigger the modern AI explosion. In 2012, the AlexNet architecture (a deep convolutional-neural-network) rocked the ImageNet benchmark competition, handily beating the next best entrant. By 2014, all the leading competitors were deep learning based. Since then, accuracy scores continued to improve, eventually surpassing human performance.In this hands-on tutorial, and later exercise, we will build on this pioneering work to create our own neural-network architecture for image recognition. Participants will use the elegant Keras deep learning programming interface to build and train TensorFlow models for image classification tasks on the CIFAR-10 / MNIST datasets*. We will demonstrate the use of transfer learning* (to give our networks a head-start by building on top of existing, ImageNet pre-trained, network layers*), and explore how to improve model performance for standard deep learning pipelines. We will use cloud-based interactive Jupyter notebooks to work through our explorations step-by-step.This tutorial is designed as an introduction to the topic for a general, but technical audience. As a practical introduction, it will focus on tools and their application. Previous ML (Machine Learning) experience is not required; but, previous experience with scripting in Python will help. Participants are expected to bring their own laptops and sign-up for free online cloud services (e.g., Google Colab, Kaggle). They may also need to download free, open-source software prior to arriving for the workshop.This tutorial assumes some basic knowledge of neural networks. If you’re not already familiar with neural networks, then you can learn the basics concepts behind neural networks at [course.fast.ai](https://course.fast.ai/).* Tutorial materials are derived from: * [PyTorch Tutorials](https://github.com/kaust-vislab/pytorch-tutorials) by David Pugh. * [What is torch.nn really?](https://pytorch.org/tutorials/beginner/nn_tutorial.html) by Jeremy Howard, Rachel Thomas, Francisco Ingham. * [Machine Learning Notebooks](https://github.com/ageron/handson-ml2) (2nd Ed.) by Aurélien Géron. * *Deep Learning with Python* by François Chollet. Jupyter NotebooksThis is a Jupyter Notebook. It provides a simple, cell-based, IDE for developing and exploring complex ideas via code, visualizations, and documentation.A notebook has two primary types of cells: i) `markdown` cells for textual notes and documentation, such as the one you are reading now, and ii) `code` cells, which contain snippets of code (typically *Python*, but also *bash* scripts) that can be executed. The currently selected cell appears within a box. A green box indicates that the cell is editable. Clicking inside a *code* cell makes it selected and editable. Double-click inside *markdown* cells to edit.Use `Tab` for context-sensitive code-completion assistance when editing Python code in *code* cells. For example, use code assistance after a `.` seperator to find available object members. For help documentation, create a new *code* cell, and use commands like `dir(`*module*`)`, `help(`*topic*`)`, `?`*name*, or `??`*function* for user provided *module*, *topic*, variable *name*, or *function* name. The magic `?` and `??` commands show documentation / source code in a separate pane.Clicking on `[Run]` or pressing `Ctrl-Enter` will execute the contents of a cell. 
A *markdown* cell converts to its display version, and a *code* cell runs the code inside. To the left of a *code* cell is a small text bracket `In [ ]:`. If the bracket contains an asterisk, e.g., `In [*]:`, that cell is currently executing. Only one cell executes at a time (if multiple cells are *Run*, they are queued up to execute in the order they were run). When a *code* cell finishes executing, the bracket shows an execution count – each *code* cell execution increments the counter and provides a way to determine the order in which cells were executed – e.g., `In [7]` for the seventh cell to complete. The output produced by a *code* cell appears at the bottom of that cell after it executes. The output generated by a code cell includes anything printed to the output during execution (e.g., print statements, or thrown errors) and the final value generated by the cell (i.e., not the intermediate values). The final value is 'pretty printed' by Jupyter.Typically, notebooks are written to be executed in order, from top to bottom. Behind the scenes, however, each Notebook has a single Python state (the `kernel`), and each *code* cell that executes modifies that state. It is possible to modify and re-run earlier cells; however, care must be taken to also re-run any other cells that depend upon the modified one. List the Python state global variables with the magic command `%whos`. The *kernel* can be restarted to a known state, and cell output cleared, if the Python state becomes too confusing to fix manually (choose `Restart & Clear Output` from the Jupyter `Kernel` menu) – this requires running each *code* cell again.Complete user documentation is available at [jupyter-notebook.readthedocs.io](https://jupyter-notebook.readthedocs.io/en/stable/notebook.htmlnotebook-user-interface). Many helpful tips and techniques are collected in [28 Jupyter Notebook Tips, Tricks, and Shortcuts](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/). Setup Setup ColabIn order to run this notebook in [Google Colab](https://colab.research.google.com) you will need a [Google Account](https://accounts.google.com/). Sign in to your Google account, if necessary, and then start the notebook.Change the Google Colab runtime to use a GPU:* Click the `Runtime` -> `Change runtime type` menu item* Specify `Hardware accelerator` as `GPU`* Click the **[Save]** buttonThe session indicator (toolbar / status ribbon under the menu) should briefly appear as `Connecting...`. When the session restarts, continue with the next cell (specifying TensorFlow version v2.x):
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
###Output
_____no_output_____
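As mentioned above, `dir` and `help` (or the `?` magic) are the quickest way to explore an unfamiliar module from a code cell; a tiny sketch:
```python
import math

# List (some of) the public names defined in a module
print([name for name in dir(math) if not name.startswith('_')][:8])
# Show the built-in documentation for a single function
help(math.isclose)
```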
###Markdown
Download DataThere are two image datasets ([CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) and [MNIST](http://yann.lecun.com/exdb/mnist/index.html)) which these tutorial / exercise notebooks use.These datasets are available from a variety of sources, including this repository – depending on how the notebook was launched (e.g., Git+LFS / Binder contains entire repository, Google Colab only contains the notebook).Because data is the fundamental fuel for deep learning, we need to ensure the required datasets for this tutorial are available to the current notebook session. The following steps will ensure the data is already available (or downloaded), and cached where Keras can find them.
###Code
# %load cache_utils.py
import pathlib
import tensorflow.keras.utils as Kutils
def cache_mnist_data():
for n in ["mnist.npz", "kaggle/train.csv", "kaggle/test.csv"]:
path = pathlib.Path("../datasets/mnist/%s" % n).absolute()
if not path.is_file():
print("Skipping: missing local dataset file: %s" % n)
else:
DATA_URL = "file:///" + str(path)
try:
data_file_path = Kutils.get_file(n.replace('/','-mnist-'), DATA_URL)
print("Cached file: %s" % n)
except (FileNotFoundError, ValueError, Exception) as e:
print("Cache Failed: First fetch file: %s" % n)
def cache_cifar10_data():
for n in ["cifar-10.npz", "cifar-10-batches-py.tar.gz"]:
path = pathlib.Path("../datasets/cifar10/%s" % n).absolute()
if not path.is_file():
print("Skipping: missing local dataset file: %s" % n)
else:
DATA_URL = "file:///" + str(path)
try:
data_file_path = Kutils.get_file(n, DATA_URL)
print("Cached file: %s" % n)
except (FileNotFoundError, ValueError, Exception) as e:
print("Cache Failed: First fetch file: %s" % n)
def cache_models():
for n in ["vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"]:
path = pathlib.Path("../models/%s" % n).absolute()
if not path.is_file():
print("Skipping: missing local dataset file: %s" % n)
else:
DATA_URL = "file:///" + str(path)
try:
data_file_path = Kutils.get_file(n, DATA_URL, cache_subdir='models')
print("Cached file: %s" % n)
except (FileNotFoundError, ValueError, Exception) as e:
print("Cache Failed: First fetch file: %s" % n)
###Output
_____no_output_____
###Markdown
Follow the instructions below and run just the appropriate cells needed to acquire the required datasets: Download CIFAR10 DataIf you are using Binder to run this notebook, then the data is already downloaded and available. Skip to the next step.If you are using Google Colab to run this notebook, then you will need to download the data before proceeding. Download CIFAR10 with KerasIf you are running this notebook using Google Colab, then download the data using the Keras `load_data()` API by running the code in the following cell.
###Code
from tensorflow.keras.datasets import cifar10
cache_cifar10_data()
cifar10.load_data();
###Output
_____no_output_____
###Markdown
Tutorial SetupInitialize the Python environment by importing and verifying the modules we will use.
###Code
import os
import sys
import pathlib
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
###Output
_____no_output_____
###Markdown
`%matplotlib inline` is a magic command that makes *matplotlib* charts and plots appear as outputs in the notebook.`%matplotlib notebook` enables semi-interactive plots that can be enlarged, zoomed, and cropped while the plot is active. One issue with this option is that new plots appear in the active plot widget, not in the cell where the data was produced.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now check the runtime environment to ensure it can run this notebook. If there is an `Exception`, or if there are no GPUs, you will need to run this notebook in a more capable environment (see `README.md`, or ask instructor for additional help).
###Code
# %load verify_runtime.py
# Verify runtime environment
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
IS_COLAB = True
except Exception:
IS_COLAB = False
print("is_colab:", IS_COLAB)
assert tf.__version__ >= "2.0", "TensorFlow version >= 2.0 required."
print("tensorflow_version:", tf.__version__)
assert sys.version_info >= (3, 5), "Python >= 3.5 required."
print("python_version:", "%s.%s.%s-%s" % (sys.version_info.major,
sys.version_info.minor,
sys.version_info.micro,
sys.version_info.releaselevel
))
print("executing_eagerly:", tf.executing_eagerly())
try:
__physical_devices = tf.config.list_physical_devices('GPU')
except AttributeError:
__physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(__physical_devices) == 0:
print("No GPUs available. Expect training to be very slow.")
if IS_COLAB:
print("Go to `Runtime` > `Change runtime` and select a GPU hardware accelerator."
"Then `Save` to restart session.")
else:
print("is_built_with_cuda:", tf.test.is_built_with_cuda())
print("gpus_available:", [d.name for d in __physical_devices])
###Output
_____no_output_____
###Markdown
CIFAR10 - Dataset ProcessingThe previously acquired CIFAR10 dataset is the essential input needed to train an image classification model. Before using the dataset, there are several preprocessing steps required to load the data, and create the correctly sized training, validation, and testing arrays used as input to the network.The following data preparation steps are needed before they can become inputs to the network:* Cache the downloaded dataset (to assist Keras `load_data()` functionality).* Load the dataset (CIFAR10 is small, and fits into a `numpy` array).* Verify the shape and type of the data, and understand it...* Convert label indices into categorical vectors.* Convert image data from integer to float values, and normalize. * Verify converted input data. Cache DataMake downloaded data available to Keras (and check if it's really there). Provide dataset utility functions.__Note__: We are ready to begin if the `find` command shows a found data file.
###Code
# Cache CIFAR10 Datasets
cache_cifar10_data()
%%bash
find ~/.keras -name "cifar-10*" -type f
###Output
_____no_output_____
###Markdown
These helper functions assist with managing the three label representations we will encounter:* label index: a number representing a class* label names: a *human readable* text representation of a class* category vector: a vector space to represent the categoriesThe label index `1` represents an `automobile`, and `2` represents a `bird`; but `1.5` doesn't make a `bird-mobile`. We need a representation where each dimension is a continuum of that feature. There are 10 distinct categories, so we encode them as a 10-dimensional vector space, where the i-th dimension represents the i-th class. An `automobile` becomes `[0,1,0,0,0,0,0,0,0,0]`, a `bird` becomes `[0,0,1,0,0,0,0,0,0,0]` (these are called *one-hot encodings*), and a `bird-mobile` (which we couldn't represent previously) can be encoded as `[0,0.5,0.5,0,0,0,0,0,0,0]`.**Note:** We already know how our dataset is represented. Typically, one would load the data first, analyse the class representation, and then write the helper functions.
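The index-to-category conversion described above, and the later rescaling of pixel values to floats, are each a one-liner with Keras and NumPy; a small sketch assuming the standard `to_categorical` utility:
```python
from tensorflow.keras.utils import to_categorical

# One-hot encode a few label indices into 10-dimensional category vectors
print(to_categorical(np.array([1, 2, 9]), num_classes=10))
# Rescale uint8 pixel values (0-255) to floats in the range 0.0-1.0
print(np.array([0, 51, 255], dtype=np.uint8).astype('float32') / 255.0)
```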
###Code
# Helper functionality to provide human-readable labels
cifar10_label_names = ['airplane', 'automobile',
'bird', 'cat', 'deer', 'dog', 'frog', 'horse',
'ship', 'truck']
def cifar10_index_label(idx):
return cifar10_label_names[int(idx)]
def cifar10_category_label(cat):
return cifar10_index_label(cat.argmax())
def cifar10_label(v):
return cifar10_index_label(v) if np.isscalar(v) or np.size(v) == 1 else cifar10_category_label(v)
###Output
_____no_output_____
###Markdown
Load DataDatasets for classification require two parts: i) the input data (`x` in our nomenclature), and ii) the labels (`y`). Classifiction takes an `x` as input, and returns a `y` (the class) as output.When training a model from a dataset (called the `train`ing dataset), it is important to keep some of the data aside (called the `test` set). If we didn't, the model could just memorize the data without learning a generalization that would apply to novel related data. The `test` set is used to evaluate the typical real performance of the model.
###Code
from tensorflow.keras.datasets import cifar10
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
_____no_output_____
###Markdown
**Note:** Backup plan: Run the following cell if the data didn't load via `cifar10.load_data` above.
###Code
# Try secondary data source if the first didn't work
try:
print("data loaded." if type((x_train, y_train, x_test, y_test)) else "load failed...")
except NameError:
with np.load('../datasets/cifar10/cifar-10.npz') as data:
x_train = data['x_train']
y_train = data['y_train']
x_test = data['x_test']
y_test = data['y_test']
print("alternate data load." if type((x_train, y_train, x_test, y_test)) else "failed...")
###Output
_____no_output_____
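As noted in the processing steps above, a validation array will also be needed later; one minimal way to carve one out of the training set is plain slicing (an illustrative sketch; the 5000-sample split size is an arbitrary choice here):
```python
# Hold out the last 5000 training samples as a simple validation split
x_val, y_val = x_train[-5000:], y_train[-5000:]
x_tr, y_tr = x_train[:-5000], y_train[:-5000]
print(x_tr.shape, y_tr.shape, x_val.shape, y_val.shape)
```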
###Markdown
Explore DataExplore data types, shape, and value ranges. Ensure they make sense, and you understand the data well.
###Code
print('x_train type:', type(x_train), ',', 'y_train type:', type(y_train))
print('x_train dtype:', x_train.dtype, ',', 'y_train dtype:', y_train.dtype)
print('x_train shape:', x_train.shape, ',', 'y_train shape:', y_train.shape)
print('x_test shape:', x_test.shape, ',', 'y_test shape:', y_test.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print('x_train (min, max, mean): (%s, %s, %s)' % (x_train.min(), x_train.max(), x_train.mean()))
print('y_train (min, max): (%s, %s)' % (y_train.min(), y_train.max()))
###Output
_____no_output_____
###Markdown
* The data is stored in Numpy arrays.* The datatype for both input data and labels is a small unsigned int. They represent different things though: the input data represents pixel values, the labels represent the category.* There are 50000 training data samples, and 10000 testing samples.* Each input sample is a colour image of 32x32 pixels, with 3 channels of colour (RGB), for a total size of 3072 bytes. Each label sample is a single byte. * A 32x32 pixel, 3-channel colour image (2-D) can be represented as a point in a 3072 dimensional vector space.* We can see that pixel values range between 0-255 (that is the range of `uint8`) and the mean value is close to the middle. The label values range between 0-9, which corresponds to the 10 categories the labels represent.Let's explore the dataset visually, looking at some actual images, and get a statistical overview of the data.Most of the code in the plotting function below is there to tweak the appearance of the output. The key functionality comes from the `matplotlib` functions `imshow` and `hist`, and the `numpy` function `histogram`.
###Code
def cifar10_imageset_plot(img_data=None):
(x_imgs, y_imgs) = img_data if img_data else (x_train, y_train)
fig = plt.figure(figsize=(16,8))
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_imgs.shape[0]))
plt.title(cifar10_label(y_imgs[idx]))
plt.imshow(x_imgs[idx], cmap=plt.get_cmap('gray'))
plt.show()
# Show array of random labelled images with matplotlib (re-run cell to see new examples)
cifar10_imageset_plot((x_train, y_train))
# %load histogram_utils.py
# Histogram utils
def histogram_plot(img_data=None):
(x_data, y_data) = img_data if img_data else (x_train, y_train)
fig = plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.hist(y_data, bins = range(int(y_data.min()), int(y_data.max() + 2)))
plt.xticks(range(int(y_data.min()), int(y_data.max() + 2)))
plt.title("y histogram")
plt.subplot(1,2,2)
plt.hist(x_data.flat, bins = range(int(x_data.min()), int(x_data.max() + 2)))
plt.title("x histogram")
plt.tight_layout()
plt.show()
hist, bins = np.histogram(y_data, bins = range(int(y_data.min()), int(y_data.max() + 2)))
print('y histogram counts:', hist)
def histogram_label_plot(train_img_data=None, test_img_data=None):
(x_train_data, y_train_data) = train_img_data if train_img_data else (x_train, y_train)
(x_test_data, y_test_data) = test_img_data if test_img_data else (x_test, y_test)
    x_data_min = int(min(x_train_data.min(), x_test_data.min()))
    x_data_max = int(max(x_train_data.max(), x_test_data.max()))
    y_data_min = int(min(y_train_data.min(), y_test_data.min()))
    y_data_max = int(max(y_train_data.max(), y_test_data.max()))
    num_rows = y_data_max - y_data_min + 1
    fig = plt.figure(figsize=(12,12))
    plot_num = 1
    # Include the last label value (range's upper bound is exclusive)
    for lbl in range(y_data_min, y_data_max + 1):
plt.subplot(num_rows, 2 , plot_num)
plt.hist(x_train_data[y_train_data.squeeze() == lbl].flat, bins = range(x_data_min, x_data_max + 2))
plt.title("x train histogram - label %s" % lbl)
plt.subplot(num_rows, 2 , plot_num + 1)
plt.hist(x_test_data[y_test_data.squeeze() == lbl].flat, bins = range(x_data_min, x_data_max + 2))
plt.title("x test histogram - label %s" % lbl)
plot_num += 2
plt.tight_layout(pad=0)
plt.show()
histogram_plot((x_train, y_train))
histogram_plot((x_test, y_test))
###Output
_____no_output_____
###Markdown
The data looks reasonable: there are sufficient examples for each category (`y_train`) and a near-normal distribution of pixel values that appears similar in both the train and test datasets.

If there had been a significant imbalance in the number of examples per category, test accuracy would be adversely affected (infrequent categories tend to get ignored). In that case, a tool like [`imbalanced-learn`](https://imbalanced-learn.org/stable/) can be used to resample and re-balance the dataset; a sketch follows below.

Let's do one more sanity check to ensure that the data distributions are also similar per category.
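Before that check, purely as an illustration of the re-balancing idea mentioned above (CIFAR-10 is already balanced, so this cell is optional and can be skipped), a random-oversampling sketch, assuming the optional `imbalanced-learn` package is installed:
###Code
# Illustration only: CIFAR-10 is already balanced, so resampling is not needed here.
# Assumes the optional imbalanced-learn package is installed (pip install imbalanced-learn).
from imblearn.over_sampling import RandomOverSampler

x_flat = x_train.reshape(x_train.shape[0], -1)   # samplers expect 2-D inputs
ros = RandomOverSampler(random_state=42)
x_res, y_res = ros.fit_resample(x_flat, y_train.ravel())
print('label counts before:', np.bincount(y_train.ravel()))
print('label counts after: ', np.bincount(y_res))
###Output
_____no_output_____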
###Code
histogram_label_plot((x_train, y_train), (x_test, y_test))
###Output
_____no_output_____
###Markdown
Again, the data looks reasonable. The distributions differ slightly between categories, but are similar between the train and test datasets for a given category label.

If there had been a significant difference in distributions, consider resampling the train and test datasets, or adding some kind of normalization to the training and inference data pipelines.

The next aspect of the input data to grapple with is how the input vector space corresponds to the output category space. Is the correspondence simple (e.g., distances in the input space relate to distances in the output space), or more complex?

Visualizing training samples using PCA

[Principal Components Analysis (PCA)](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) can be used as a visualization tool to see if there are any obvious patterns in the training samples.

PCA re-represents the input data by changing the basis vectors that represent it. The new orthonormal basis vectors (eigenvectors) capture the variance in the data, ordered from largest to smallest. Projecting the data samples onto the first few (2 or 3) dimensions lets us see the data with the biggest differences accounted for.

The following cell uses `scikit-learn` to calculate the PCA eigenvectors of the training data, then samples a random subset (10%) of the results for plotting.
###Code
import sklearn
import sklearn.decomposition
_prng = np.random.RandomState(42)
pca = sklearn.decomposition.PCA(n_components=40, random_state=_prng)
x_train_flat = x_train.reshape(*x_train.shape[:1], -1)
y_train_flat = y_train.reshape(y_train.shape[0])
print("x_train:", x_train.shape, "y_train", y_train.shape)
print("x_train_flat:", x_train_flat.shape, "y_train_flat", y_train_flat.shape)
pca_train_features = pca.fit_transform(x_train_flat, y_train_flat)
print("pca_train_features:", pca_train_features.shape)
# Sample 10% of the PCA results
_idxs = _prng.randint(y_train_flat.shape[0], size=y_train_flat.shape[0] // 10)
pca_features = pca_train_features[_idxs]
pca_category = y_train_flat[_idxs]
print("pca_features:", pca_features.shape,
"pca_category", pca_category.shape,
"min,max category:", pca_category.min(), pca_category.max())
def pca_components_plot(components_, shape_=(32, 32, 3)):
fig = plt.figure(figsize=(16,8))
for i in range(min(40, components_.shape[0])):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
        eigen_vect = (components_[i] - np.min(components_[i])) / np.ptp(components_[i])  # use the passed-in components, not the global pca
plt.title('component: %s' % i)
plt.imshow(eigen_vect.reshape(shape_), cmap=plt.get_cmap('gray'))
plt.show()
###Output
_____no_output_____
###Markdown
This plot shows the new eigenvector basis functions suggested by the PCA analysis. Any image in our dataset can be approximated as a linear combination of these basis vectors (and reconstructed exactly if all components are kept; here we keep only 40). At a guess, the most prevalent feature of the dataset is that there is something at the centre of the image that is distinct from the background (components 0 & 2), and there is often a difference between 'land' & 'sky' (component 1) – compare with the sample images shown previously. (A reconstruction sketch follows the plot below.)
###Code
pca_components_plot(pca.components_)
###Output
_____no_output_____
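###Markdown
To make the 'linear combination of basis vectors' idea concrete, the sketch below reconstructs one training image from only its 40 PCA coefficients using `inverse_transform` (the sample index is arbitrary); the reconstruction is necessarily blurry because most components were discarded.
###Code
# Reconstruct one training image from its 40 PCA coefficients (a lossy approximation)
idx = 0                                             # arbitrary sample index
coeffs = pca_train_features[idx:idx + 1]            # shape (1, 40)
approx = pca.inverse_transform(coeffs).reshape(32, 32, 3)

fig = plt.figure(figsize=(4, 2))
plt.subplot(1, 2, 1); plt.xticks([]); plt.yticks([])
plt.title('original'); plt.imshow(x_train[idx])
plt.subplot(1, 2, 2); plt.xticks([]); plt.yticks([])
plt.title('40-component PCA'); plt.imshow(np.clip(approx, 0, 255).astype('uint8'))
plt.show()
###Output
_____no_output_____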
###Markdown
These are 2D and 3D scatter plot functions that colour the points by their labels (so we can see if any 'clumps' of points correspond to actual categories).
###Code
def category_scatter_plot(features, category, title='CIFAR10'):
num_category = 1 + category.max() - category.min()
fig, ax = plt.subplots(1, 1, figsize=(12, 10))
cm = plt.cm.get_cmap('tab10', num_category)
sc = ax.scatter(features[:,0], features[:,1], c=category, alpha=0.4, cmap=cm)
ax.set_xlabel("Component 1")
ax.set_ylabel("Component 2")
ax.set_title(title)
plt.colorbar(sc)
plt.show()
from mpl_toolkits.mplot3d import Axes3D
def category_scatter3d_plot(features, category, title='CIFAR10'):
num_category = 1 + category.max() - category.min()
mean_feat = np.mean(features, axis=0)
std_feat = np.std(features, axis=0)
min_range = mean_feat - std_feat
max_range = mean_feat + std_feat
fig = plt.figure(figsize=(12, 10))
cm = plt.cm.get_cmap('tab10', num_category)
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(features[:,0], features[:,1], features[:,2],
c=category, alpha=0.85, cmap=cm)
ax.set_xlabel("Component 1")
ax.set_ylabel("Component 2")
ax.set_zlabel("Component 3")
ax.set_title(title)
ax.set_xlim(2.0 * min_range[0], 2.0 * max_range[0])
ax.set_ylim(2.0 * min_range[1], 2.0 * max_range[1])
ax.set_zlim(2.0 * min_range[2], 2.0 * max_range[2])
plt.colorbar(sc)
plt.show()
category_scatter_plot(pca_features, pca_category, title='CIFAR10 - PCA')
###Output
_____no_output_____
###Markdown
**Note:** 3D PCA plot works best with `%matplotlib notebook` to enable interactive rotation (enabled at start of session).
###Code
category_scatter3d_plot(pca_features, pca_category, title='CIFAR10 - PCA')
###Output
_____no_output_____
###Markdown
The data in its original image space does not appear to cluster into the corresponding categories.

Visualizing training samples using t-SNE

[t-distributed Stochastic Neighbor Embedding (t-SNE)](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html#sklearn.manifold.TSNE) is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. For more details on t-SNE, including other use cases, see this excellent *Towards Data Science* [blog post](https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1).

Informally, t-SNE preserves the local neighbourhood of data points to help uncover the manifold on which the data lies. For example, a flat piece of paper with two coloured (e.g., red and blue) regions would be a simple manifold to characterize in 3D space; but, if the paper is crumpled up, it becomes very hard to characterize in the original 3D space (blue and red regions could be very close in that representational space) – instead, by following the crumpled paper (the manifold) we would recover the fact that the blue and red regions are really very distant, and not nearby at all.

It is highly recommended to use another dimensionality reduction method (e.g. PCA) to reduce the number of dimensions to a reasonable amount if the number of features is very high. This suppresses some noise and speeds up the computation of pairwise distances between samples.

* [An Introduction to t-SNE with Python Example](https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1)
###Code
import sklearn
import sklearn.decomposition
import sklearn.pipeline
import sklearn.manifold
_prng = np.random.RandomState(42)
embedding2_pipeline = sklearn.pipeline.make_pipeline(
sklearn.decomposition.PCA(n_components=0.95, random_state=_prng),
sklearn.manifold.TSNE(n_components=2, random_state=_prng))
embedding3_pipeline = sklearn.pipeline.make_pipeline(
sklearn.decomposition.PCA(n_components=0.95, random_state=_prng),
sklearn.manifold.TSNE(n_components=3, random_state=_prng))
# Sample 10% of the data
_prng = np.random.RandomState(42)
_idxs = _prng.randint(y_train_flat.shape[0], size=y_train_flat.shape[0] // 10)
tsne_features = x_train_flat[_idxs]
tsne_category = y_train_flat[_idxs]
print("tsne_features:", tsne_features.shape,
"tsne_category", tsne_category.shape,
"min,max category:", tsne_category.min(), tsne_category.max())
# t-SNE is SLOW (but can be GPU accelerated!);
# lengthy operation, be prepared to wait...
transform2_tsne_features = embedding2_pipeline.fit_transform(tsne_features)
print("transform2_tsne_features:", transform2_tsne_features.shape)
for i in range(2):
print("min,max features[%s]:" % i,
transform2_tsne_features[:,i].min(),
transform2_tsne_features[:,i].max())
category_scatter_plot(transform2_tsne_features, tsne_category, title='CIFAR10 - t-SNE')
###Output
_____no_output_____
###Markdown
**Note:** Skip this step during the tutorial, it will take too long to complete.
###Code
# t-SNE is SLOW (but can be GPU accelerated!);
# extremely lengthy operation, be prepared to wait... and wait...
transform3_tsne_features = embedding3_pipeline.fit_transform(tsne_features)
print("transform3_tsne_features:", transform3_tsne_features.shape)
for i in range(3):
print("min,max features[%s]:" % i,
transform3_tsne_features[:,i].min(),
transform3_tsne_features[:,i].max())
category_scatter3d_plot(transform3_tsne_features, tsne_category, title='CIFAR10 - t-SNE')
###Output
_____no_output_____
###Markdown
t-SNE relates the data points (images) according to their closest neighbours. Hints of underlying categories appear, but they are not cleanly separable into the original categories.

Data Conversion

The data type of the training data is `uint8`, while the input type for the network will be `float32`, so the data must be converted. Also, the labels need to be categorical, or *one-hot encoded*, as discussed previously. Keras provides a utility function to convert labels to categories (`to_categorical`), and `numpy` makes it easy to perform operations (such as scaling pixel values into the 0-1 range) over entire arrays.

* https://keras.io/examples/cifar10_cnn/
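To see exactly what *one-hot encoding* produces, here is a small stand-alone illustration; the labels in this cell are made up purely for the example.
###Code
# Illustration: one-hot encoding a few made-up labels with 10 classes
example_labels = [3, 0, 9]
print(keras.utils.to_categorical(example_labels, 10))
# Each row contains a single 1 at the index of the original label, e.g. label 3 ->
# [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
###Output
_____no_output_____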
###Code
num_classes = (y_train.max() - y_train.min()) + 1
print('num_classes =', num_classes)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
train_data = (x_train, y_train)
test_data = (x_test, y_test)
###Output
_____no_output_____
###Markdown
After the data conversion, notice that the datatypes are `float32`, the input `x` data shapes are the same; but, the `y` classification labels are now 10-dimensional, instead of scalar.
###Code
print('x_train type:', type(x_train))
print('x_train dtype:', x_train.dtype)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('y_train type:', type(y_train))
print('y_train dtype:', y_train.dtype)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)
###Output
_____no_output_____
###Markdown
Acquire Pre-Trained Network

Download an *ImageNet* pretrained VGG16 network[1](#fn1), sans classification layer, shaped for 32x32px colour images[*](https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5) (the smallest supported size). This image-feature detection network is an example of a deep CNN (Convolutional Neural Network).

**Note:** The network must be kept fixed (non-trainable) – it was already trained on a very large dataset, so training it on our smaller dataset would cause it to un-learn valuable generic features.

[1] *Very Deep Convolutional Networks for Large-Scale Image Recognition* by Karen Simonyan and Andrew Zisserman, [arXiv (2014)](https://arxiv.org/abs/1409.1556).
###Code
cache_models()
from tensorflow.keras.applications import VGG16
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
keras.utils.plot_model(conv_base)
conv_base.summary()
###Output
_____no_output_____
###Markdown
The summary shows the layers, starting from the InputLayer and proceeding through Conv2D convolutional layers, which are then collected at MaxPooling2D layers.

A convolutional kernel is a small matrix that looks for a specific, localized pattern in its inputs. This pattern is called a `feature`. The kernel is applied at each location on the input image, and the output is another image – a feature image – that represents the strength of that feature at each location. Because the inputs to convolution are images, and the outputs are also images – but transformed into a different feature space – it is possible to stack many convolutional layers on top of each other.

A feature image can be reduced in size with a MaxPooling2D layer. This layer 'pools' an `MxN` region into a single value, taking the largest value from the 'pool'. The 'Max' in 'MaxPooling' keeps the *best* evidence for that feature found in the original region; a small worked example appears a couple of cells below.

The InputLayer shape and data type should match the input data.

*Note:* The first dimension of the shape will differ; the input layer has `None` to indicate it accepts a batch-sized collection of arrays of the remaining shape. The input data shape indicates, in that first axis, how many samples it contains.
###Code
print("input layer shape:", conv_base.layers[0].input.shape)
print("input layer dtype:", conv_base.layers[0].input.dtype)
print("input layer type:", type(conv_base.layers[0].input))
print("input data shape:", x_train.shape)
print("input data dtype:", x_train.dtype)
print("input data type:", type(x_train))
###Output
_____no_output_____
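###Markdown
A tiny NumPy sketch (toy numbers, not part of the tutorial's pipeline) of what a 2x2 MaxPooling operation does to a 4x4 feature image:
###Code
# Toy example: 2x2 max pooling over a 4x4 feature image using plain NumPy
feature = np.array([[1, 3, 2, 0],
                    [4, 2, 1, 1],
                    [0, 1, 5, 6],
                    [2, 2, 7, 1]])
# reshape splits the array into 2x2 blocks; max over the within-block axes keeps
# the strongest response in each block
pooled = feature.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
# [[4 2]
#  [2 7]]
###Output
_____no_output_____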
###Markdown
Explore Convolutional Layers

The following are visualization functions (and helpers) for understanding what the convolutional layers in a network have learned.

We may ask questions about each convolutional kernel in a convolutional layer:

* What local features is the kernel looking for: `visualize_conv_layer_weights`
* For a given input image, what feature image will the kernel produce: `visualize_conv_layer_output`
* What input image makes the kernel respond most strongly: `visualize_conv_layer_response`
###Code
def cifar10_image_plot(img_data=None, image_index=None):
(x_imgs, y_imgs) = img_data if img_data else (x_train, y_train)
if not image_index:
image_index = int(random.uniform(0, x_imgs.shape[0]))
plt.imshow(x_imgs[image_index], cmap='gray')
plt.title("%s" % cifar10_label(y_imgs[image_index]))
plt.xlabel("#%s" % image_index)
plt.show()
return image_index
def get_model_layer(model, layer_name):
if type(layer_name) == str:
layer = model.get_layer(layer_name)
else:
m = model
for ln in layer_name:
model = m
m = m.get_layer(ln)
layer = m
return (model, layer)
def visualize_conv_layer_weights(model, layer_name):
(model, layer) = get_model_layer(model, layer_name)
layer_weights = layer.weights[0]
max_size = layer_weights.shape[3]
col_size = 12
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_weights.shape,
layer_weights.shape[0], layer_weights.shape[1],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
ax[row][col].imshow(layer_weights[:, :, 0, idx], cmap='gray')
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
def visualize_conv_layer_output(model, layer_name, image_index=None):
(model, layer) = get_model_layer(model, layer_name)
layer_output = layer.output
if not image_index:
image_index = cifar10_image_plot()
intermediate_model = keras.models.Model(inputs = model.input, outputs=layer_output)
intermediate_prediction = intermediate_model.predict(x_train[image_index].reshape(1,32,32,3))
max_size = layer_output.shape[3]
col_size = 10
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_output.shape,
layer_output.shape[1], layer_output.shape[2],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
ax[row][col].imshow(intermediate_prediction[0, :, :, idx], cmap='gray')
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
from tensorflow.keras import backend as K
def process_image(x):
epsilon = 1e-5
    # Normalize the tensor: center on 0, set std to 0.1, clip to [0, 1], then rescale to [0, 255] uint8
x -= x.mean()
x /= (x.std() + epsilon)
x *= 0.1
x += 0.5
x = np.clip(x, 0, 1)
x *= 255
x = np.clip(x, 0, 255).astype('uint8')
return x
def generate_response_pattern(model, conv_layer_output, filter_index=0):
#step_size = 1.0
epsilon = 1e-5
img_tensor = tf.Variable(tf.random.uniform((1, 32, 32, 3)) * 20 + 128.0, trainable=True)
response_model = keras.models.Model([model.inputs], [conv_layer_output])
for i in range(40):
with tf.GradientTape() as gtape:
layer_output = response_model(img_tensor)
loss = K.mean(layer_output[0, :, :, filter_index])
grads = gtape.gradient(loss, img_tensor)
grads /= (K.sqrt(K.mean(K.square(grads))) + epsilon)
img_tensor = tf.Variable(tf.add(img_tensor, grads))
img = np.array(img_tensor[0])
return process_image(img)
def visualize_conv_layer_response(model, layer_name):
(model, layer) = get_model_layer(model, layer_name)
layer_output = layer.output
max_size = layer_output.shape[3]
col_size = 12
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_output.shape,
layer_output.shape[1], layer_output.shape[2],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
img = generate_response_pattern(model, layer_output, idx)
ax[row][col].imshow(img, cmap='gray')
ax[row][col].set_title("%s" % idx)
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the first 4 convolution layers, we see that:

* All the kernels are 3x3 (i.e., 9 elements each)
* Layers 1 & 2 have 64 kernels each (64 different possible features)
* Layers 3 & 4 have 128 kernels each (128 different possible features)
* Light pixels indicate a preference for an activated pixel
* Dark pixels indicate a preference for an inactive pixel
* The kernels seem to represent edges and lines at various angles
###Code
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_weights(conv_base, n)
###Output
_____no_output_____
###Markdown
For the given input image, show the corresponding feature images. At the lower layers (e.g., the first Conv2D layer), the feature images seem to capture concepts like 'edges' or perhaps 'solid colour'.

At higher layers, the size of the feature images decreases because of the MaxPooling. They also appear more abstract – harder to visually recognize than the original image – however, the features remain spatially related to the original image (e.g., if there is a white/high value in the lower-left corner of a feature image, then somewhere in the lower-left corner of the original image there exist pixels that the network is confident represent the feature in question).
###Code
image_index = cifar10_image_plot()
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:7]:
visualize_conv_layer_output(conv_base, n, image_index)
###Output
_____no_output_____
###Markdown
This plot shows which input images cause the greatest response from the convolution kernels. At the lower layers, we see many simple 'wave' textures, showing that these kernels like to see edges at particular angles. At the lower-middle layers, the patterns show larger scale and more complexity (like dots and curves), but still plenty of angled edges.
###Code
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_response(conv_base, n)
###Output
_____no_output_____
###Markdown
The patterns in the higher levels can get even more complex; but some of them don't seem to encode anything but noise. Maybe these could be pruned to make a smaller network...

**Note:** Skip this step during the tutorial, it will take too long to complete.
###Code
# NOTE: Visualize mid to higher level convolutional layers;
# lengthy operation, be prepared to wait...
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][4:]:
visualize_conv_layer_response(conv_base, n)
###Output
_____no_output_____
###Markdown
CNN Base + Classifier Model

Create a simple model that has the pre-trained CNN (Convolutional Neural Network) as a base, and adds a basic classifier on top.

The new layer types are Flatten, Dense, Dropout, and Activation.

The Flatten layer reshapes the input dimensions (2D + channels) into a single dimension.

The Dense(x) layer is a layer of `x` neurons (represented as a flat 1D array) connected to a flat input. The sizes of the input and output do not need to match.

The Dropout(x) layer withholds a random fraction `x` of the input neurons from training during each batch of data. This limits the ability of the network to `overfit` on the training data (i.e., memorize the training data rather than learn generalizable rules).

Activation is an essential part of (or addition to) each layer. Layers like Dense are simply linear functions (weighted sums + a bias). Without a non-linear component, the network could not learn a non-linear function. Activations like 'relu' (Rectified Linear Unit), 'tanh', or 'sigmoid' are functions that introduce a non-linearity. They also clamp output values within known ranges.

The 'softmax' activation is used to produce a probability distribution over multiple categories; a small numeric sketch follows below.

This example uses the Sequential API to build the final network.

* [Activation Functions in Neural Networks](https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6)
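To make the 'softmax' step concrete, here is a small NumPy sketch (with made-up scores) of how raw layer outputs become a probability distribution over the 10 categories:
###Code
# Toy softmax: turn arbitrary scores ("logits") into probabilities that sum to 1
scores = np.array([2.0, 1.0, 0.1, -1.0, 0.0, 0.5, 0.0, 0.0, 0.0, 0.0])
exp_scores = np.exp(scores - scores.max())   # subtract the max for numerical stability
probs = exp_scores / exp_scores.sum()
print(np.round(probs, 3), 'sum =', probs.sum())
###Output
_____no_output_____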
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, Dropout
from tensorflow.keras.applications import VGG16
def create_cnnbase_classifier_model(conv_base=None):
if not conv_base:
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
return model
###Output
_____no_output_____
###Markdown
Create our model *model_transfer_cnn* by calling the creation function *create_cnnbase_classifier_model* above.

Notice the split of total parameters (\~15 million) between trainable (\~0.3 million for our classifier) and non-trainable (\~14.7 million for the pre-trained CNN).

Note also that the final Dense layer squeezes the network down to the number of categories.
###Code
model_transfer_cnn = create_cnnbase_classifier_model(conv_base)
model_transfer_cnn.summary()
###Output
_____no_output_____
###Markdown
Train Model

Training a model typically involves setting relevant hyperparameters that control aspects of the training process. Common hyperparameters include:

* `epochs`: The number of training passes through the entire dataset. The number of epochs depends upon the complexity of the dataset, and how effectively the network architecture of the model can learn it. If the value is too small, the model accuracy will be low. If the value is too big, training will take too long for no additional benefit, as the model accuracy will plateau.
* `batch_size`: The number of samples to train on during each step. The number should be set so that the GPU memory and compute are well utilized. The `learning_rate` needs to be set accordingly.
* `learning_rate`: The step size used to update model weights during the training update phase (backpropagation). Too small, and learning takes too long. Too large, and we may step over the minima we are trying to find. The learning rate can be increased as the batch size increases (with some caveats), on the assumption that with more data in a larger batch, the error gradient will be more accurate, so we can take a larger step.
* `decay`: Used by some optimizers to decrease the `learning_rate` over time, on the assumption that as we get closer to our goal, we should focus on smaller refinement steps.
###Code
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
###Output
_____no_output_____
###Markdown
The model needs to be compiled prior to use. This step enables the model to train efficiently on the GPU device. It also specifies the loss function, accuracy metrics, learning strategy (optimizer), and more.

Our `loss` is *categorical_crossentropy* because we are doing multi-category classification; a small hand-worked example follows below.

We use an RMSprop optimizer, which is a variant of standard gradient descent optimizers that also includes momentum. Momentum is used to speed up learning in directions where training has been making more progress.

* [A Look at Gradient Descent and RMSprop Optimizers](https://towardsdatascience.com/a-look-at-gradient-descent-and-rmsprop-optimizers-f77d483ef08b)
* [Understanding RMSprop — faster neural network learning](https://towardsdatascience.com/understanding-rmsprop-faster-neural-network-learning-62e116fcf29a)
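As a sanity check on what *categorical_crossentropy* measures, here is a hand computation (made-up numbers) for a single one-hot label and two candidate predictions:
###Code
# Toy categorical cross-entropy: -sum(y_true * log(y_pred)) for a single sample
y_true = np.array([0., 0., 1., 0., 0., 0., 0., 0., 0., 0.])   # true class is index 2
confident = np.array([0.01, 0.01, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.02, 0.01])
unsure    = np.array([0.10, 0.10, 0.15, 0.10, 0.10, 0.10, 0.10, 0.10, 0.10, 0.05])
for name, y_pred in [('confident', confident), ('unsure', unsure)]:
    loss = -np.sum(y_true * np.log(y_pred))
    print(name, 'loss = %.3f' % loss)
# The confident (and correct) prediction receives the lower loss.
###Output
_____no_output_____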
###Code
from tensorflow.keras.optimizers import RMSprop
model_transfer_cnn.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The model `fit` function trains the network, and returns a history of training and testing accuracy.

*Note:* Because we already have a test dataset, and we are not tuning our hyperparameters against it, we will use the test dataset for validation. We could also have reserved a fraction of the training data for validation, as sketched in the next cell.
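For reference only, a sketch of that alternative: Keras `fit` can hold out a fraction of the training data itself via the `validation_split` argument. The cell below is illustrative (left commented out) and is not meant to replace the training cell that follows.
###Code
# Illustrative alternative (do not run in place of the cell below):
# let Keras hold out the last 10% of the training data for validation.
# history = model_transfer_cnn.fit(x_train, y_train,
#                                  batch_size=batch_size,
#                                  epochs=epochs,
#                                  validation_split=0.1,
#                                  shuffle=True)
###Output
_____no_output_____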
###Code
%%time
history = model_transfer_cnn.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
###Markdown
Evaluate Model

Visualize accuracy and loss for training and validation.

* https://keras.io/visualization/
###Code
def history_plot(history):
fig = plt.figure(figsize=(12,5))
plt.title('Model accuracy & loss')
# Plot training & validation accuracy values
ax1 = fig.add_subplot()
#ax1.set_ylim(0, 1.1 * max(history.history['loss']+history.history['val_loss']))
ax1.set_prop_cycle(color=['green', 'red'])
p1 = ax1.plot(history.history['loss'], label='Train Loss')
p2 = ax1.plot(history.history['val_loss'], label='Test Loss')
# Plot training & validation loss values
ax2 = ax1.twinx()
ax2.set_ylim(0, 1.1 * max(history.history['accuracy']+history.history['val_accuracy']))
ax2.set_prop_cycle(color=['blue', 'orange'])
p3 = ax2.plot(history.history['accuracy'], label='Train Acc')
p4 = ax2.plot(history.history['val_accuracy'], label='Test Acc')
ax1.set_ylabel('Loss')
ax1.set_xlabel('Epoch')
ax2.set_ylabel('Accuracy')
pz = p3 + p4 + p1 + p2
plt.legend(pz, [l.get_label() for l in pz], loc='center right')
plt.show()
###Output
_____no_output_____
###Markdown
The history plot shows characteristic features of training performance over successive epochs. Accuracy and loss are related, in that a reduction in loss produces an increase in accuracy. The graph shows characteristic arcs for training and testing accuracy / loss over training time (epochs).

The primary measure to improve is *testing accuracy*, because it indicates how well the model generalizes to data it has not seen during training.

The accuracy curves show that testing accuracy has plateaued (with some variability), while training accuracy keeps increasing (at a slowing rate). The gap between training and testing accuracy shows overfitting of the model (i.e., the model can memorize what it has seen better than it can generalize the classification rules).

We would like a model that *can* overfit (otherwise it might not be large enough to capture the complexity of the data domain), but doesn't. Such a model is then trained only until *test accuracy* peaks; a callback sketch for doing this automatically appears after the next cell.

Could the model 100% overfit the data? The graph doesn't answer definitively yet, but training accuracy seems to be slowing, while training loss is still decreasing (with lots of room to improve – the loss axis does not start at zero).

*Note:* The model contains Dropout layers to help prevent overfitting. What happens to training and testing accuracy when those layers are removed?
###Code
history_plot(history)
# Score trained model.
scores = model_transfer_cnn.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
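###Markdown
One common way to act on 'train only until test accuracy peaks' is an early-stopping callback. The cell below is a hedged sketch (not used in the runs above) of how it could be wired into `fit`.
###Code
# Sketch (not used above): stop training when validation accuracy stops improving
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_accuracy', patience=5,
                           restore_best_weights=True)
# It would be passed to training via, e.g.:
# model_transfer_cnn.fit(..., validation_data=(x_test, y_test), callbacks=[early_stop])
###Output
_____no_output_____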
###Markdown
The following prediction plot functions provide insight into aspects of model prediction.
###Code
def prediction_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(16,8))
correct = 0
total = 0
rSym = ''
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict(x_test[idx:idx+1])[0]
if y_test is not None:
rCorrect = True if cifar10_label(y_test[idx]) == cifar10_label(result) else False
rSym = '✔' if rCorrect else '✘'
correct += 1 if rCorrect else 0
total += 1
plt.title("%s %s" % (rSym, cifar10_label(result)))
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
plt.show()
if y_test is not None:
print("% 3.2f%% correct (%s/%s)" % (100.0 * float(correct) / float(total), correct, total))
def prediction_classes_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(16,8))
correct = 0
total = 0
rSym = ''
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict_classes(x_test[idx:idx+1])[0]
if y_test is not None:
rCorrect = True if cifar10_label(y_test[idx]) == cifar10_label(result) else False
rSym = '✔' if rCorrect else '✘'
correct += 1 if rCorrect else 0
total += 1
plt.title("%s %s" % (rSym, cifar10_label(result)))
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
plt.show()
if y_test is not None:
print("% 3.2f%% correct (%s/%s)" % (100.0 * float(correct) / float(total), correct, total))
def prediction_proba_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(15,15))
for i in range(10):
plt.subplot(10, 2, (2*i) + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict_proba(x_test[idx:idx+1])[0] * 100 # prob -> percent
if y_test is not None:
plt.title("%s" % cifar10_label(y_test[idx]))
plt.xlabel("#%s" % idx)
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
ax = plt.subplot(10, 2, (2*i) + 2)
plt.bar(np.arange(len(result)), result, label='%')
        plt.xticks(range(0, len(result)))  # one tick per category so the labels below line up
ax.set_xticklabels(cifar10_label_names)
plt.title("classifier probabilities")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
* *Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization* by Ramprasaath Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra, [arXiv (2016)](https://arxiv.org/abs/1610.02391)
* https://jacobgil.github.io/deeplearning/class-activation-maps
* https://keras.io/examples/vision/grad_cam/
###Code
from tensorflow.keras import backend as K
def generate_activation_pattern(model, conv_layer_output, category_idx, image):
epsilon = 1e-10
activation_model = keras.models.Model([model.inputs], [conv_layer_output, model.output])
with tf.GradientTape() as gtape:
conv_output, prediction = activation_model(image)
category_output = prediction[:, category_idx]
grads = gtape.gradient(category_output, conv_output)
pooled_grads = K.mean(grads, axis=(0, 1, 2))
heatmap = tf.reduce_mean(tf.multiply(pooled_grads, conv_output), axis=-1) * -1.
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap) + epsilon
return(heatmap)
def activation_plot(model, layer_name, image_data, image_index=None):
(layer_model, conv_layer) = get_model_layer(model, layer_name)
(x_imgs, y_cat) = image_data
if not image_index:
image_index = int(random.uniform(0, x_imgs.shape[0]))
image = x_imgs[image_index:image_index+1]
fig = plt.figure(figsize=(16,8))
plt.subplot(1, num_classes + 2, 1)
plt.xticks([])
plt.yticks([])
plt.title(cifar10_label(y_cat[image_index]))
plt.xlabel("#%s" % image_index)
plt.imshow(image.reshape(32, 32, 3))
result = model.predict(image)[0]
for i in range(num_classes):
activation = generate_activation_pattern(model, conv_layer.output, i, image)
activation = np.copy(activation)
plt.subplot(1, num_classes + 2, i + 2)
plt.xticks([])
plt.yticks([])
plt.title(cifar10_label(i))
plt.xlabel("(% 3.2f%%)" % (result[i] * 100.0))
plt.imshow(activation[0])
plt.show()
###Output
_____no_output_____
###Markdown
This plot shows what the model thinks is the most likely class for each image.
###Code
prediction_classes_plot(model_transfer_cnn, (x_test, y_test))
###Output
_____no_output_____
###Markdown
This plot shows the probabilities that the model assigns to each category class, and provides a sense of how confident the network is with its classifications.
###Code
prediction_proba_plot(model_transfer_cnn, (x_test, y_test))
# TODO: Complete activation plot
#activation_plot(model_transfer_cnn, ('vgg16', 'block5_conv3'), (x_test, y_test), 1)
###Output
_____no_output_____
###Markdown
CNN Classifier Model

Create a basic CNN (Convolutional Neural Network) based classifier from scratch.

We have encountered Conv2D and MaxPooling2D layers previously, but here we see how they are declared. Conv2D layers specify the number of convolution kernels and their shape. MaxPooling2D layers specify the size of each pool (i.e., the scaling factors).

Notice the total number of parameters (\~1.25 million) in this smaller network.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, Dropout, Conv2D, MaxPooling2D
def create_cnn_classifier_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
return model
model_simple_cnn = create_cnn_classifier_model()
model_simple_cnn.summary()
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
from tensorflow.keras.optimizers import RMSprop
model_simple_cnn.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
%%time
history = model_simple_cnn.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
###Markdown
The notable features of the history plot for this model are:

* training accuracy is ~10 percentage points better than in the previous model,
* test accuracy more closely tracks training accuracy, and
* test accuracy shows more variability.
###Code
history_plot(history)
# Score trained model.
scores = model_simple_cnn.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
prediction_classes_plot(model_simple_cnn, (x_test, y_test))
prediction_proba_plot(model_simple_cnn, (x_test, y_test))
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_weights(model_simple_cnn, n)
image_index = cifar10_image_plot()
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)]:
visualize_conv_layer_output(model_simple_cnn, n, image_index)
###Output
_____no_output_____
###Markdown
Interesting aspects of the convolutional layer response for our *model_simple_cnn* model:

* There are fewer Conv2D layers in this simple model.
* Compared to the pre-trained VGG16 convolutional base network:
  * even the later layers look like the first edge-detection kernels, and
  * there are no layers with higher-level features.
###Code
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_response(model_simple_cnn, n)
###Output
_____no_output_____
###Markdown
This plot shows which pixels of the original image contributed the most 'confidence' to each classification category.

The technique is better suited to larger images, where the object of interest might be anywhere inside the image.
###Code
n = [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][-1]
print(n)
for i in range(5):
activation_plot(model_simple_cnn, n, (x_test, y_test))
###Output
_____no_output_____
###Markdown
Combined Models

Keras supports a functional interface to take network architectures beyond simply sequential networks.

The new layer types are Input and Concatenate; and there is an explicit Model class.

The Input layer is a special layer denoting sources of input from training batches.

The Concatenate layer combines multiple inputs (along an axis with the same size) and creates a larger layer incorporating all the input values.

Model construction is also different. Instead of using a `Sequential` model and `add`ing layers to it:

* an explicit Input layer is created,
* we pass inputs into the layers explicitly,
* the output from a layer becomes input for arbitrary other layers, and finally,
* a Model object is created with the source Input layer as inputs and the outputs from the final layer.

We'll demonstrate by creating a new network which combines the two CNN classifier networks we created previously.

*Note:* Network models provided as arguments are changed to be non-trainable (the assumption is that they were already trained).
###Code
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Concatenate, Flatten, Dense, Activation, Dropout
from tensorflow.keras.optimizers import RMSprop
def create_combined_classifier_model(trained_model1=None, trained_model2=None):
if trained_model1:
network1 = trained_model1
network1.trainable = False
else:
network1 = create_cnnbase_classifier_model()
if trained_model2:
network2 = trained_model2
network2.trainable = False
else:
network2 = create_cnn_classifier_model()
inputs = Input(shape=(32,32,3), name='cifar10_image')
c1 = network1(inputs)
c2 = network2(inputs)
c = Concatenate()([c1, c2])
x = Dense(512)(c)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(num_classes)(x)
outputs = Activation('softmax')(x)
model = Model(inputs=inputs, outputs=outputs, name='combined_cnn_classifier')
return model
###Output
_____no_output_____
###Markdown
Combining Pre-Trained Models

This version of the combined classifier uses both of the trained networks we created previously.

Notice that the number of trainable parameters (~16,000) is very small. How will this affect training?
###Code
model_combined = create_combined_classifier_model(model_transfer_cnn, model_simple_cnn)
model_combined.summary()
###Output
_____no_output_____
###Markdown
This plot shows a graph representation of the layer connections. Notice how a single input feeds the previously created Sequential networks, their outputs are combined via Concatenate, and then a classifier network is added on top.
###Code
keras.utils.plot_model(model_combined)
###Output
_____no_output_____
###Markdown
Reduce the number of `epochs` because this network is mostly trained already (except for the final classifier), and there are few trainable parameters.
###Code
batch_size = 128 #32
epochs = 5 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_combined.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
%%time
history = model_combined.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
###Markdown
It looks like everything we needed to learn was learned in a single epoch.
###Code
history_plot(history)
###Output
_____no_output_____
###Markdown
Here is an interesting, possibly counter-intuitive, result: combining two weaker networks can create a stronger one.

The reason is that a weakness in one model might be a strength in the other (each has 'knowledge' that the other doesn't); we just need a layer that learns when to trust each model. What is happening here at the scale of layers and models is the same thing that happens at the lower level of the individual neurons themselves.
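A related, even simpler way to combine the two already-trained models is to average their predicted probabilities, with no extra training at all. The quick sketch below computes the accuracy of that average, for comparison with the trained combination evaluated next.
###Code
# Sketch: ensemble by averaging predicted class probabilities (no extra training)
p1 = model_transfer_cnn.predict(x_test, batch_size=256)
p2 = model_simple_cnn.predict(x_test, batch_size=256)
p_avg = (p1 + p2) / 2.0
avg_acc = np.mean(np.argmax(p_avg, axis=1) == np.argmax(y_test, axis=1))
print('averaged-ensemble test accuracy: %.4f' % avg_acc)
###Output
_____no_output_____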
###Code
# Score trained model.
scores = model_combined.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# NOTE: Sequential Model provides `predict_classes` or `predict_proba`
# Functional API Model does not; because it may have multiple outputs
# Using simple `predict` plot instead
prediction_plot(model_combined, (x_test, y_test))
###Output
_____no_output_____
###Markdown
The combine model improves accuracy by 0.5-2% (with TF 2.0), and takes 1/5th of the time to train. Training Combining ModelsThis version of the combined classifier uses both network architectures seen previously; except, in this version, the models need to be trained from scratch. The following cells repeat the previous experiments with this combined classifier.*Spoiler:* The combined network doesn't perform any better than the partially trained one did, but takes much longer to train (more epochs).**Note:** Skip this step during the tutorial, it will cause unecessary delay.
###Code
%%time
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_combined = create_combined_classifier_model()
model_combined.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_combined.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
history_plot(history)
# Score trained model.
scores = model_combined.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
Skip Connections

From previous comparisons of the `visualize_conv_layer_response` plots of the two basic CNN models, it becomes apparent that the pre-trained VGG16 network contains more complex *knowledge* about images: there were more convolutional layers, with a greater variety of patterns and features they could represent.

In the previous cnnbase_classifier model `model_transfer_cnn`, only the last Conv2D layer fed directly into the classifier, so the feature information contained in the middle layers wasn't directly available to the classifier.

Skip connections are a way to bring lower-level feature encodings directly to higher levels of the network. They are also useful when training very deep networks, to deal with the problem of *vanishing gradients*; a minimal residual-block sketch follows below.

In the following example, the original CNN base of the pre-trained VGG16 model is decomposed into layered groups, and a new network is created that feeds these intermediate layers to the top of the network, where they are concatenated together to perform the final classification.

* https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33
* https://arxiv.org/abs/1608.04117
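For contrast with the concatenation-based approach built below, the classic ResNet-style skip connection simply *adds* a layer's input back onto its output. Here is a minimal sketch (a standalone toy model, not part of the tutorial pipeline); the layer sizes are arbitrary choices for illustration.
###Code
# Minimal ResNet-style residual block (toy model, for illustration only)
from tensorflow.keras.layers import Input, Conv2D, Activation, Add
from tensorflow.keras.models import Model

inp = Input(shape=(32, 32, 3))
x = Conv2D(16, (3, 3), padding='same')(inp)
x = Activation('relu')(x)
x = Conv2D(3, (3, 3), padding='same')(x)    # back to 3 channels so the shapes match
out = Add()([inp, x])                       # the skip connection: input added to output
toy_residual = Model(inputs=inp, outputs=out, name='toy_residual_block')
toy_residual.summary()
###Output
_____no_output_____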
###Code
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Input, Concatenate, Flatten, Dense, Activation, Dropout
from tensorflow.keras.applications import VGG16
from tensorflow.keras.optimizers import RMSprop
def create_cnnbase_skipconnected_classifier_model(conv_base=None):
if not conv_base:
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
# Split conv_base into groups of CNN layers topped by a MaxPooling2D layer
cb_idxs = [i for (i,l) in enumerate(conv_base.layers) if isinstance(l, keras.layers.MaxPooling2D)]
all_idxs = [-1] + cb_idxs
idx_pairs = [l for l in zip(all_idxs, cb_idxs)]
cb_layers = [conv_base.layers[i+1:j+1] for (i,j) in idx_pairs]
# Dense Pre-Classifier Layers creation function - used repeatedly at multiple network locations
def dense_classes(l):
x = Dense(512)(l)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(num_classes)(x)
return x
inputs = Input(shape=(32,32,3), name='cifar10_image')
# Join split groups into a sequence, but keep track of their outputs to create skip connections
skips = []
inz = inputs
for lz in cb_layers:
m = Sequential()
m.trainable = False
for ls in lz:
m.add(ls)
# inz is the output of model m, but the input for next layer group
inz = m(inz)
skips += [inz]
# Flatten all outputs (which had different dimensions) to Concatenate them on a common axis
flats = [dense_classes(Flatten()(l)) for l in skips]
c = Concatenate()(flats)
x = dense_classes(c)
outputs = Activation('softmax')(x)
model = Model(inputs=inputs, outputs=outputs)
return model
model_skipconnected = create_cnnbase_skipconnected_classifier_model(conv_base)
model_skipconnected.summary()
keras.utils.plot_model(model_skipconnected)
%%time
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_skipconnected.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_skipconnected.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
history_plot(history)
###Output
_____no_output_____
###Markdown
A significant improvement over the first pre-trained model.
###Code
# Score trained model.
scores = model_skipconnected.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# Using simple `predict` plot because model uses Functional API
prediction_plot(model_skipconnected, (x_test, y_test))
###Output
_____no_output_____
###Markdown
Deep Learning from Pre-Trained Models with Keras

Introduction

ImageNet, an image recognition benchmark dataset*, helped trigger the modern AI explosion. In 2012, the AlexNet architecture (a deep convolutional neural network) rocked the ImageNet benchmark competition, handily beating the next best entrant. By 2014, all the leading competitors were deep learning based. Since then, accuracy scores have continued to improve, eventually surpassing human performance.

In this hands-on tutorial we will build on this pioneering work to create our own neural-network architecture for image recognition. Participants will use the elegant Keras deep learning programming interface to build and train TensorFlow models for image classification tasks on the CIFAR-10 / MNIST datasets*. We will demonstrate the use of transfer learning* (to give our networks a head-start by building on top of existing, ImageNet pre-trained, network layers*), and explore how to improve model performance for standard deep learning pipelines. We will use cloud-based interactive Jupyter notebooks to work through our explorations step-by-step. Once participants have successfully trained their custom model we will show them how to submit their model's predictions to Kaggle for scoring*.

This tutorial aims to prepare participants for the HPC Saudi 2020 Student AI Competition.

Participants are expected to bring their own laptops and sign up for free online cloud services (e.g., Google Colab, Kaggle). They may also need to download free, open-source software prior to arriving for the workshop.

This tutorial assumes some basic knowledge of neural networks. If you're not already familiar with neural networks, then you can learn the basic concepts behind neural networks at [course.fast.ai](https://course.fast.ai/).

* Tutorial materials are derived from:
  * [PyTorch Tutorials](https://github.com/kaust-vislab/pytorch-tutorials) by David Pugh.
  * [What is torch.nn really?](https://pytorch.org/tutorials/beginner/nn_tutorial.html) by Jeremy Howard, Rachel Thomas, Francisco Ingham.
  * [Machine Learning Notebooks](https://github.com/ageron/handson-ml2) (2nd Ed.) by Aurélien Géron.
  * *Deep Learning with Python* by François Chollet.

Jupyter Notebooks

This is a Jupyter Notebook. It provides a simple, cell-based IDE for developing and exploring complex ideas via code, visualizations, and documentation.

A notebook has two primary types of cells: i) `markdown` cells for textual notes and documentation, such as the one you are reading now, and ii) `code` cells, which contain snippets of code (typically *Python*, but also *bash* scripts) that can be executed. The currently selected cell appears within a box. A green box indicates that the cell is editable. Clicking inside a *code* cell makes it selected and editable. Double-click inside *markdown* cells to edit.

Use `Tab` for context-sensitive code-completion assistance when editing Python code in *code* cells. For example, use code assistance after a `.` separator to find available object members. For help documentation, create a new *code* cell, and use commands like `dir(`*module*`)`, `help(`*topic*`)`, `?`*name*, or `??`*function* for a user-provided *module*, *topic*, variable *name*, or *function* name. The magic `?` and `??` commands show documentation / source code in a separate pane.

Clicking on `[Run]` or pressing `Ctrl-Enter` will execute the contents of a cell. A *markdown* cell converts to its display version, and a *code* cell runs the code inside.
To the left of a *code* cell is a small text bracket `In [ ]:`. If the bracket contains an asterisk, e.g., `In [*]:`, that cell is currently executing. Only one cell executes at a time (if multiple cells are *Run*, they are queued up to execute in the order they were run). When a *code* cell finishes executing, the bracket shows an execution count – each *code* cell execution increments the counter and provides a way to determine the order in which cells were executed – e.g., `In [7]` for the seventh cell to complete. The output produced by a *code* cell appears at the bottom of that cell after it executes. The output generated by a code cell includes anything printed during execution (e.g., print statements, or thrown errors) and the final value generated by the cell (i.e., not the intermediate values). The final value is 'pretty printed' by Jupyter.

Typically, notebooks are written to be executed in order, from top to bottom. Behind the scenes, however, each notebook has a single Python state (the `kernel`), and each *code* cell that executes modifies that state. It is possible to modify and re-run earlier cells; however, care must be taken to also re-run any other cells that depend upon the modified one. List the Python state's global variables with the magic command `%whos`. The *kernel* can be restarted to a known state, and cell output cleared, if the Python state becomes too confusing to fix manually (choose `Restart & Clear Output` from the Jupyter `Kernel` menu) – this requires running each *code* cell again.

Complete user documentation is available at [jupyter-notebook.readthedocs.io](https://jupyter-notebook.readthedocs.io/en/stable/notebook.html#notebook-user-interface). Many helpful tips and techniques can be found in [28 Jupyter Notebook Tips, Tricks, and Shortcuts](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/).

Setup

Create a Kaggle Account

1. Register for an account

In order to download Kaggle competition data you will first need to create a [Kaggle](https://www.kaggle.com/) account.

2. Create an API key

Once you have registered for a Kaggle account you will need to create [API credentials](https://github.com/Kaggle/kaggle-api#api-credentials) in order to be able to use the `kaggle` CLI to download data.

* Go to the `Account` tab of your user profile,
* and click `Create New API Token` from the API section.

This generates a `kaggle.json` file (with 'username' and 'key' values) to download.

Setup Colab

In order to run this notebook in [Google Colab](https://colab.research.google.com) you will need a [Google Account](https://accounts.google.com/). Sign in to your Google account, if necessary, and then start the notebook.

Change the Google Colab runtime to use a GPU:

* Click the `Runtime` -> `Change runtime type` menu item
* Specify `Runtime type` as `Python 3`
* Specify `Hardware accelerator` as `GPU`
* Click the **[Save]** button

The session indicator (toolbar / status ribbon under the menu) should briefly appear as `Connecting...`. When the session restarts, continue with the next cell (specifying TensorFlow version 2.x):
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
###Output
_____no_output_____
###Markdown
Download Data

There are two image datasets ([CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) and [MNIST](http://yann.lecun.com/exdb/mnist/index.html)) which these tutorial / exercise notebooks use.

These datasets are available from a variety of sources, including this repository – depending on how the notebook was launched (e.g., Git+LFS / Binder contains the entire repository, Google Colab only contains the notebook).

Because data is the fundamental fuel for deep learning, we need to ensure the required datasets for this tutorial are available to the current notebook session. The following steps ensure the data is already available (or downloaded), and cached where Keras can find it.

Follow the instructions and run the cells below to acquire the required datasets:
###Code
import pathlib
import tensorflow.keras.utils as Kutils
def cache_mnist_data():
for n in ["mnist.npz", "kaggle/train.csv", "kaggle/test.csv"]:
path = pathlib.Path("../datasets/mnist/%s" % n).absolute()
DATA_URL = "file:///" + str(path)
data_file_path = Kutils.get_file(n.replace('/','-mnist-'), DATA_URL)
print("cached file: %s" % n)
def cache_cifar10_data():
for n in ["cifar-10.npz", "cifar-10-batches-py.tar.gz"]:
path = pathlib.Path("../datasets/cifar10/%s" % n).absolute()
DATA_URL = "file:///" + str(path)
if path.is_file():
data_file_path = Kutils.get_file(n, DATA_URL)
print("cached file: %s" % n)
else:
print("FAILED: First fetch file: %s" % n)
def cache_models():
for n in ["vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5"]:
path = pathlib.Path("../models/%s" % n).absolute()
DATA_URL = "file:///" + str(path)
if path.is_file():
data_file_path = Kutils.get_file(n, DATA_URL, cache_subdir='models')
print("cached file: %s" % n)
###Output
_____no_output_____
###Markdown
Download MNIST Data If you are using Binder to run this notebook, then the data is already downloaded and available. Skip to the next step.If you are using Google Colab to run this notebook, then you will need to download the data before proceeding. Download MNIST from Kaggle**Note:** Before attempting to download the competition data you will need to login to your [Kaggle](https://www.kaggle.com) account and accept the rules for this competition.Set your Kaggle username and API key (from the `kaggle.json` file) into the cell below, and execute the code to download the Kaggle [Digit Recognizer: Learn computer vision with the famous MNIST data](https://www.kaggle.com/c/digit-recognizer) competition data.
###Code
%%bash
# NOTE: Replace YOUR_USERNAME and YOUR_API_KEY with actual credentials
export KAGGLE_USERNAME="YOUR_USERNAME"
export KAGGLE_KEY="YOUR_API_KEY"
kaggle competitions download -c digit-recognizer -p ../datasets/mnist/kaggle
%%bash
unzip -n ../datasets/mnist/kaggle/digit-recognizer.zip -d ../datasets/mnist/kaggle
###Output
_____no_output_____
###Markdown
(Alternative) Download MNIST from GitHubIf you are running this notebook using Google Colab, but did *not* create a Kaggle account and API key, then download the data from our GitHub repository by running the code in the following cells.
###Code
import pathlib
import requests
def fetch_mnist_data():
RAW_URL = "https://github.com/holstgr-kaust/keras-tutorials/raw/master/datasets/mnist"
DEST_DIR = pathlib.Path('../datasets/mnist')
DEST_DIR.mkdir(parents=True, exist_ok=True)
for n in ["mnist.npz", "kaggle/train.csv", "kaggle/test.csv", "kaggle/sample_submission.csv"]:
path = DEST_DIR / n
if not path.is_file(): # Don't download if file exists
with path.open(mode = 'wb') as f:
response = requests.get(RAW_URL + "/" + n)
f.write(response.content)
fetch_mnist_data()
cache_mnist_data()
###Output
_____no_output_____
###Markdown
(Alternative) Download MNIST with KerasIf you are running this notebook using Google Colab, but did *not* create a Kaggle account and API key, then download the data using the Keras load_data() API by running the code in the following cells.
###Code
from tensorflow.keras.datasets import mnist
cache_mnist_data()
mnist.load_data();
###Output
_____no_output_____
###Markdown
Download CIFAR10 DataIf you are using Binder to run this notebook, then the data is already downloaded and available. Skip to the next step.If you are using Google Colab to run this notebook, then you will need to download the data before proceeding. Download CIFAR10 from Kaggle**Note:** Before attempting to download the dataset you will need to login to your [Kaggle](https://www.kaggle.com) account.Set your Kaggle username and API key (from the `kaggle.json` file) into the cell below, and execute the code to download the Kaggle CIFAR10 dataset (`guesejustin/cifar10-keras-files-cifar10load-data`).
###Code
%%bash
# NOTE: Replace YOUR_USERNAME and YOUR_API_KEY with actual credentials
export KAGGLE_USERNAME="YOUR_USERNAME"
export KAGGLE_KEY="YOUR_API_KEY"
kaggle datasets download guesejustin/cifar10-keras-files-cifar10load-data -p ../datasets/cifar10/
%%bash
unzip -n ../datasets/cifar10/cifar10-keras-files-cifar10load-data.zip -d ../datasets/cifar10
###Output
_____no_output_____
###Markdown
(Alternative) Download CIFAR10 from GitHubIf you are running this notebook using Google Colab, but did *not* create a Kaggle account and API key, then download the data from our GitHub repository by running the code in the following cells.
###Code
import os
import pathlib
import requests
def fetch_cifar10_data():
RAW_URL = "https://github.com/holstgr-kaust/keras-tutorials/raw/master/datasets/cifar10"
DEST_DIR = pathlib.Path('../datasets/cifar10')
DEST_DIR.mkdir(parents=True, exist_ok=True)
for n in ["cifar-10.npz", "cifar-10-batches-py.tar.gz"]:
path = DEST_DIR / n
if not path.is_file(): # Don't download if file exists
with path.open(mode = 'wb') as f:
response = requests.get(RAW_URL + "/" + n)
f.write(response.content)
print("downloaded file: %s" % n)
fetch_cifar10_data()
cache_cifar10_data()
%%bash
DEST_DIR='../datasets/cifar10'
tar xvpf "${DEST_DIR}/cifar-10-batches-py.tar.gz" --directory="${DEST_DIR}"
###Output
_____no_output_____
###Markdown
(Alternative) Download CIFAR10 with KerasIf you are running this notebook using Google Colab, but did *not* create a Kaggle account and API key, then download the data using the Keras load_data() API by running the code in the following cells.
###Code
from tensorflow.keras.datasets import cifar10
cache_cifar10_data()
cifar10.load_data();
###Output
_____no_output_____
###Markdown
Tutorial SetupInitialize the Python environment by importing and verifying the modules we will use.
###Code
import os
import sys
import pathlib
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras as keras
###Output
_____no_output_____
###Markdown
`%matplotlib inline` is a magic command that makes *matplotlib* charts and plots appear as outputs in the notebook.`%matplotlib notebook` enables semi-interactive plots that can be enlarged, zoomed, and cropped while the plot is active. One issue with this option is that new plots appear in the active plot widget, not in the cell where the data was produced.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now check the runtime environment to ensure it can run this notebook. If there is an `Exception`, or if there are no GPUs, you will need to run this notebook in a more capable environment (see `README.md`, or ask instructor for additional help).
###Code
# Verify runtime environment
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
IS_COLAB = True
except Exception:
IS_COLAB = False
print("is_colab:", IS_COLAB)
assert tf.__version__ >= "2.0", "TensorFlow version >= 2.0 required."
print("tensorflow_version:", tf.__version__)
assert sys.version_info >= (3, 5), "Python >= 3.5 required."
print("python_version:", "%s.%s.%s-%s" % (sys.version_info.major,
sys.version_info.minor,
sys.version_info.micro,
sys.version_info.releaselevel
))
print("executing_eagerly:", tf.executing_eagerly())
__physical_devices = tf.config.experimental.list_physical_devices('GPU')
if len(__physical_devices) == 0:
print("No GPUs available. Expect training to be very slow.")
if IS_COLAB:
print("Go to `Runtime` > `Change runtime` and select a GPU hardware accelerator."
"Then `Save` to restart session.")
else:
print("is_built_with_cuda:", tf.test.is_built_with_cuda())
print("is_gpu_available:", tf.test.is_gpu_available(), [d.name for d in __physical_devices])
###Output
_____no_output_____
###Markdown
CIFAR10 - Dataset ProcessingThe previously acquired CIFAR10 dataset is the essential input needed to train an image classification model. Before using the dataset, there are several preprocessing steps required to load the data, and create the correctly sized training, validation, and testing arrays used as input to the network.The following data preparation steps are needed before they can become inputs to the network:* Cache the downloaded dataset (to use Keras `load_data()` functionality).* Load the dataset (CIFAR10 is small, and fits into a `numpy` array).* Verify the shape and type of the data, and understand it...* Convert label indices into categorical vectors.* Convert image data from integer to float values, and normalize. * Verify converted input data. Cache DataMake downloaded data available to Keras (and check if it's really there). Provide dataset utility functions.
###Code
# Cache CIFAR10 Datasets
cache_cifar10_data()
%%bash
find ~/.keras -name "cifar-10*" -type f
###Output
_____no_output_____
###Markdown
These helper functions assist with managing the three label representations we will encounter:* label index: a number representing a class* label names: a *human readable* text representation of a class* category vector: a vector space to represent the categoriesThe label index `1` represents an `automobile`, and `2` represents a `bird`; but, `1.5` doesn't make a `bird-mobile`. We need a representation where each dimension is a continuum of that feature. There are 10 distinct categories, so we encode them as a 10-dimensional vector space, where the i-th dimension represents the i-th class. An `automobile` becomes `[0,1,0,0,0,0,0,0,0,0]`, a `bird` becomes `[0,0,1,0,0,0,0,0,0,0]` (these are called *one-hot encodings*), and a `bird-mobile` (which we couldn't represent previously) can be encoded as `[0,0.5,0.5,0,0,0,0,0,0,0]`.**Note:** We already know how our dataset is represented. Typically, one would load the data first, analyse the class representation, and then write the helper functions.
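As a quick illustration of the one-hot encoding described above, here is a minimal sketch (using the `keras` module imported earlier) that converts a label index into a category vector with `to_categorical`:
###Code
# Minimal sketch: convert label indices into one-hot category vectors.
# Index 1 is 'automobile' and index 2 is 'bird', as described above.
print(keras.utils.to_categorical(1, num_classes=10)) # [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
print(keras.utils.to_categorical(2, num_classes=10)) # [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
###Output
_____no_output_____
###Markdown
The helper functions in the next cell map a label index or a category vector back to a human-readable label name: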
###Code
# Helper functionality to provide human-readable labels
cifar10_label_names = ['airplane', 'automobile',
'bird', 'cat', 'deer', 'dog', 'frog', 'horse',
'ship', 'truck']
def cifar10_index_label(idx):
return cifar10_label_names[int(idx)]
def cifar10_category_label(cat):
return cifar10_index_label(cat.argmax())
def cifar10_label(v):
return cifar10_index_label(v) if np.isscalar(v) or np.size(v) == 1 else cifar10_category_label(v)
###Output
_____no_output_____
###Markdown
Load DataDatasets for classification require two parts: i) the input data (`x` in our nomenclature), and ii) the labels (`y`). Classification takes an `x` as input, and returns a `y` (the class) as output.When training a model from a dataset (called the `train`ing dataset), it is important to keep some of the data aside (called the `test` set). If we didn't, the model could just memorize the data without learning a generalization that would apply to novel related data. The `test` set is used to evaluate the typical real performance of the model.
###Code
from tensorflow.keras.datasets import cifar10
# The data, split between train and test sets:
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
###Output
_____no_output_____
###Markdown
**Note:** Backup plan: Run the following cell if the data didn't load via `cifar10.load_data` above.
###Code
# Try secondary data source if the first didn't work
try:
print("data loaded." if type((x_train, y_train, x_test, y_test)) else "load failed...")
except NameError:
with np.load('../datasets/cifar10/cifar-10.npz') as data:
x_train = data['x_train']
y_train = data['y_train']
x_test = data['x_test']
y_test = data['y_test']
print("alternate data load." if type((x_train, y_train, x_test, y_test)) else "failed...")
###Output
_____no_output_____
###Markdown
Explore DataExplore data types, shape, and value ranges. Ensure they make sense, and you understand the data well.
###Code
print('x_train type:', type(x_train), ',', 'y_train type:', type(y_train))
print('x_train dtype:', x_train.dtype, ',', 'y_train dtype:', y_train.dtype)
print('x_train shape:', x_train.shape, ',', 'y_train shape:', y_train.shape)
print('x_test shape:', x_test.shape, ',', 'y_test shape:', y_test.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
print('x_train (min, max, mean): (%s, %s, %s)' % (x_train.min(), x_train.max(), x_train.mean()))
print('y_train (min, max): (%s, %s)' % (y_train.min(), y_train.max()))
###Output
_____no_output_____
###Markdown
* The data is stored in Numpy arrays.* The datatype for both input data and labels is a small unsigned int. They represent different things though. The input data represents pixel values, the labels represent the category.* There are 50000 training data samples, and 10000 testing samples.* Each input sample is a colour image of 32x32 pixels, with 3 channels of colour (RGB), for a total size of 3072 bytes. Each label sample is a single byte. * A 32x32 pixel, 3-channel colour image (2-D) can be represented as a point in a 3072 dimensional vector space.* We can see that pixel values range between 0-255 (that is the range of `uint8`) and the mean value is close to the middle. The label values range between 0-9, which corresponds to the 10 categories the labels represent.Let's explore the dataset visually, looking at some actual images, and get a statistical overview of the data.Most of the code in the plotting function below is there to tweak the appearance of the output. The key functionality comes from `matplotlib` functions `imshow` and `hist`, and `numpy` function `histogram`.
###Code
def cifar10_imageset_plot(img_data=None):
(x_imgs, y_imgs) = img_data if img_data else (x_train, y_train)
fig = plt.figure(figsize=(16,8))
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_imgs.shape[0]))
plt.title(cifar10_label(y_imgs[idx]))
plt.imshow(x_imgs[idx], cmap=plt.get_cmap('gray'))
plt.show()
# Show array of random labelled images with matplotlib (re-run cell to see new examples)
cifar10_imageset_plot((x_train, y_train))
def histogram_plot(img_data=None):
(x_data, y_data) = img_data if img_data else (x_train, y_train)
hist, bins = np.histogram(y_data, bins = range(int(y_data.min()), int(y_data.max() + 2)))
fig = plt.figure(figsize=(12,5))
plt.subplot(1,2,1)
plt.hist(y_data, bins = range(int(y_data.min()), int(y_data.max() + 2)))
plt.xticks(range(int(y_data.min()), int(y_data.max() + 2)))
plt.title("y histogram")
plt.subplot(1,2,2)
plt.hist(x_data.flat, bins = range(int(x_data.min()), int(x_data.max() + 2)))
plt.title("x histogram")
plt.tight_layout()
plt.show()
print('y histogram counts:', hist)
histogram_plot((x_train, y_train))
histogram_plot((x_test, y_test))
###Output
_____no_output_____
###Markdown
The data looks reasonable: there are sufficient examples for each category (y_train) and a near-normal distribution of pixel values that appears similar in both the train and test datasets.The next aspect of the input data to grapple with is how the input vector space corresponds with the output category space. Is the correspondence simple (e.g., do distances in the input space relate to distances in the output space), or more complex? Visualizing training samples using PCA[Principal Components Analysis (PCA)](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) can be used as a visualization tool to see if there are any obvious patterns in the training samples.PCA re-represents the input data by changing the basis vectors that represent them. These new orthonormal basis vectors (eigen vectors) represent variance in the data (ordered from largest to smallest). Projecting the data samples onto the first few (2 or 3) dimensions will let us see the data with the biggest differences accounted for.The following cell uses `scikit-learn` to calculate PCA eigen vectors for the flattened training data, and then samples a random subset (10%) of the projected features for plotting.
###Code
import sklearn
import sklearn.decomposition
_prng = np.random.RandomState(42)
pca = sklearn.decomposition.PCA(n_components=40, random_state=_prng)
x_train_flat = x_train.reshape(*x_train.shape[:1], -1)
y_train_flat = y_train.reshape(y_train.shape[0])
print("x_train:", x_train.shape, "y_train", y_train.shape)
print("x_train_flat:", x_train_flat.shape, "y_train_flat", y_train_flat.shape)
pca_train_features = pca.fit_transform(x_train_flat, y_train_flat)
print("pca_train_features:", pca_train_features.shape)
# Sample 10% of the PCA results
_idxs = _prng.randint(y_train_flat.shape[0], size=y_train_flat.shape[0] // 10)
pca_features = pca_train_features[_idxs]
pca_category = y_train_flat[_idxs]
print("pca_features:", pca_features.shape,
"pca_category", pca_category.shape,
"min,max category:", pca_category.min(), pca_category.max())
def pca_components_plot(components_, shape_=(32, 32, 3)):
fig = plt.figure(figsize=(16,8))
for i in range(min(40, components_.shape[0])):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
        eigen_vect = (components_[i] - np.min(components_[i])) / np.ptp(components_[i])
plt.title('component: %s' % i)
plt.imshow(eigen_vect.reshape(shape_), cmap=plt.get_cmap('gray'))
plt.show()
###Output
_____no_output_____
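###Markdown
Before looking at the components themselves, here is a minimal sketch (using the `pca` object fitted above) showing how much of the total variance the leading components capture – this is what 'ordered from largest to smallest' means in practice:
###Code
# Minimal sketch: variance explained by the leading principal components.
explained = pca.explained_variance_ratio_
print("first 5 components:", np.round(explained[:5], 3))
print("all 40 components together explain %.1f%% of the variance" % (100.0 * explained.sum()))
###Output
_____no_output_____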
###Markdown
This plot shows the new eigen vector basis functions suggested by the PCA analysis. Any image in our dataset can be created as a linear combination of these basis vectors. At a guess, the most prevalent feature of the dataset is that there is something at the centre of the image that is distinct from the background (components 0 & 2) and there is often a difference between 'land' & 'sky' (component 1) – compare with the sample images shown previously.
###Code
pca_components_plot(pca.components_)
###Output
_____no_output_____
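###Markdown
Since the plot above shows the basis vectors, here is a minimal sketch (assuming the `pca` object and `pca_train_features` computed above) of how well a training image can be approximated from only its 40 leading components:
###Code
# Minimal sketch: approximate a training image from its 40 PCA components.
idx = 0  # arbitrary training sample
reconstruction = pca.inverse_transform(pca_train_features[idx:idx + 1])[0]
fig = plt.figure(figsize=(6, 3))
plt.subplot(1, 2, 1)
plt.xticks([])
plt.yticks([])
plt.title("original (%s)" % cifar10_label(y_train_flat[idx]))
plt.imshow(x_train[idx])
plt.subplot(1, 2, 2)
plt.xticks([])
plt.yticks([])
plt.title("40-component approximation")
# inverse_transform returns floats on the original 0-255 scale; rescale for imshow
plt.imshow(np.clip(reconstruction.reshape(32, 32, 3) / 255.0, 0.0, 1.0))
plt.show()
###Output
_____no_output_____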
###Markdown
These are 2D and 3D scatter plot functions that colour the points by their labels (so we can see if any 'clumps' of points correspond to actual categories).
###Code
def category_scatter_plot(features, category, title='CIFAR10'):
num_category = 1 + category.max() - category.min()
fig, ax = plt.subplots(1, 1, figsize=(12, 10))
cm = plt.cm.get_cmap('tab10', num_category)
sc = ax.scatter(features[:,0], features[:,1], c=category, alpha=0.4, cmap=cm)
ax.set_xlabel("Component 1")
ax.set_ylabel("Component 2")
ax.set_title(title)
plt.colorbar(sc)
plt.show()
from mpl_toolkits.mplot3d import Axes3D
def category_scatter3d_plot(features, category, title='CIFAR10'):
num_category = 1 + category.max() - category.min()
mean_feat = np.mean(features, axis=0)
std_feat = np.std(features, axis=0)
min_range = mean_feat - std_feat
max_range = mean_feat + std_feat
fig = plt.figure(figsize=(12, 10))
cm = plt.cm.get_cmap('tab10', num_category)
ax = fig.add_subplot(111, projection='3d')
sc = ax.scatter(features[:,0], features[:,1], features[:,2],
c=category, alpha=0.85, cmap=cm)
ax.set_xlabel("Component 1")
ax.set_ylabel("Component 2")
ax.set_zlabel("Component 3")
ax.set_title(title)
ax.set_xlim(2.0 * min_range[0], 2.0 * max_range[0])
ax.set_ylim(2.0 * min_range[1], 2.0 * max_range[1])
ax.set_zlim(2.0 * min_range[2], 2.0 * max_range[2])
plt.colorbar(sc)
plt.show()
category_scatter_plot(pca_features, pca_category, title='CIFAR10 - PCA')
###Output
_____no_output_____
###Markdown
**Note:** 3D PCA plot works best with `%matplotlib notebook` to enable interactive rotation (enabled at start of session).
###Code
category_scatter3d_plot(pca_features, pca_category, title='CIFAR10 - PCA')
###Output
_____no_output_____
###Markdown
The data in its original image space does not appear to cluster into corresponding categories. Visualizing training samples using t-SNE[t-distributed Stochastic Neighbor Embedding (t-SNE)](https://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.htmlsklearn.manifold.TSNE) is a tool to visualize high-dimensional data. It converts similarities between data points to joint probabilities and tries to minimize the Kullback-Leibler divergence between the joint probabilities of the low-dimensional embedding and the high-dimensional data. For more details on t-SNE including other use cases see this excellent *Towards Data Science* [blog post](https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1).Informally, t-SNE is preserving the local neighbourhood of data points to help uncover the manifold on which the data lies. For example, a flat piece of paper with two coloured (e.g., red and blue) regions would be a simple manifold to characterize in 3D space; but, if the paper is crumpled up, it becomes very hard to characterize in the original 3D space (blue and red regions could be very close in this representational space) – instead, by following the crumpled paper (manifold) we would recover the fact that blue and red regions are really very distant, and not nearby at all.It is highly recommended to use another dimensionality reduction method (e.g. PCA) to reduce the number of dimensions to a reasonable amount if the number of features is very high. This will suppress some noise and speed up the computation of pairwise distances between samples.* [An Introduction to t-SNE with Python Example](https://towardsdatascience.com/an-introduction-to-t-sne-with-python-example-5a3a293108d1)
###Code
import sklearn
import sklearn.decomposition
import sklearn.pipeline
import sklearn.manifold
_prng = np.random.RandomState(42)
embedding2_pipeline = sklearn.pipeline.make_pipeline(
sklearn.decomposition.PCA(n_components=0.95, random_state=_prng),
sklearn.manifold.TSNE(n_components=2, random_state=_prng))
embedding3_pipeline = sklearn.pipeline.make_pipeline(
sklearn.decomposition.PCA(n_components=0.95, random_state=_prng),
sklearn.manifold.TSNE(n_components=3, random_state=_prng))
# Sample 10% of the data
_prng = np.random.RandomState(42)
_idxs = _prng.randint(y_train_flat.shape[0], size=y_train_flat.shape[0] // 10)
tsne_features = x_train_flat[_idxs]
tsne_category = y_train_flat[_idxs]
print("tsne_features:", tsne_features.shape,
"tsne_category", tsne_category.shape,
"min,max category:", tsne_category.min(), tsne_category.max())
# t-SNE is SLOW (but can be GPU accelerated!);
# lengthy operation, be prepared to wait...
transform2_tsne_features = embedding2_pipeline.fit_transform(tsne_features)
print("transform2_tsne_features:", transform2_tsne_features.shape)
for i in range(2):
print("min,max features[%s]:" % i,
transform2_tsne_features[:,i].min(),
transform2_tsne_features[:,i].max())
category_scatter_plot(transform2_tsne_features, tsne_category, title='CIFAR10 - t-SNE')
###Output
_____no_output_____
###Markdown
**Note:** Skip this step during the tutorial, it will take too long to complete.
###Code
# t-SNE is SLOW (but can be GPU accelerated!);
# extremely lengthy operation, be prepared to wait... and wait...
transform3_tsne_features = embedding3_pipeline.fit_transform(tsne_features)
print("transform3_tsne_features:", transform3_tsne_features.shape)
for i in range(3):
print("min,max features[%s]:" % i,
transform3_tsne_features[:,i].min(),
transform3_tsne_features[:,i].max())
category_scatter3d_plot(transform3_tsne_features, tsne_category, title='CIFAR10 - t-SNE')
###Output
_____no_output_____
###Markdown
t-SNE relates the data points (images) according to their closest neighbours. Hints of underlying categories appear; but are not cleanly separable into the original categories. Data ConversionThe data type for the training data is `uint8`, while the input type for the network will be `float32`, so the data must be converted. Also, the labels need to be categorical, or *one-hot encoded*, as discussed previously. Keras provides utility functions to convert labels to categories (`to_categorical`), and `numpy` makes it easy to perform operations over entire arrays.* https://keras.io/examples/cifar10_cnn/
###Code
num_classes = (y_train.max() - y_train.min()) + 1
print('num_classes =', num_classes)
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
train_data = (x_train, y_train)
test_data = (x_test, y_test)
###Output
_____no_output_____
###Markdown
After the data conversion, notice that the datatypes are `float32`, the input `x` data shapes are the same; but, the `y` classification labels are now 10-dimensional, instead of scalar.
###Code
print('x_train type:', type(x_train))
print('x_train dtype:', x_train.dtype)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('y_train type:', type(y_train))
print('y_train dtype:', y_train.dtype)
print('y_train shape:', y_train.shape)
print('y_test shape:', y_test.shape)
###Output
_____no_output_____
###Markdown
Acquire Pre-Trained NetworkDownload an *ImageNet* pretrained VGG16 network[1], sans classification layer, shaped for 32x32px colour images[*](https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5) (the smallest supported size). This image-feature detection network is an example of a deep CNN (Convolutional Neural Network).**Note:** The network must be fixed – it was already trained on a very large dataset, so training it on our smaller dataset would result in it un-learning valuable generic features.[1] *Very Deep Convolutional Networks for Large-Scale Image Recognition* by Karen Simonyan and Andrew Zisserman, [arXiv (2014)](https://arxiv.org/abs/1409.1556).
###Code
cache_models()
from tensorflow.keras.applications import VGG16
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
conv_base.summary()
###Output
_____no_output_____
###Markdown
The summary shows the layers, starting from the InputLayer and proceeding through Conv2D convolutional layers, which are then collected at MaxPooling2D layers.A convolutional kernel is a small matrix that looks for a specific, localized, pattern on its inputs. This pattern is called a `feature`. The kernel is applied at each location on the input image, and the output is another image – a feature image – that represents the strength of that feature at the given location. Because the inputs to convolution are images, and the outputs are also images – but transformed into a different feature space – it is possible to stack many convolutional layers on top of each other.A feature image can be reduced in size with a MaxPooling2D layer. This layer 'pools' an `MxN` region to a single value, taking the largest value from the 'pool'. The 'Max' in 'MaxPooling' is keeping the *best* evidence for that feature, found in the original region.The InputLayer shape and data type should match the input data:*Note:* The first dimension of the shape will differ; the input layer has `None` to indicate it accepts a batch sized collection of arrays of the remaining shape. The input data shape will indicate, in that first axis, how many samples it contains.
###Code
print("input layer shape:", conv_base.layers[0].input.shape)
print("input layer dtype:", conv_base.layers[0].input.dtype)
print("input layer type:", type(conv_base.layers[0].input))
print("input data shape:", x_train.shape)
print("input data dtype:", x_train.dtype)
print("input data type:", type(x_train))
###Output
_____no_output_____
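###Markdown
To make the pooling operation concrete, here is a minimal sketch (illustrative only, on a hypothetical 4x4 single-channel 'feature image') showing how a MaxPooling2D layer with a (2, 2) pool keeps the largest value from each region:
###Code
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import MaxPooling2D
# A hypothetical 4x4 single-channel feature image, shaped (batch, height, width, channels).
feature_image = np.array([[1, 3, 2, 0],
                          [4, 2, 1, 5],
                          [0, 1, 3, 2],
                          [2, 2, 4, 1]], dtype='float32').reshape(1, 4, 4, 1)
pooled = MaxPooling2D(pool_size=(2, 2))(feature_image)
# Each non-overlapping 2x2 region is reduced to its maximum: [[4. 5.] [2. 4.]]
print(tf.squeeze(pooled).numpy())
###Output
_____no_output_____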
###Markdown
Explore Convolutional LayersThe following are visualization functions (and helpers) for understanding what the convolutional layers in a network have learned.We may ask questions about each convolutional kernel in a convolutional layer:* What local features is the kernel looking for: `visualize_conv_layer_weights`* For a given input image, what feature image will the kernel produce: `visualize_conv_layer_output`* What input image makes the kernel respond most strongly: `visualize_conv_layer_response`
###Code
def cifar10_image_plot(img_data=None, image_index=None):
(x_imgs, y_imgs) = img_data if img_data else (x_train, y_train)
if not image_index:
image_index = int(random.uniform(0, x_imgs.shape[0]))
plt.imshow(x_imgs[image_index], cmap='gray')
plt.title("%s" % cifar10_label(y_imgs[image_index]))
plt.xlabel("#%s" % image_index)
plt.show()
return image_index
def get_model_layer(model, layer_name):
if type(layer_name) == str:
layer = model.get_layer(layer_name)
else:
m = model
for ln in layer_name:
model = m
m = m.get_layer(ln)
layer = m
return (model, layer)
def visualize_conv_layer_weights(model, layer_name):
(model, layer) = get_model_layer(model, layer_name)
layer_weights = layer.weights[0]
max_size = layer_weights.shape[3]
col_size = 12
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_weights.shape,
layer_weights.shape[0], layer_weights.shape[1],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
ax[row][col].imshow(layer_weights[:, :, 0, idx], cmap='gray')
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
def visualize_conv_layer_output(model, layer_name, image_index=None):
(model, layer) = get_model_layer(model, layer_name)
layer_output = layer.output
if not image_index:
image_index = cifar10_image_plot()
intermediate_model = keras.models.Model(inputs = model.input, outputs=layer_output)
intermediate_prediction = intermediate_model.predict(x_train[image_index].reshape(1,32,32,3))
max_size = layer_output.shape[3]
col_size = 10
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_output.shape,
layer_output.shape[1], layer_output.shape[2],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
ax[row][col].imshow(intermediate_prediction[0, :, :, idx], cmap='gray')
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
from tensorflow.keras import backend as K
def process_image(x):
epsilon = 1e-5
# Normalizes the tensor: centers on 0, ensures that std is 0.1 Clips to [0, 1]
x -= x.mean()
x /= (x.std() + epsilon)
x *= 0.1
x += 0.5
x = np.clip(x, 0, 1)
x *= 255
x = np.clip(x, 0, 255).astype('uint8')
return x
def generate_response_pattern(model, conv_layer_output, filter_index=0):
#step_size = 1.0
epsilon = 1e-5
img_tensor = tf.Variable(tf.random.uniform((1, 32, 32, 3)) * 20 + 128.0, trainable=True)
response_model = keras.models.Model([model.inputs], [conv_layer_output])
for i in range(40):
with tf.GradientTape() as gtape:
layer_output = response_model(img_tensor)
loss = K.mean(layer_output[0, :, :, filter_index])
grads = gtape.gradient(loss, img_tensor)
grads /= (K.sqrt(K.mean(K.square(grads))) + epsilon)
img_tensor = tf.Variable(tf.add(img_tensor, grads))
img = np.array(img_tensor[0])
return process_image(img)
def visualize_conv_layer_response(model, layer_name):
(model, layer) = get_model_layer(model, layer_name)
layer_output = layer.output
max_size = layer_output.shape[3]
col_size = 12
row_size = int(np.ceil(float(max_size) / float(col_size)))
print("conv layer: %s shape: %s size: (%s,%s) count: %s" %
(layer_name,
layer_output.shape,
layer_output.shape[1], layer_output.shape[2],
max_size))
fig, ax = plt.subplots(row_size, col_size, figsize=(12, 1.2 * row_size))
idx = 0
for row in range(0,row_size):
for col in range(0,col_size):
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
if idx < max_size:
img = generate_response_pattern(model, layer_output, idx)
ax[row][col].imshow(img, cmap='gray')
ax[row][col].set_title("%s" % idx)
else:
fig.delaxes(ax[row][col])
idx += 1
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the first 4 convolution layers, we see that:* All the kernels are 3x3 (i.e., 9 elements each)* Layers 1 & 2 have 64 kernels each (64 different possible features)* Layers 3 & 4 have 128 kernels each (128 different possible features)* Light pixels indicate preference for an activated pixel* Dark pixels indicate preference for an inactive pixel* The kernels seem to represent edges and lines at various angles
###Code
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_weights(conv_base, n)
###Output
_____no_output_____
###Markdown
For the given input image, show the corresponding feature image. At the lower level layers (e.g., first Conv2D layer), the feature images seem to capture concepts like 'edges' or maybe 'solid colour'?At higher layers, the size of the feature images decreases because of the MaxPooling. They also appear more abstract – harder to visually recognize than the original image – however, the features are spatially related to the original image (e.g., if there is a white/high value in the lower-left corner of the feature image, then somewhere on the lower-left corner of the original image, there exist pixels that the network is confident represent the feature in question).
###Code
image_index = cifar10_image_plot()
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:7]:
visualize_conv_layer_output(conv_base, n, image_index)
###Output
_____no_output_____
###Markdown
This plot shows which input images cause the greatest response from the convolution kernels. At lower layers, we see many simple 'wave' textures showing that these kernels like to see edges at particular angles. At lower-middle layers, the patterns show larger scale and more complexity (like dots and curves); but, still lots of angled edges.
###Code
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_response(conv_base, n)
###Output
_____no_output_____
###Markdown
The patterns in the higher levels can get even more complex; but, some of them don't seem to encode for anything but noise. Maybe these could be pruned to make a smaller network...**Note:** Skip this step during the tutorial, it will take too long to complete.
###Code
# NOTE: Visualize mid to higher level convolutional layers;
# lengthy operation, be prepared to wait...
for n in [l.name for l in conv_base.layers if isinstance(l, keras.layers.Conv2D)][4:]:
visualize_conv_layer_response(conv_base, n)
###Output
_____no_output_____
###Markdown
CNN Base + Classifier ModelCreate a simple model that has the pre-trained CNN (Convolutional Neural Network) as a base, and adds a basic classifier on top.The new layer types are Flatten, Dense, Dropout, and Activation.The Flatten layer reshapes the input dimensions (2D + 1 channel) into a single dimension.The Dense(x) layer is a layer of (`x`) neurons (represented as a flat 1D array) connected to a flat input. The size of the input and outputs do not need to match.The Dropout(x) layer withholds a random fraction (`x`) of the input neurons from training during each batch of data. This limits the ability of the network to `overfit` on the training data (i.e., memorize training data, rather than learn generalizable rules).Activation is an essential part of (or addition to) each layer. Layers like Dense are simply linear functions (weighted sums + a bias). Without a non-linear component, the network could not learn a non-linear function. Activations like 'relu' (Rectified Linear Unit), 'tanh', or 'sigmoid' are functions to introduce a non-linearity. They also clamp output values within known ranges.The 'softmax' activation is used to produce probability distributions over multiple categories.This example uses the Sequential API to build the final network.* [Activation Functions in Neural Networks](https://towardsdatascience.com/activation-functions-neural-networks-1cbd9f8d91d6)
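As a small aside, here is a hand-rolled sketch (illustrative values only, not the Keras layers themselves) of how 'relu' and 'softmax' behave on a handful of sample scores:
###Code
# Minimal sketch: 'relu' clamps negatives to zero; 'softmax' turns scores into probabilities.
scores = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
relu = np.maximum(scores, 0.0)
softmax = np.exp(scores) / np.exp(scores).sum()
print("relu:   ", relu)
print("softmax:", softmax.round(3), " sum =", softmax.sum().round(3))
###Output
_____no_output_____
###Markdown
With these layer types in mind, the model creation function: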
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, Dropout
from tensorflow.keras.applications import VGG16
def create_cnnbase_classifier_model(conv_base=None):
if not conv_base:
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
model = Sequential()
model.add(conv_base)
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
return model
###Output
_____no_output_____
###Markdown
Create our model *model_transfer_cnn* by calling the creation function *create_cnnbase_classifier_model* above.Notice the split of total parameters (\~15 million) between trainable (\~0.3 million for our classifier) and non-trainable (\~14.7 million for the pre-trained CNN).Note also that the final Dense layer squeezes the network down to the number of categories.
###Code
model_transfer_cnn = create_cnnbase_classifier_model(conv_base)
model_transfer_cnn.summary()
###Output
_____no_output_____
###Markdown
Train ModelTraining a model typically involves setting relevant hyperparameters that control aspects of the training process. Common hyperparameters include:* `epochs`: The number of training passes through the entire dataset. The number of epochs depends upon the complexity of the dataset, and how effectively the network architecture of the model can learn it. If the value is too small, the model accuracy will be low. If the value is too big, then the training will take too long for no additional benefit, as the model accuracy will plateau.* `batch_size`: The number of samples to train during each step. The number should be set so that the GPU memory and compute are well utilized. The `learning_rate` needs to be set accordingly.* `learning_rate`: The step-size to update model weights during the training update phase (backpropagation). Too small, and learning takes too long. Too large, and we may step over the minima we are trying to find. The learning rate can be increased as the batch sizes increases (with some caveats), on the assumption that with more data in a larger batch, the error gradient will be more accurate, so therefore, we can take a larger step.* `decay`: Used by some optimizers to decrease the `learning_rate` over time, on the assumption that as we get closer to our goal, we should focus on smaller refinement steps.
###Code
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
###Output
_____no_output_____
###Markdown
The model needs to be compiled prior to use. This step enables the model to train efficiently on the GPU device.This step also specifies the loss functions, accuracy metrics, learning strategy (optimizers), and more.Our `loss` is *categorical_crossentropy* because we are doing multi-category classification.We use an RMSprop optimizer, which is a variant of standard gradient descent optimizers that also includes momentum. Momentum is used to speed up learning in directions where it has been making more progress.* [A Look at Gradient Descent and RMSprop Optimizers](https://towardsdatascience.com/a-look-at-gradient-descent-and-rmsprop-optimizers-f77d483ef08b)* [Understanding RMSprop — faster neural network learning](https://towardsdatascience.com/understanding-rmsprop-faster-neural-network-learning-62e116fcf29a)
###Code
from tensorflow.keras.optimizers import RMSprop
model_transfer_cnn.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
The model `fit` function trains the network, and returns a history of training and testing accuracy.*Note:* Because we already have a test dataset, and we are not validating our hyperparameters, we will use the test dataset for validation. We could have also reserved a fraction of the training data to use for validation.
###Code
history = model_transfer_cnn.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
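###Markdown
As an aside, the alternative mentioned above – reserving a fraction of the training data for validation – is a minimal change to the `fit` call via the standard `validation_split` argument. The sketch below is guarded so it does not retrain the model trained above:
###Code
# Illustrative sketch only: hold out 10% of the training data for validation
# instead of passing the test set. Guarded so this cell does not retrain the model.
if False:
    history_split = model_transfer_cnn.fit(x_train, y_train,
                                           batch_size=batch_size,
                                           epochs=epochs,
                                           validation_split=0.1,
                                           shuffle=True)
###Output
_____no_output_____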
###Markdown
Evaluate Model Visualize accuracy and loss for training and validation.* https://keras.io/visualization/
###Code
def history_plot(history):
fig = plt.figure(figsize=(12,5))
plt.title('Model accuracy & loss')
# Plot training & validation accuracy values
ax1 = fig.add_subplot()
#ax1.set_ylim(0, 1.1 * max(history.history['loss']+history.history['val_loss']))
ax1.set_prop_cycle(color=['green', 'red'])
p1 = ax1.plot(history.history['loss'], label='Train Loss')
p2 = ax1.plot(history.history['val_loss'], label='Test Loss')
# Plot training & validation loss values
ax2 = ax1.twinx()
ax2.set_ylim(0, 1.1 * max(history.history['accuracy']+history.history['val_accuracy']))
ax2.set_prop_cycle(color=['blue', 'orange'])
p3 = ax2.plot(history.history['accuracy'], label='Train Acc')
p4 = ax2.plot(history.history['val_accuracy'], label='Test Acc')
ax1.set_ylabel('Loss')
ax1.set_xlabel('Epoch')
ax2.set_ylabel('Accuracy')
pz = p3 + p4 + p1 + p2
plt.legend(pz, [l.get_label() for l in pz], loc='center right')
plt.show()
###Output
_____no_output_____
###Markdown
The history plot shows characteristic features of training performance over successive epochs. Accuracy and loss are related, in that a reduction in loss produces an increase in accuracy. The graph shows characteristic arcs for training and testing accuracy / loss over training time (epochs).The primary measure to improve is *testing accuracy*, because that indicates how well the model generalizes to data it must typically classify.The accuracy curves show that testing accuracy has plateaued (with some variability), while training accuracy increases (but at a slowing rate). The difference between training and testing accuracy shows overfitting of the model (i.e., the model can memorize what it has seen better than it can generalize the classification rules).We would like a model that *can* overfit (otherwise it might not be large enough to capture the complexity of the data domain), but doesn't. And then, it is only trained until *test accuracy* peaks.Could the model 100% overfit the data? The graph doesn't answer definitively yet, but training accuracy seems to be slowing, while training loss is still decreasing (with lots of room to improve – the loss axis does not start at zero).*Note:* The model contains Dropout layers to help prevent overfitting. What happens to training and testing accuracy when those layers are removed?
###Code
history_plot(history)
# Score trained model.
scores = model_transfer_cnn.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
The following prediction plot functions provide insight into aspects of model prediction.
###Code
def prediction_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(16,8))
correct = 0
total = 0
rSym = ''
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict(x_test[idx:idx+1])[0]
if y_test is not None:
rCorrect = True if cifar10_label(y_test[idx]) == cifar10_label(result) else False
rSym = '✔' if rCorrect else '✘'
correct += 1 if rCorrect else 0
total += 1
plt.title("%s %s" % (rSym, cifar10_label(result)))
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
plt.show()
if y_test is not None:
print("% 3.2f%% correct (%s/%s)" % (100.0 * float(correct) / float(total), correct, total))
def prediction_classes_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(16,8))
correct = 0
total = 0
rSym = ''
for i in range(40):
plt.subplot(4, 10, i + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict_classes(x_test[idx:idx+1])[0]
if y_test is not None:
rCorrect = True if cifar10_label(y_test[idx]) == cifar10_label(result) else False
rSym = '✔' if rCorrect else '✘'
correct += 1 if rCorrect else 0
total += 1
plt.title("%s %s" % (rSym, cifar10_label(result)))
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
plt.show()
if y_test is not None:
print("% 3.2f%% correct (%s/%s)" % (100.0 * float(correct) / float(total), correct, total))
def prediction_proba_plot(model, test_data):
(x_test, y_test) = test_data
fig = plt.figure(figsize=(15,15))
for i in range(10):
plt.subplot(10, 2, (2*i) + 1)
plt.xticks([])
plt.yticks([])
idx = int(random.uniform(0, x_test.shape[0]))
result = model.predict_proba(x_test[idx:idx+1])[0] * 100 # prob -> percent
if y_test is not None:
plt.title("%s" % cifar10_label(y_test[idx]))
plt.xlabel("#%s" % idx)
plt.imshow(x_test[idx], cmap=plt.get_cmap('gray'))
ax = plt.subplot(10, 2, (2*i) + 2)
plt.bar(np.arange(len(result)), result, label='%')
plt.xticks(range(0, len(result) + 1))
ax.set_xticklabels(cifar10_label_names)
plt.title("classifier probabilities")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
* *Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization* by Ramprasaath Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra [arXiv (2016)](https://arxiv.org/abs/1610.02391)* https://jacobgil.github.io/deeplearning/class-activation-maps
###Code
from tensorflow.keras import backend as K
def generate_activation_pattern(model, conv_layer_output, category_idx, image):
epsilon = 1e-10
activation_model = keras.models.Model([model.inputs], [conv_layer_output, model.output])
with tf.GradientTape() as gtape:
conv_output, prediction = activation_model(image)
category_output = prediction[:, category_idx]
grads = gtape.gradient(category_output, conv_output)
pooled_grads = K.mean(grads, axis=(0, 1, 2))
heatmap = tf.reduce_mean(tf.multiply(pooled_grads, conv_output), axis=-1) * -1.
heatmap = np.maximum(heatmap, 0)
heatmap /= np.max(heatmap) + epsilon
return(heatmap)
def activation_plot(model, layer_name, image_data, image_index=None):
(layer_model, conv_layer) = get_model_layer(model, layer_name)
(x_imgs, y_cat) = image_data
if not image_index:
image_index = int(random.uniform(0, x_imgs.shape[0]))
image = x_imgs[image_index:image_index+1]
fig = plt.figure(figsize=(16,8))
plt.subplot(1, num_classes + 2, 1)
plt.xticks([])
plt.yticks([])
plt.title(cifar10_label(y_cat[image_index]))
plt.xlabel("#%s" % image_index)
plt.imshow(image.reshape(32, 32, 3))
result = model.predict(image)[0]
for i in range(num_classes):
activation = generate_activation_pattern(model, conv_layer.output, i, image)
activation = np.copy(activation)
plt.subplot(1, num_classes + 2, i + 2)
plt.xticks([])
plt.yticks([])
plt.title(cifar10_label(i))
plt.xlabel("(% 3.2f%%)" % (result[i] * 100.0))
plt.imshow(activation[0])
plt.show()
###Output
_____no_output_____
###Markdown
This plot shows what the model thinks is the most likely class for each image.
###Code
prediction_classes_plot(model_transfer_cnn, (x_test, y_test))
###Output
_____no_output_____
###Markdown
This plot shows the probabilities that the model assigns to each category class, and provides a sense of how confident the network is with its classifications.
###Code
prediction_proba_plot(model_transfer_cnn, (x_test, y_test))
# TODO: Complete activation plot
#activation_plot(model_transfer_cnn, ('vgg16', 'block5_conv3'), (x_test, y_test), 1)
###Output
_____no_output_____
###Markdown
CNN Classifier ModelCreate a basic CNN (Convolutional Neural Network) based classifier from scratch.We have encountered Conv2D and MaxPooling2D layers previously, but here we see how they are declared. Conv2D layers specify the number of convolution kernels and their shape. MaxPooling2D layers specify the size of each pool (i.e., the scaling factors).Notice the total number of parameters (\~1.25 million) in this smaller network.
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, Dropout, Conv2D, MaxPooling2D
def create_cnn_classifier_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:]))
model.add(Activation('relu'))
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), padding='same'))
model.add(Activation('relu'))
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes))
model.add(Activation('softmax'))
return model
model_simple_cnn = create_cnn_classifier_model()
model_simple_cnn.summary()
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
from tensorflow.keras.optimizers import RMSprop
model_simple_cnn.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
%%time
history = model_simple_cnn.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
###Markdown
The notable features of the history plot for this model are:* Training accuracy is ~10 percentage points better than the previous model,* test accuracy more closely tracks training accuracy, and* test accuracy shows more variability.
###Code
history_plot(history)
# Score trained model.
scores = model_simple_cnn.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
prediction_classes_plot(model_simple_cnn, (x_test, y_test))
prediction_proba_plot(model_simple_cnn, (x_test, y_test))
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_weights(model_simple_cnn, n)
image_index = cifar10_image_plot()
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)]:
visualize_conv_layer_output(model_simple_cnn, n, image_index)
###Output
_____no_output_____
###Markdown
Interesting aspects of the convolutional layer response for our *model_simple_cnn* model:* There are fewer Conv2D layers in this simple model* Compared to the pre-trained VGG16 convolutional base network, * even the later layers resemble the first-level edge-detection kernels, and * there are no layers with higher-level features.
###Code
for n in [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][:4]:
visualize_conv_layer_response(model_simple_cnn, n)
###Output
_____no_output_____
###Markdown
This plot shows which pixels of the original image contributed the most 'confidence' to the classification categories.The technique is better applied to larger images where the object of interest might be anywhere inside the image.
###Code
n = [l.name for l in model_simple_cnn.layers if isinstance(l, keras.layers.Conv2D)][-1]
print(n)
for i in range(5):
activation_plot(model_simple_cnn, n, (x_test, y_test))
###Output
_____no_output_____
###Markdown
Combined ModelsKeras supports a functional interface to take network architectures beyond simply sequential networks.The new layer types are Input and Concatenate; and, there is an explicit Model class.The Input layer is a special layer denoting sources of input from training batches.The Concatenate layer combines multiple inputs (along an axis with the same size) and creates a larger layer incorporating all the input values.Model construction is also different. Instead of using a `Sequential` model, and `add`ing layers to it:* An explicit Input layer is created, * we pass inputs into the layers explicitly,* the output from a layer becomes input for arbitrary other layers, and finally,* A Model object is created with the source Input layer as inputs and outputs from the final layer.We'll demonstrate by creating a new network which combines the two CNN classifier networks we created previously.*Note:* Network models provided as an argument are changed to be non-trainable (the assumption is that they were already trained).
###Code
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Concatenate, Flatten, Dense, Activation, Dropout
from tensorflow.keras.optimizers import RMSprop
def create_combined_classifier_model(trained_model1=None, trained_model2=None):
if trained_model1:
network1 = trained_model1
network1.trainable = False
else:
network1 = create_cnnbase_classifier_model()
if trained_model2:
network2 = trained_model2
network2.trainable = False
else:
network2 = create_cnn_classifier_model()
inputs = Input(shape=(32,32,3), name='cifar10_image')
c1 = network1(inputs)
c2 = network2(inputs)
c = Concatenate()([c1, c2])
x = Dense(512)(c)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(num_classes)(x)
outputs = Activation('softmax')(x)
model = Model(inputs=inputs, outputs=outputs, name='combined_cnn_classifier')
return model
###Output
_____no_output_____
###Markdown
Combining Pre-Trained ModelsThis version of the combined classifier uses both of the trained networks we created previously.Notice that the number of trainable parameters (~16,000) is very small. How will this affect training?
###Code
model_combined = create_combined_classifier_model(model_transfer_cnn, model_simple_cnn)
model_combined.summary()
###Output
_____no_output_____
###Markdown
This plot shows a graph representation of the layer connections. Notice how a single input feeds the previously created Sequential networks, their outputs are combined via Concatenate, and then a classifier network is added on top.
###Code
keras.utils.plot_model(model_combined)
###Output
_____no_output_____
###Markdown
Reduce the number of `epochs` because this network is mostly trained (except for the final classifier), and there are few trainable parameters.
###Code
batch_size = 128 #32
epochs = 5 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_combined.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_combined.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
###Output
_____no_output_____
###Markdown
It looks like everything we needed to learn was learned in a single epoch.
###Code
history_plot(history)
###Output
_____no_output_____
###Markdown
Here is an interesting, possibly counter-intuitive, result: combining two weaker networks can create a stronger one.The reason is that a weakness in one model might be a strength in the other model (each has 'knowledge' that the other doesn't); we just need a layer to discriminate when to trust each model. This is, at the larger scale of layers and models, the same thing that happens at the lower level of the individual neurons.
###Code
# Score trained model.
scores = model_combined.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# NOTE: Sequential Model provides `predict_classes` or `predict_proba`
# Functional API Model does not; because it may have multiple outputs
# Using simple `predict` plot instead
prediction_plot(model_combined, (x_test, y_test))
###Output
_____no_output_____
###Markdown
The combined model improves accuracy by 2%, and takes 1/5th of the time to train. Training Combining ModelsThis version of the combined classifier uses both network architectures seen previously; except, in this version, the models need to be trained from scratch. The following cells repeat the previous experiments with this combined classifier.*Spoiler:* The combined network doesn't perform any better than the partially trained one did, but takes much longer to train (more epochs).**Note:** Skip this step during the tutorial, it will cause unnecessary delay.
###Code
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_combined = create_combined_classifier_model()
model_combined.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_combined.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
history_plot(history)
# Score trained model.
scores = model_combined.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
Skip ConnectionsFrom previous comparisons of the `visualize_conv_layer_response` plots of the two basic CNN models, it becomes apparent that the pre-trained VGG16 network contains more complex *knowledge* about images: there were more convolutional layers with a greater variety of patterns and features they could represent.In the previous cnnbase_classifier model `model_transfer_cnn`, only the last Conv2D layer fed directly to the classifier, and the feature information contained in the middle layers wasn't directly available to the classifier.Skip Connections are a way to bring lower level feature encodings to higher levels of the network directly. They are also useful during training very deep networks to deal with the problem of *vanishing gradients*.In the following example, the original CNN base of the pre-trained VGG16 model is decomposed into layered groups, and a new network created that feeds these intermediate layers to the top of the network, where they are concatenated together to perform the final classification.* https://towardsdatascience.com/understanding-and-coding-a-resnet-in-keras-446d7ff84d33* https://arxiv.org/abs/1608.04117
###Code
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Input, Concatenate, Flatten, Dense, Activation, Dropout
from tensorflow.keras.applications import VGG16
from tensorflow.keras.optimizers import RMSprop
def create_cnnbase_skipconnected_classifier_model(conv_base=None):
if not conv_base:
conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))
conv_base.trainable = False
# Split conv_base into groups of CNN layers topped by a MaxPooling2D layer
cb_idxs = [i for (i,l) in enumerate(conv_base.layers) if isinstance(l, keras.layers.MaxPooling2D)]
all_idxs = [-1] + cb_idxs
idx_pairs = [l for l in zip(all_idxs, cb_idxs)]
cb_layers = [conv_base.layers[i+1:j+1] for (i,j) in idx_pairs]
# Dense Pre-Classifier Layers creation function - used repeatedly at multiple network locations
def dense_classes(l):
x = Dense(512)(l)
x = Activation('relu')(x)
x = Dropout(0.5)(x)
x = Dense(num_classes)(x)
return x
inputs = Input(shape=(32,32,3), name='cifar10_image')
# Join split groups into a sequence, but keep track of their outputs to create skip connections
skips = []
inz = inputs
for lz in cb_layers:
m = Sequential()
m.trainable = False
for ls in lz:
m.add(ls)
# inz is the output of model m, but the input for next layer group
inz = m(inz)
skips += [inz]
# Flatten all outputs (which had different dimensions) to Concatenate them on a common axis
flats = [dense_classes(Flatten()(l)) for l in skips]
c = Concatenate()(flats)
x = dense_classes(c)
outputs = Activation('softmax')(x)
model = Model(inputs=inputs, outputs=outputs)
return model
model_skipconnected = create_cnnbase_skipconnected_classifier_model(conv_base)
model_skipconnected.summary()
keras.utils.plot_model(model_skipconnected)
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_skipconnected.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_skipconnected.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
history_plot(history)
###Output
_____no_output_____
###Markdown
A significant improvement over the first pre-trained model.
###Code
# Score trained model.
scores = model_skipconnected.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# Using simple `predict` plot because model uses Functional API
prediction_plot(model_skipconnected, (x_test, y_test))
###Output
_____no_output_____
###Markdown
Data AugmentationData augmentation is a technique to expand the set of available training data and can significantly improve the performance of image processing networks.**Note:** Training examples in this section may take significant time. The approach does not improve accuracy results on this simple dataset, but is included here for illustration of the technique.
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
featurewise_center=False, # set input mean to 0 over the dataset
samplewise_center=False, # set each sample mean to 0
featurewise_std_normalization=False, # divide inputs by std of the dataset
samplewise_std_normalization=False, # divide each input by its std
zca_whitening=False, # apply ZCA whitening
zca_epsilon=1e-06, # epsilon for ZCA whitening
rotation_range=10, # randomly rotate images in the range (degrees, 0 to 180)
# randomly shift images horizontally (fraction of total width)
width_shift_range=0.1,
# randomly shift images vertically (fraction of total height)
height_shift_range=0.1,
shear_range=0.1, # set range for random shear
zoom_range=0.1, # set range for random zoom
channel_shift_range=0.0, # set range for random channel shifts
# set mode for filling points outside the input boundaries
fill_mode='nearest',
cval=0.0, # value used for fill_mode = "constant"
horizontal_flip=True, # randomly flip images
vertical_flip=False, # randomly flip images
# set rescaling factor (applied before any other transformation)
rescale=None,
# set function that will be applied on each input
preprocessing_function=None,
# image data format, either "channels_first" or "channels_last"
data_format=None,
# fraction of images reserved for validation (strictly between 0 and 1)
validation_split=0.0
)
# Compute quantities required for feature-wise normalization
# (std, mean, and principal components if ZCA whitening is applied).
datagen.fit(x_train)
exampledata = datagen.flow(x_train, y_train, batch_size=batch_size)
cifar10_imageset_plot((exampledata[0][0], exampledata[0][1]))
###Output
_____no_output_____
###Markdown
CNN Base + Classifier Model Augmented
###Code
batch_size = 128 #32
epochs = 12 #25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_augmented = create_cnnbase_classifier_model(conv_base)
model_augmented.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_augmented.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
validation_data=(x_test, y_test),
epochs=epochs,
shuffle=True,
use_multiprocessing=True, workers=4
)
history_plot(history)
# Score trained model.
scores = model_augmented.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
CNN Classifier Model Augmented
###Code
batch_size = 128 #32
epochs = 12 #25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_augmented = create_cnn_classifier_model()
model_augmented.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_augmented.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
validation_data=(x_test, y_test),
epochs=epochs,
shuffle=True,
use_multiprocessing=True, workers=4
)
history_plot(history)
# Score trained model.
scores = model_augmented.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
CNN Base + Skip Connected Classifier Model Augmented
###Code
batch_size = 128 #32
epochs = 12 #25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_augmented = create_cnnbase_skipconnected_classifier_model(conv_base)
model_augmented.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
history = model_augmented.fit(datagen.flow(x_train, y_train, batch_size=batch_size),
validation_data=(x_test, y_test),
epochs=epochs,
shuffle=True,
use_multiprocessing=True, workers=4
)
history_plot(history)
# Score trained model.
scores = model_augmented.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____
###Markdown
Mixed Precision**TODO:** Fix performance issues**Note:** Mixed Precision is still experimental...* https://www.tensorflow.org/guide/keras/mixed_precision* https://www.tensorflow.org/api_docs/python/tf/keras/mixed_precision/experimental/Policy* https://developer.nvidia.com/automatic-mixed-precision* https://docs.nvidia.com/deeplearning/sdk/mixed-precision-training/index.html
```python
opt = tf.train.experimental.enable_mixed_precision_graph_rewrite(opt)
```
###Code
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Activation, Dropout, Conv2D, MaxPooling2D
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
#tf.keras.mixed_precision.experimental.set_policy('mixed_float16')
def create_mixed_precision_cnn_classifier_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same',
input_shape=x_train.shape[1:],
dtype=policy))
model.add(Activation('relu', dtype=policy))
model.add(Conv2D(32, (3, 3), dtype=policy))
model.add(Activation('relu', dtype=policy))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25, dtype=policy))
model.add(Conv2D(64, (3, 3), padding='same', dtype=policy))
model.add(Activation('relu', dtype=policy))
model.add(Conv2D(64, (3, 3), dtype=policy))
model.add(Activation('relu', dtype=policy))
model.add(MaxPooling2D(pool_size=(2, 2), dtype=policy))
model.add(Dropout(0.25, dtype=policy))
model.add(Flatten(dtype=policy))
# Dense layers use global policy of 'mixed_float16';
# does computations in float16, keeps variables in float32.
model.add(Dense(512, dtype=policy))
model.add(Activation('relu', dtype=policy))
model.add(Dropout(0.5, dtype=policy))
model.add(Dense(num_classes, dtype=policy))
# Softmax should be done in float32 for numeric stability. We pass
# dtype='float32' to use float32 instead of the global policy.
model.add(Activation('softmax', dtype='float32'))
return model
from tensorflow.keras.optimizers import RMSprop
model_mixedprecision_cnn = create_mixed_precision_cnn_classifier_model()
model_mixedprecision_cnn.summary()
batch_size = 128 #32
epochs = 25 #100
learning_rate = 1e-3 #1e-4
decay = 1e-6
model_mixedprecision_cnn.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay),
metrics=['accuracy'])
%%time
history = model_mixedprecision_cnn.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True)
# Score trained model.
scores = model_mixedprecision_cnn.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
# Reset Policy
tf.keras.mixed_precision.experimental.set_policy('float32')
###Output
_____no_output_____
###Markdown
Multi-GPU ExampleUsing multiple GPUs on a single node is a simple way to speed up deep learning. Keras / TensorFlow support this with a small modification to code.First, determine if multiple GPUs are available:
###Code
physical_devices = tf.config.experimental.list_physical_devices('GPU')
device_count = len(physical_devices)
print("GPU count:", device_count)
print("GPU devices:", physical_devices)
###Output
_____no_output_____
###Markdown
When scaling to `n` GPUs, there is `n *` the available GPU memory, so we can increase the batch_size by a factor of `n`. A larger batch size means that more data is evaluated in each batch step, which creates a more accurate and representative loss gradient – so we can take a larger corrective step by multiplying the learning_rate by `n`. Because we are learning `n *` more each epoch, we only need `1/n`th the number of training epochs.There are additional subtleties and mitigating strategies to be aware of when scaling batch sizes larger. Some of these are discussed in [Deep Learning at scale: Accurate, Large Mini batch SGD](https://towardsdatascience.com/deep-learning-at-scale-accurate-large-mini-batch-sgd-8207d54bfe02).
###Code
# Multi-GPU Example
assert device_count >= 2, "Two or more GPUs required to demonstrate multi-gpu functionality"
from tensorflow.keras.optimizers import Adam, RMSprop
from tensorflow.keras.callbacks import LearningRateScheduler, ReduceLROnPlateau
batch_size = device_count * 128 #32
epochs = 25 // device_count + 1 #100
learning_rate = device_count * 1e-3 #1e-4
decay = 1e-6
def lr_schedule(epoch):
initial_lr = device_count * 1e-3
warmup_epochs = 5
warmup_lr = (epoch + 1) * initial_lr / warmup_epochs
return warmup_lr if epoch <= warmup_epochs else initial_lr
lr_scheduler = LearningRateScheduler(lr_schedule, verbose=1)
lr_reducer = ReduceLROnPlateau(factor=np.sqrt(0.1), cooldown=0, patience=5, min_lr=0.5e-6)
callbacks = [lr_reducer, lr_scheduler]
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model_multigpu = create_cnnbase_classifier_model()
model_multigpu.compile(loss='categorical_crossentropy',
optimizer=RMSprop(learning_rate=learning_rate, decay=decay, momentum=0.5),
# TODO: Explore Adam without lr_scheduling
#optimizer=Adam(learning_rate=learning_rate),
metrics=['accuracy'])
history = model_multigpu.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test),
shuffle=True,
callbacks=callbacks,
use_multiprocessing=True, workers=4
)
history_plot(history)
# Score trained model.
scores = model_multigpu.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
###Output
_____no_output_____ |
dev/01b_dispatch.ipynb | ###Markdown
Type dispatch> Basic single and dual parameter dispatch Helpers
###Code
#exports
def type_hints(f):
"Same as `typing.get_type_hints` but returns `{}` if not allowed type"
return typing.get_type_hints(f) if isinstance(f, typing._allowed_types) else {}
#export
def anno_ret(func):
"Get the return annotation of `func`"
if not func: return None
ann = type_hints(func)
if not ann: return None
return ann.get('return')
#hide
def f(x) -> float: return x
test_eq(anno_ret(f), float)
def f(x) -> typing.Tuple[float,float]: return x
test_eq(anno_ret(f), typing.Tuple[float,float])
def f(x) -> None: return x
test_eq(anno_ret(f), NoneType)
def f(x): return x
test_eq(anno_ret(f), None)
test_eq(anno_ret(None), None)
#export
cmp_instance = functools.cmp_to_key(lambda a,b: 0 if a==b else 1 if issubclass(a,b) else -1)
td = {int:1, numbers.Number:2, numbers.Integral:3}
test_eq(sorted(td, key=cmp_instance), [numbers.Number, numbers.Integral, int])
#export
def _p2_anno(f):
"Get the 1st 2 annotations of `f`, defaulting to `object`"
hints = type_hints(f)
ann = [o for n,o in hints.items() if n!='return']
while len(ann)<2: ann.append(object)
return ann[:2]
def _f(a): pass
test_eq(_p2_anno(_f), (object,object))
def _f(a, b): pass
test_eq(_p2_anno(_f), (object,object))
def _f(a:None, b)->str: pass
test_eq(_p2_anno(_f), (NoneType,object))
def _f(a:str, b)->float: pass
test_eq(_p2_anno(_f), (str,object))
def _f(a:None, b:str)->float: pass
test_eq(_p2_anno(_f), (NoneType,str))
def _f(a:int, b:int)->float: pass
test_eq(_p2_anno(_f), (int,int))
def _f(self, a:int, b:int): pass
test_eq(_p2_anno(_f), (int,int))
def _f(a:int, b:str)->float: pass
test_eq(_p2_anno(_f), (int,str))
test_eq(_p2_anno(attrgetter('foo')), (object,object))
###Output
_____no_output_____
###Markdown
TypeDispatch - The following class is the basis that allows us to do type dispatch with type annotations. It contains a dictionary type -> functions and ensures that the proper function is called when passed an object (depending on its type).
###Code
#export
class _TypeDict:
def __init__(self): self.d,self.cache = {},{}
def _reset(self):
self.d = {k:self.d[k] for k in sorted(self.d, key=cmp_instance, reverse=True)}
self.cache = {}
def add(self, t, f):
"Add type `t` and function `f`"
if not isinstance(t,tuple): t=(t,)
for t_ in t: self.d[t_] = f
self._reset()
def all_matches(self, k):
"Find first matching type that is a super-class of `k`"
if k not in self.cache:
types = [f for f in self.d if k==f or (isinstance(k,type) and issubclass(k,f))]
self.cache[k] = [self.d[o] for o in types]
return self.cache[k]
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
res = self.all_matches(k)
return res[0] if len(res) else None
def __repr__(self): return self.d.__repr__()
def first(self): return next(iter(self.d.values()))
#export
class TypeDispatch:
"Dictionary-like object; `__getitem__` matches keys of types using `issubclass`"
def __init__(self, *funcs):
self.funcs = _TypeDict()
for o in funcs: self.add(o)
self.inst = None
def add(self, f):
"Add type `t` and function `f`"
a0,a1 = _p2_anno(f)
t = self.funcs.d.get(a0)
if t is None:
t = _TypeDict()
self.funcs.add(a0, t)
t.add(a1, f)
def first(self): return self.funcs.first().first()
def returns(self, x): return anno_ret(self[type(x)])
def returns_none(self, x):
r = anno_ret(self[type(x)])
return r if r == NoneType else None
def _attname(self,k): return getattr(k,'__name__',str(k))
def __repr__(self):
r = [f'({self._attname(k)},{self._attname(l)}) -> {v.__name__}'
for k in self.funcs.d for l,v in self.funcs[k].d.items()]
return '\n'.join(r)
def __call__(self, *args, **kwargs):
ts = L(args).map(type)[:2]
f = self[tuple(ts)]
if not f: return args[0]
if self.inst is not None: f = types.MethodType(f, self.inst)
return f(*args, **kwargs)
def __get__(self, inst, owner):
self.inst = inst
return self
def __getitem__(self, k):
"Find first matching type that is a super-class of `k`"
k = L(k if isinstance(k, tuple) else (k,))
while len(k)<2: k.append(object)
r = self.funcs.all_matches(k[0])
if len(r)==0: return None
for t in r:
o = t[k[1]]
if o is not None: return o
return None
def f_col(x:typing.Collection): return x
def f_nin(x:numbers.Integral)->int: return x+1
def f_ni2(x:int): return x
def f_bll(x:(bool,list)): return x
def f_num(x:numbers.Number): return x
t = TypeDispatch(f_nin,f_ni2,f_num,f_bll)
t.add(f_ni2) #Should work even if we add the same function twice.
test_eq(t[int], f_ni2)
test_eq(t[np.int32], f_nin)
test_eq(t[str], None)
test_eq(t[float], f_num)
test_eq(t[bool], f_bll)
test_eq(t[list], f_bll)
t.add(f_col)
test_eq(t[str], f_col)
test_eq(t[np.int32], f_nin)
o = np.int32(1)
test_eq(t(o), 2)
test_eq(t.returns(o), int)
assert t.first() is not None
t
def m_nin(self, x:(str,numbers.Integral)): return str(x)+'1'
def m_bll(self, x:bool): self.foo='a'
def m_num(self, x:numbers.Number): return x
t = TypeDispatch(m_nin,m_num,m_bll)
class A: f = t
a = A()
test_eq(a.f(1), '11')
test_eq(a.f(1.), 1.)
test_is(a.f.inst, a)
a.f(False)
test_eq(a.foo, 'a')
def f1(x:numbers.Integral, y): return x+1
def f2(x:int, y:float): return x+y
t = TypeDispatch(f1,f2)
test_eq(t[int], f1)
test_eq(t[int,int], f1)
test_eq(t[int,float], f2)
test_eq(t[float,float], None)
test_eq(t[np.int32,float], f1)
test_eq(t(3,2.0), 5)
test_eq(t(3,2), 4)
test_eq(t('a'), 'a')
t
###Output
_____no_output_____
###Markdown
typedispatch Decorator
###Code
#export
class DispatchReg:
"A global registry for `TypeDispatch` objects keyed by function name"
def __init__(self): self.d = defaultdict(TypeDispatch)
def __call__(self, f):
nm = f'{f.__qualname__}'
self.d[nm].add(f)
return self.d[nm]
typedispatch = DispatchReg()
@typedispatch
def f_td_test(x, y): return f'{x}{y}'
@typedispatch
def f_td_test(x:numbers.Integral, y): return x+1
@typedispatch
def f_td_test(x:int, y:float): return x+y
test_eq(f_td_test(3,2.0), 5)
test_eq(f_td_test(3,2), 4)
test_eq(f_td_test('a','b'), 'ab')
###Output
_____no_output_____
###Markdown
Export -
###Code
#hide
from local.notebook.export import notebook2script
notebook2script(all_fs=True)
###Output
Converted 00_test.ipynb.
Converted 01_core.ipynb.
Converted 01a_utils.ipynb.
Converted 01b_dispatch.ipynb.
Converted 01c_torch_core.ipynb.
Converted 02_script.ipynb.
Converted 03_dataloader.ipynb.
Converted 04_transform.ipynb.
Converted 05_data_core.ipynb.
Converted 06_data_transforms.ipynb.
Converted 07_vision_core.ipynb.
Converted 08_pets_tutorial.ipynb.
Converted 09_vision_augment.ipynb.
Converted 10_data_block.ipynb.
Converted 11_layers.ipynb.
Converted 11a_vision_models_xresnet.ipynb.
Converted 12_optimizer.ipynb.
Converted 13_learner.ipynb.
Converted 14_callback_schedule.ipynb.
Converted 14a_callback_data.ipynb.
Converted 15_callback_hook.ipynb.
Converted 16_callback_progress.ipynb.
Converted 17_callback_tracker.ipynb.
Converted 18_callback_fp16.ipynb.
Converted 19_callback_mixup.ipynb.
Converted 20_metrics.ipynb.
Converted 21_tutorial_imagenette.ipynb.
Converted 22_vision_learner.ipynb.
Converted 23_tutorial_transfer_learning.ipynb.
Converted 30_text_core.ipynb.
Converted 31_text_data.ipynb.
Converted 32_text_models_awdlstm.ipynb.
Converted 33_text_models_core.ipynb.
Converted 34_callback_rnn.ipynb.
Converted 35_tutorial_wikitext.ipynb.
Converted 36_text_models_qrnn.ipynb.
Converted 37_text_learner.ipynb.
Converted 38_tutorial_ulmfit.ipynb.
Converted 40_tabular_core.ipynb.
Converted 41_tabular_model.ipynb.
Converted 42_tabular_rapids.ipynb.
This cell doesn't have an export destination and was ignored:
e
Converted 50_data_block_examples.ipynb.
Converted 90_notebook_core.ipynb.
Converted 91_notebook_export.ipynb.
Converted 92_notebook_showdoc.ipynb.
Converted 93_notebook_export2html.ipynb.
Converted 94_notebook_test.ipynb.
Converted 95_index.ipynb.
Converted 96_data_external.ipynb.
Converted 97_utils_test.ipynb.
Converted notebook2jekyll.ipynb.
|
_posts/scikit/multi-class-sgd-on-iris-dataset/Multi-Class SGD On The Iris Dataset.ipynb | ###Markdown
Plot decision surface of multi-class SGD on iris dataset. The hyperplanes corresponding to the three one-versus-all (OVA) classifiers are represented by the dashed lines. New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version
###Code
import sklearn
sklearn.__version__
###Output
_____no_output_____
###Markdown
Imports
###Code
print(__doc__)
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.linear_model import SGDClassifier
###Output
Automatically created module for IPython interactive environment
###Markdown
Calculations
###Code
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
colors = ["blue", "red", "yellow"]
# shuffle
idx = np.arange(X.shape[0])
np.random.seed(13)
np.random.shuffle(idx)
X = X[idx]
y = y[idx]
# standardize
mean = X.mean(axis=0)
std = X.std(axis=0)
X = (X - mean) / std
h = .02 # step size in the mesh
clf = SGDClassifier(alpha=0.001, n_iter=100).fit(X, y)
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
x_ = np.arange(x_min, x_max, h)
y_ = np.arange(y_min, y_max, h)
xx, yy = np.meshgrid(x_, y_)
###Output
_____no_output_____
###Markdown
Plot Results
###Code
def matplotlib_to_plotly(cmap, pl_entries):
h = 1.0/(pl_entries-1)
pl_colorscale = []
for k in range(pl_entries):
C = list(map(np.uint8, np.array(cmap(k*h)[:3])*255))
pl_colorscale.append([k*h, 'rgb'+str((C[0], C[1], C[2]))])
return pl_colorscale
cmap = matplotlib_to_plotly(plt.cm.Paired, 5)
data = []
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
cs = go.Contour(x=x_, y=y_, z=Z,
colorscale=cmap,
showscale=False
)
data.append(cs)
# Plot also the training points
xmin = min(X[idx, 0])
xmax = max(X[idx, 0])
for i, color in zip(clf.classes_, colors):
idx = np.where(y == i)
t = go.Scatter(x=X[idx, 0][0], y=X[idx, 1][0],
mode='markers',
marker=dict(color=colors[i],
line=dict(color='black', width=1)),
name=iris.target_names[i],
)
data.append(t)
# Plot the three one-against-all classifiers
coef = clf.coef_
intercept = clf.intercept_
def plot_hyperplane(c, color):
def line(x0):
return (-(x0 * coef[c, 0]) - intercept[c]) / coef[c, 1]
trace = go.Scatter(x=[x_min, x_max], y=[line(x_min), line(x_max)],
mode='lines',
line=dict(color=color, dash='dash'),
showlegend=False)
return trace
for i, color in zip(clf.classes_, colors):
data.append(plot_hyperplane(i, color))
layout = go.Layout(title="Decision surface of multi-class SGD",
xaxis=dict(range=[min(x_), max(x_)]),
yaxis=dict(range=[min(y_), max(y_)]),
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig)
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
! pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'Multi-Class SGD On The Iris Dataset.ipynb', 'scikit-learn/plot-sgd-iris/', 'Multi-Class SGD On The Iris Dataset | plotly',
' ',
title = 'Multi-Class SGD On The Iris Dataset | plotly',
name = 'Multi-Class SGD On The Iris Dataset',
has_thumbnail='true', thumbnail='thumbnail/sgd-iris.jpg',
language='scikit-learn', page_type='example_index',
display_as='linear_models', order=24,
ipynb= '~Diksha_Gabha/3265')
###Output
_____no_output_____ |
experimental_models/text_base_model.ipynb | ###Markdown
Uni-Modal Text Classifier Base Model
###Code
import numpy as np
from sklearn import metrics
import tensorflow
from tensorflow import keras
from utils import data, training, plotting, models_text, optimise_txt
gpus = tensorflow.config.list_physical_devices('GPU')
for gpu in gpus:
print("Name:", gpu.name, " Type:", gpu.device_type)
###Output
_____no_output_____
###Markdown
Text Pre-ProcessingLoad and pre-process text corpus.
###Code
label_map = {
'geol_geow': 0,
'geol_sed': 1,
'gphys_gen': 2,
'log_sum': 3,
'pre_site': 4,
'vsp_file': 5
}
doc_data = data.DocumentData(label_map, 2020, drop_nans='text')
doc_data.load_text_data()
###Output
_____no_output_____
###Markdown
Base 1D CNN ClassifierBase 1D CNN text classifier architecture based on the sentence classifier proposed by Kim et al. Test the model through a single training cycle:
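The architecture itself is built by the project's `models_text.text_cnn_model` helper, which is not shown here. As a hedged sketch of what a Kim-style 1D CNN text classifier typically looks like in Keras (the vocabulary size, sequence length, and filter settings below are illustrative assumptions, not the project's actual values):
```python
from tensorflow.keras import layers, models

def kim_style_text_cnn(vocab_size=20000, seq_len=500, embed_dim=100, num_classes=6):
    # Embed token ids, convolve over the sequence, max-pool each feature map, then classify
    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, embed_dim),
        layers.Conv1D(filters=100, kernel_size=5, activation='relu'),
        layers.GlobalMaxPooling1D(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax'),
    ])
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
```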
###Code
base_text_cnn = models_text.text_cnn_model(doc_data)
base_text_cnn.summary()
early_stopping = keras.callbacks.EarlyStopping(
monitor='val_loss',
patience=2, mode='min',
restore_best_weights=True
)
history = base_text_cnn.fit(
doc_data.text_train,
doc_data.y_train,
epochs=100,
validation_data=(doc_data.text_val, doc_data.y_val),
callbacks=[early_stopping]
)
plotting.plot_history(history, 'Text Classifier Base Model')
base_text_cnn.evaluate(doc_data.text_test, doc_data.y_test)
y_probs = base_text_cnn.predict(doc_data.text_test)
y_hat = np.argmax(y_probs, axis=-1)
y = np.argmax(doc_data.y_test, axis=-1)
model_utils.confusion_matrix(y, y_hat, label_map, 'Text Classifier Base Model')
labels = [label for label in label_map]
print(metrics.classification_report(y, y_hat, target_names=labels))
###Output
precision recall f1-score support
geol_geow 0.88 0.86 0.87 296
geol_sed 0.82 0.87 0.84 193
gphys_gen 0.82 0.81 0.82 175
log_sum 0.61 0.72 0.66 177
pre_site 0.95 0.86 0.91 211
vsp_file 0.97 0.87 0.92 124
accuracy 0.83 1176
macro avg 0.84 0.83 0.84 1176
weighted avg 0.84 0.83 0.84 1176
###Markdown
Get average performance over 10 random initialisations of model:
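The cell below relies on the project's own `iterate_training` utility; as a hedged sketch of what such a routine might do (the function name and bookkeeping here are assumptions, not the project's actual helper), it simply trains several freshly initialised models and aggregates their test accuracy:
```python
import numpy as np
from sklearn import metrics
from tensorflow import keras

def average_over_runs(model_fn, doc_data, n_runs=10):
    # Train n_runs freshly-initialised models and average their test accuracy
    accuracies = []
    for _ in range(n_runs):
        model = model_fn(doc_data)
        stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2,
                                             mode='min', restore_best_weights=True)
        model.fit(doc_data.text_train, doc_data.y_train, epochs=100,
                  validation_data=(doc_data.text_val, doc_data.y_val),
                  callbacks=[stop], verbose=0)
        y_hat = np.argmax(model.predict(doc_data.text_test), axis=-1)
        y_true = np.argmax(doc_data.y_test, axis=-1)
        accuracies.append(metrics.accuracy_score(y_true, y_hat))
    return np.mean(accuracies), np.std(accuracies)
```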
###Code
metric_averages = model_utils.iterate_training(models.text_cnn_model, doc_data, 10, y, 'text', model_params={'doc_data': doc_data})
metric_averages
###Output
_____no_output_____
###Markdown
Hyperparameter Grid SearchGrid search to find optimal hyperparameters for the convolutional layer; the hyperparameter ranges are based on previous work by Zhang et al., 2016.
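The search itself is run by the project's `models.text_grid_search` helper, which is not shown. A hedged sketch of the general shape of such a search, assuming a model factory with the keyword arguments used later in this notebook (names and logging details are assumptions):
```python
import itertools, logging

def simple_grid_search(build_model, doc_data, filter_regions, feature_maps,
                       dropout_rates, l2_constraints, log_path, epochs=10):
    logging.basicConfig(filename=log_path, level=logging.INFO)
    # Train one model per hyperparameter combination and log its best validation accuracy
    for ks, fm, dr, l2 in itertools.product(filter_regions, feature_maps,
                                            dropout_rates, l2_constraints):
        model = build_model(doc_data, kernel_size=ks, filter_maps=fm,
                            dropout_rate=dr, l2_regularization=l2)
        history = model.fit(doc_data.text_train, doc_data.y_train, epochs=epochs,
                            validation_data=(doc_data.text_val, doc_data.y_val), verbose=0)
        best_val_acc = max(history.history['val_accuracy'])
        logging.info("kernel=%s maps=%s dropout=%s l2=%s val_acc=%.4f", ks, fm, dr, l2, best_val_acc)
```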
###Code
filter_regions = (1, 3, 5, 7, 10)
feature_maps = (10, 50, 100, 200, 400, 600)
dropout_rate = (0.1, 0.2, 0.3, 0.4, 0.5)
l2_norm_constraints = (0.5, 1, 2, 3)
models.text_grid_search(doc_data, filter_regions, feature_maps, dropout_rate, l2_norm_constraints, 'grid_search_logs/text_cnn_grid_search.log')
###Output
Epoch 1/100
118/118 [==============================] - 30s 254ms/step - loss: 19.5697 - accuracy: 0.2641 - val_loss: 14.5428 - val_accuracy: 0.4462
Epoch 2/100
118/118 [==============================] - 30s 258ms/step - loss: 11.3003 - accuracy: 0.3760 - val_loss: 8.4072 - val_accuracy: 0.5016
Epoch 3/100
118/118 [==============================] - 29s 243ms/step - loss: 6.6348 - accuracy: 0.4290 - val_loss: 4.9801 - val_accuracy: 0.5698
Epoch 4/100
118/118 [==============================] - 28s 242ms/step - loss: 4.0354 - accuracy: 0.4623 - val_loss: 3.1267 - val_accuracy: 0.5974
Epoch 5/100
118/118 [==============================] - 28s 241ms/step - loss: 2.6808 - accuracy: 0.4706 - val_loss: 2.1891 - val_accuracy: 0.6230
Epoch 6/100
118/118 [==============================] - 29s 245ms/step - loss: 1.9952 - accuracy: 0.4961 - val_loss: 1.7400 - val_accuracy: 0.6337
Epoch 7/100
118/118 [==============================] - 29s 242ms/step - loss: 1.6812 - accuracy: 0.5159 - val_loss: 1.5343 - val_accuracy: 0.6422
Epoch 8/100
118/118 [==============================] - 28s 240ms/step - loss: 1.5486 - accuracy: 0.5217 - val_loss: 1.4298 - val_accuracy: 0.6038
Epoch 9/100
118/118 [==============================] - 29s 248ms/step - loss: 1.4519 - accuracy: 0.5366 - val_loss: 1.3535 - val_accuracy: 0.6198
Epoch 10/100
118/118 [==============================] - 145s 1s/step - loss: 1.4012 - accuracy: 0.5398 - val_loss: 1.2943 - val_accuracy: 0.6432
Epoch 11/100
118/118 [==============================] - 28s 241ms/step - loss: 1.3745 - accuracy: 0.5446 - val_loss: 1.2619 - val_accuracy: 0.6741
Epoch 12/100
118/118 [==============================] - 30486s 258s/step - loss: 1.3420 - accuracy: 0.5572 - val_loss: 1.2278 - val_accuracy: 0.6528
Epoch 13/100
118/118 [==============================] - 29s 248ms/step - loss: 1.3023 - accuracy: 0.5779 - val_loss: 1.2226 - val_accuracy: 0.6081
Epoch 14/100
118/118 [==============================] - 29s 245ms/step - loss: 1.2571 - accuracy: 0.5955 - val_loss: 1.1508 - val_accuracy: 0.6933
Epoch 15/100
118/118 [==============================] - 29s 243ms/step - loss: 1.2772 - accuracy: 0.5715 - val_loss: 1.1258 - val_accuracy: 0.7082
Epoch 16/100
118/118 [==============================] - 28s 241ms/step - loss: 1.2492 - accuracy: 0.5827 - val_loss: 1.1227 - val_accuracy: 0.7103
Epoch 17/100
73/118 [=================>............] - ETA: 10s - loss: 1.2367 - accuracy: 0.5899
###Markdown
Tuned ModelOptimal model with filter regions = 7 and feature maps = 200.
###Code
base_text_cnn = models.text_cnn_model(doc_data, kernel_size=7, filter_maps=200, dense_layers=1, dense_nodes=50, dropout_rate=0.3, l2_regularization=0.5)
early_stopping = keras.callbacks.EarlyStopping(monitor='val_loss', patience=2, mode='min', restore_best_weights=True)
history = base_text_cnn.fit(doc_data.text_train, doc_data.y_train, epochs=100, validation_data=(doc_data.text_val, doc_data.y_val), callbacks=[early_stopping])
base_text_cnn.evaluate(doc_data.text_test, doc_data.y_test)
y_probs = base_text_cnn.predict(doc_data.text_test)
y_hat = np.argmax(y_probs, axis=-1)
y = np.argmax(doc_data.y_test, axis=-1)
model_utils.confusion_matrix(y, y_hat, label_map, 'Text Classifier Base Model')
labels = [label for label in label_map]
print(metrics.classification_report(y, y_hat, target_names=labels))
metric_averages = model_utils.iterate_training(
models.text_cnn_model,
doc_data,
10,
y,
'text',
model_params={
'doc_data': doc_data,
'kernel_size': 7,
'filter_maps': 200,
'dense_layers': 1,
'dense_nodes': 50,
'dropout_rate': 0.3,
'l2_regularization': 0.5
}
)
metric_averages
###Output
_____no_output_____ |
autompg_regression.ipynb | ###Markdown
Run a description (info()) to check for missing values
###Code
pd_data.shape
pd_data= pd.read_csv('./files/auto-mpg.csv', header=None)
pd_data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 398 entries, 0 to 397
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 398 non-null float64
1 1 398 non-null int64
2 2 398 non-null float64
3 3 398 non-null object
4 4 398 non-null float64
5 5 398 non-null float64
6 6 398 non-null int64
7 7 398 non-null int64
8 8 398 non-null object
dtypes: float64(4), int64(3), object(2)
memory usage: 28.1+ KB
###Markdown
Be careful with any column whose dtype comes back as object; float and int are fine. Because an object column can hold any kind of value, always check what is actually stored in it!
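For example, a quick way to see what the object columns actually contain before renaming them (column 3 becomes horsepower below; many copies of the auto-mpg file use '?' as a placeholder there, so verify against your own copy):
```python
# Inspect the distinct values of the object-typed columns (3 and 8)
print(pd_data[3].unique()[:20])
print(pd_data[8].nunique(), "distinct name strings")

# If non-numeric placeholders such as '?' appear, coerce and re-check missing values
hp = pd.to_numeric(pd_data[3], errors='coerce')
print(hp.isnull().sum(), "rows could not be parsed as numbers")
```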
###Code
pd_data.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
x=pd_data[['weight']]
y=pd_data[['mpg']]
x.shape, y.shape
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(x,y)
lr.coef_
lr.intercept_
###Output
_____no_output_____
###Markdown
y= -0.00767661x + 46.31736442
###Code
lr.score(x,y)
##
###Output
_____no_output_____
###Markdown
Since the original file has no header, pass header=None when creating the DataFrame to indicate that. pd_data.info() -> check for missing values. Columns whose Dtype is object could be either text or numbers, so check them (excluding columns 3 and 8). Assign column names to pd_data; use weight as the x axis and mpg as the y axis.
###Code
x = pd_data[['weight']]
y = pd_data[['mpg']]
x.shape, y.shape
###Output
_____no_output_____
###Markdown
Import the split functionality (divide the data into two groups: use one to build the equation, and plug the other into the fitted equation to check it)
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x,y)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
###Output
_____no_output_____
###Markdown
Import the linear regression functionality from sklearn.linear_model
###Code
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(X_train, Y_train)
###Output
_____no_output_____
###Markdown
The fitted line equation is stored in lr
###Code
# total data
lr.fit(x,y)
# split data with 1 column
lr.coef_, lr.intercept_
###Output
_____no_output_____
###Markdown
Coefficient of x
###Code
lr.coef_
###Output
_____no_output_____
###Markdown
y-intercept
###Code
lr.intercept_
###Output
_____no_output_____
###Markdown
Linear equation for weight vs. mpg: y = -0.00767661x + 46.31736442. To check how accurate the linear fit is, use score (accuracy)
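As a quick, hedged illustration of reading this equation (the example weight of 3000 is an arbitrary choice), the fitted line can be evaluated by hand and compared against lr.predict:
```python
# Predict mpg for a car weighing 3000, first by hand, then with the model
w = 3000
manual = lr.coef_[0][0] * w + lr.intercept_[0]
print(manual)
print(lr.predict([[w]]))  # should match the manual calculation
```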
###Code
lr.score(x,y)
###Output
_____no_output_____
###Markdown
y = -0.00767661x + 46.31736442
###Code
lr.score(x,y)
###Output
_____no_output_____
###Markdown
y = -0.00767661x_1 + -1.03509013x_2 + 46.31736442
###Code
lr.score(x,y)
x_predict = lr.predict(x)
deviation = y.to_numpy() - x_predict
type(deviation)
deviation
###Output
_____no_output_____
###Markdown
y = 0.14766146x + 12.10283073
###Code
lr.score(x,y) # accuracy
###Output
_____no_output_____
###Markdown
A linear model built through scikit-learn modelling by supplying the two data series.
###Code
y_predicted = lr.predict([[99]])
y_predicted
###Output
_____no_output_____
###Markdown
1. Information stage: collection and processing - identify and handle problem data 2. Training stage: for the machine - select columns - select a model - train - check accuracy 3. Service stage: customer response. The number of X inputs can be increased.
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y)
X_train, X_test, Y_train, Y_test
lr.fit(X_train, Y_train)
lr.coef_, lr.intercept_ # slope, intercept
###Output
_____no_output_____
###Markdown
y = -0.00767661x_1 + -1.03509013x_2 + 46.31736442
###Code
lr.score(x,y)
###Output
_____no_output_____
###Markdown
y = -0.00767661x_1 + -1.03509013x_2 + 46.31736442
###Code
lr.score(x,y)
###Output
_____no_output_____
###Markdown
y = -0.00767661x_1 + - 1.03509013x_2 + 46.31736442
###Code
lr.score(x,y)
###Output
_____no_output_____
###Markdown
Simple Linear Regression ML: drawing the line that fits the data
###Code
import sklearn
import pandas as pd
!dir .\files\auto-mpg.csv
pd_data = pd.read_csv('./files/auto-mpg.csv', header = None)
pd_data.info()
pd_data.shape
pd_data
pd_data.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
x = pd_data[['weight']]
y = pd_data[['mpg']]
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x,y)
X_train.shape, X_test.shape, Y_train.shape, Y_test.shape
x.shape, y.shape
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x,y)
lr.coef_, lr.intercept_
###Output
_____no_output_____
###Markdown
y = -0.00767661x + 46.31736442
###Code
# check with total data
lr.score(x,y)
lr.fit(X_train, Y_train)
lr.coef_, lr.intercept_
# check with a part train data
lr.score(X_train,Y_train)
# check with a part test data
lr.score(X_test,Y_test)
###Output
_____no_output_____ |
club_mahindra_eda.ipynb | ###Markdown
Import packages
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import xlrd
import numpy as np
import seaborn as sns
# extracting sample from the main data
#grouped=g.groupby("booking_date")
#grouped.apply(lambda x: x.sample(frac=0.2)).to_csv("/content/drive/My Drive/DS Training/Meterial/trainsample.csv",index=False)
###Output
_____no_output_____
###Markdown
Read Data
###Code
from google.colab import drive
drive.mount('/content/drive')
path="/content/drive/My Drive/DS Training/Meterial/train.csv"
df=pd.read_csv(path)
df
###Output
_____no_output_____
###Markdown
Basic analysis- shape and dimensions
###Code
df.channel_code==3
df.columns
df.shape
df['memberid'].duplicated().value_counts()
df.size
for column in df.columns:
g=df[column].isnull().value_counts()
print (g)
for column in df.columns:
g=df[column].isnull().value_counts()
print (g[g.index==True])
g=df['season_holidayed_code'].isnull().value_counts()
type(g)
g[g.index==True]
df.dtypes
###Output
_____no_output_____
###Markdown
Conversion of booking dates & date formats and deriving advance booking
###Code
# convert booking date to date format
df['booking_date'] = pd.to_datetime(df['booking_date'],format='%d/%m/%y')
# Extraction of year and month
#df['booking_year'], df['booking_month'] = df['booking_date'].dt.year, df['booking_month'].dt.month
df['booking_year'] = df['booking_date'].dt.year
df['booking_month']= df['booking_date'].dt.month
# convert checkin and checkout dates to date format
df['checkin_date'] = pd.to_datetime(df['checkin_date'],format='%d/%m/%y')
df['checkout_date'] = pd.to_datetime(df['checkout_date'],format='%d/%m/%y')
df['spend_days']= df['checkout_date']-df['checkin_date']
# derive advance booking days
df['advance_booking']= df['checkin_date']-df['booking_date']
df.head()
###Output
_____no_output_____
###Markdown
Pivoting Data and Creating Graphs What is the average booking value per member? How is it trending by year?
###Code
df.pivot_table(values='amount_spent_per_room_night_scaled',columns='booking_year',aggfunc='mean')
plt.figure(figsize=(16, 6))
df.groupby(['booking_year'])['amount_spent_per_room_night_scaled'].mean().plot.bar()
# it's between 7 and 8 on average every year, with no significant change
###Output
_____no_output_____
###Markdown
What are the top resorts in terms of bookings, average revenue, and child-friendliness?
###Code
g= df.pivot_table(index=['resort_id'], values=['reservation_id'], aggfunc='count')
#g['reservation_id']=pd.to_numeric(g['reservation_id'])
g.sort_values(by=['reservation_id'],inplace=True,ascending=False)
g[:5].plot.barh()
# Top 5 resorts in terms of booking
#another way of producing the bar using count plot
plt.figure(figsize=(16, 6))
sns.countplot(df["resort_id"])
g= df.pivot_table(index=['resort_type_code'], values=['amount_spent_per_room_night_scaled'], aggfunc='mean')
g.sort_values(by=['amount_spent_per_room_night_scaled'], inplace=True)
g.plot.bar()
# Resort code 5 is top in terms of revenue
#another way of producing the bar using box plot
plt.figure(figsize=(16, 6))
sns.boxplot(x='resort_type_code',y='amount_spent_per_room_night_scaled',data=df)
#Resort code 5 is top in terms of revenue
df.groupby(['state_code_resort','resort_type_code'])['numberofchildren'].count().reset_index()
plt.figure(figsize=(40,20))
#sns.barplot(x='resort_type_code',y='numberofchildren',data=df)
sns.catplot(x='resort_type_code',y='numberofchildren',col='state_code_resort',data=df,kind='bar',aspect=0.7)
#Resort code 4 is top in terms of number of children
###Output
_____no_output_____
###Markdown
How much time are members spending at the resorts: time spent, seasons, and advance booking time
###Code
df.pivot_table(index=['season_holidayed_code'],values=['roomnights'],columns=['resort_type_code'],aggfunc='sum')
plt.figure(figsize=(16, 6))
df.groupby(['resort_type_code'])['roomnights'].agg(pd.Series.mode).plot.bar()
#Resort code 0 has the highest time spent; 5 and 7 have the least time spent
df.groupby(['resort_type_code'])['advance_booking'].count().plot.bar()
#df['advance_booking'].dtypes
#Resort code 1 has the highest advance bookings(use mean or mode)
plt.figure(figsize=(16, 6))
df.groupby(['resort_type_code'])['season_holidayed_code'].count().plot.bar()
#Resort code 1 has the highest season holiday bookings(use mean or mode)
df.pivot_table(index=['resort_id'],values=['advance_booking'],columns=['resort_region_code'],aggfunc='count')
###Output
_____no_output_____
###Markdown
Are there any resorts that attract more advance bookings, and why?
###Code
plt.figure(figsize=(16, 6))
df.groupby(['resort_type_code'])['advance_booking'].count().plot.bar()
#Resort code 1 has the highest number of advance bookings
#why
df.groupby(['resort_type_code','state_code_resort'])['advance_booking'].count().plot.bar()
#there could be many other reasons but state code resort is one of them
###Output
_____no_output_____
###Markdown
Is there any relationship between advance booking and time spent?
###Code
plt.figure(figsize=(16, 6))
df.groupby(['roomnights'])['advance_booking'].count().plot.bar()
#No
###Output
_____no_output_____
###Markdown
Are there any resorts for specific seasons or events?
###Code
g=df.pivot_table(index=['season_holidayed_code'],values=['reservation_id'],columns=['resort_type_code'],aggfunc='count')
g
plt.figure(figsize=(16, 6))
df.groupby(['season_holidayed_code'])['resort_type_code'].count().plot.bar()
#Resort code 1 and holiday season code 2 is a good combination
###Output
_____no_output_____
###Markdown
Can we group resorts by revenue
###Code
#doubt
g= df.pivot_table(index=['resort_type_code'],values=['amount_spent_per_room_night_scaled'],aggfunc='sum')
g.sort_values(by=['amount_spent_per_room_night_scaled'],inplace=True,ascending=False)
g
plt.figure(figsize=(16, 6))
df.groupby(['resort_type_code'])['amount_spent_per_room_night_scaled'].sum().plot.bar()
#yes.. resort code 1 has the highest revenue
###Output
_____no_output_____ |
019_bayes.ipynb | ###Markdown
Naive Bayes (NB)Recall Bayes' Theorem from stats, based on the idea of conditional probability. Bayes' Theorem tells us that:It states that, for two events A & B, if we know the conditional probability of B given A and the probabilities of A and B, then it is possible to calculate the probability of A given B.$ P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots, x_n \mid y)} {P(x_1, \dots, x_n)} $In stats we also looked at Bayesian Inference - where we built tables to update our probabilities as more data was learned. The Naive Bayes algorithm is just that, but on a larger scale. Each feature updates the probabilities just like in a simple Bayes' table calculation we did by hand. We can show an example:Here is the table of all the features and outcomes, the training data. If we use this to create a model, and make a prediction, one sample looks like:Easy, peasy!Bayes' is an algorithm which works well, accurately and quickly, but only in certain scenarios. The simplicity of the Bayes' Theorem based calculations comes with a few key notes: NB assumes all features are independent. If they are not, accuracy will suffer. In real data, that independence often doesn't exist. NB is generally quite fast. NB often is able to become relatively accurate from small training sets. NB runs into an issue when a value in the test data was not in the training data. Implementations work around this using Laplace smoothing - which just adds a constant (normally alpha) on the top and bottom of the probability equation. NB probability estimates are not to be relied on, even if the classification is accurate. Now, Bayes is based on yes/no probabilities for a target outcome, and categorical features as the only inputs. Implementations of NB differ in how they handle these scenarios; sklearn has several. Two important ones are: Gaussian Naive Bayes - assumes numerical features are distributed along a normal distribution. This is very common as regular Bayes can't handle numerical features. Multinomial Naive Bayes - generates predictions into 3+ outcome classes. Bayes is commonly used for things like spam detection, where high speed yes/no classification is required. Laplace SmoothingOne critical issue with Bayes is a scenario where we get a feature value to predict that wasn't in the training set. For example, what if we had a value for Windy that was "gale force" in something that we tried to predict. There would be no existing probability info for that, since it wasn't in the training data. This is known as the Zero Probability Problem. This is mitigated by something called Laplace Smoothing, which inserts a constant alpha (often/usually 1) on both the top and bottom of the probability calculations. This ensures that we don't encounter a scenario where we are dividing by 0, without substantially changing the probability calculations. Alpha is a hyperparameter that we can select, doing things like a grid search to find the best solution for our data.  Bayes from ScratchWe can build a really simple implementation of Bayes. Our dataset is a bunch of simple categorical variables, the number of records is small, and our target is a boolean (yes/no). Great candidate.
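For reference, the Laplace smoothing described above replaces the raw count-based likelihood with a smoothed version that adds $\alpha$ to the numerator and $\alpha$ times the number of possible feature values to the denominator: $$ P(x_i = v \mid y = c) = \frac{\text{count}(x_i = v,\, y = c) + \alpha}{\text{count}(y = c) + \alpha \, k_i} $$ where $k_i$ is the number of distinct values feature $x_i$ can take; with $\alpha = 1$ no likelihood is ever exactly zero.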
###Code
df = pd.read_csv("data/golf-dataset.csv")
df.head()
class MyNaiveBayes:
"""
Bayes Theorem:
Likelihood * Class prior probability
Posterior Probability = -------------------------------------
Predictor prior probability
P(x|c) * p(c)
P(c|x) = ------------------
P(x)
"""
def __init__(self):
"""
Attributes:
likelihoods: Likelihood of each feature per class
class_priors: Prior probabilities of classes
pred_priors: Prior probabilities of features
features: All features of dataset
"""
self.features = list
self.likelihoods = {}
self.class_priors = {}
self.pred_priors = {}
self.X_train = np.array
self.y_train = np.array
self.train_size = int
self.num_feats = int
def fit(self, X, y):
self.features = list(X.columns)
self.X_train = X
self.y_train = y
self.train_size = X.shape[0]
self.num_feats = X.shape[1]
for feature in self.features:
self.likelihoods[feature] = {}
self.pred_priors[feature] = {}
for feat_val in np.unique(self.X_train[feature]):
self.pred_priors[feature].update({feat_val: 0})
for outcome in np.unique(self.y_train):
self.likelihoods[feature].update({feat_val+"_"+outcome:0})
self.class_priors.update({outcome: 0})
self._calc_class_prior()
self._calc_likelihoods()
self._calc_predictor_prior()
    def _calc_class_prior(self):
        """ P(c) - Prior Class Probability """
        for outcome in np.unique(self.y_train):
            # Fraction of training rows that belong to this outcome
            outcome_count = sum(self.y_train == outcome)
            self.class_priors[outcome] = outcome_count / self.train_size
    def _calc_likelihoods(self):
        """ P(x|c) - Likelihood """
        for feature in self.features:
            for outcome in np.unique(self.y_train):
                # Likelihoods are keyed in the format: feat_val + '_' + outcome
                # P(x|c) = count(feature value within this outcome) / count(outcome)
                outcome_count = sum(self.y_train == outcome)
                feat_counts = self.X_train[feature][self.y_train == outcome].value_counts().to_dict()
                for feat_val, count in feat_counts.items():
                    self.likelihoods[feature][feat_val + '_' + outcome] = count / outcome_count
    def _calc_predictor_prior(self):
        """ P(x) - Evidence """
        for feature in self.features:
            # Probability of each feature value across the whole training set
            feat_counts = self.X_train[feature].value_counts().to_dict()
            for feat_val, count in feat_counts.items():
                self.pred_priors[feature][feat_val] = count / self.train_size
def predict(self, X):
""" Calculates Posterior probability P(c|x) """
results = []
X = np.array(X)
for query in X:
probs_outcome = {}
for outcome in np.unique(self.y_train):
prior = self.class_priors[outcome]
likelihood = 1
evidence = 1
for feat, feat_val in zip(self.features, query):
likelihood *= self.likelihoods[feat][feat_val + '_' + outcome]
evidence *= self.pred_priors[feat][feat_val]
posterior = (likelihood * prior) / (evidence)
probs_outcome[outcome] = posterior
result = max(probs_outcome, key = lambda x: probs_outcome[x])
results.append(result)
return np.array(results)
y = df["Play Golf"]
X = df.drop(columns={"Play Golf"})
X["Windy"] = X["Windy"].astype("str")
nb_clf = MyNaiveBayes()
nb_clf.fit(X, y)
print(accuracy_score(y, nb_clf.predict(X)))
# With Proper Data Prep - though the dataset is small, so this won't be the best example.
#X_train, X_test, y_train, y_test = train_test_split(X, y)
#nb_clf.fit(X_train, y_train)
#nb_preds = nb_clf.predict(X_test)
#print(accuracy_score(y_test, nb_preds))
###Output
_____no_output_____
###Markdown
Predict if we Should Golf on Some Random DaysCreate a dataframe with some days of weather and predict them. Note that the model (probably, unless you made it better) expects True/False to be strings, not booleans. Your dataframe should be in the same format as the feature set - X
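A hedged sketch of what such a dataframe might look like is below; the two example days are made up, and the column names and category spellings are assumptions based on the common version of this dataset, so match them to df.head() above before running:
```python
some_days = pd.DataFrame({
    "Outlook":     ["Rainy", "Sunny"],
    "Temperature": ["Mild", "Hot"],
    "Humidity":    ["High", "Normal"],
    "Windy":       ["True", "False"],   # strings, to match the training data
})
nb_clf.predict(some_days)
```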
###Code
# Complete - add in a dataframe of days.
# Predict if we should golf on those days.
###Output
['No' 'Yes']
###Markdown
SklearnOur Bayes works, but we can use sklearn for something that we are used to using, and is a bit more polished and better written. Multinomial NB is the default; it will work for both binary predictions, like we are doing, and multiple class predictions. Applying Bayes in code is similar to all the other algorithms. Here we'll encode the categories to make it work, since the algorithm doesn't deal with strings; sklearn's implementation requires this, but it isn't inherent to the algorithm.
###Code
y = df["Play Golf"]
X = df.drop(columns={"Play Golf"})
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
mnb = MultinomialNB()
mnb_pipe = Pipeline([
("encode", OneHotEncoder()),
("model", mnb)
])
mnb_pipe.fit(X, y)
sk_preds = mnb_pipe.predict(X)
accuracy_score(sk_preds, y)
# Complete - predict our sample days using the sklearn model
###Output
_____no_output_____
###Markdown
Gaussian BayesAs we noted, regular Naive Bayes doesn't deal with numbers, so we can use Gaussian Bayes to handle those scenarios. Every step of the algorithm just requires a probability that something will occur or not occur. With categorical variables that calculation is very simple - count the number of times that it happens and divide by the total. With numerical values there isn't a direct equivalent. Rather than looking at the probability that something happens or doesn't happen, Gaussian NB calculates the probability of being in class A or B according to a normal distribution of the numerical feature. Outside of the different calculations of probability, the rest of the algorithm works in the same way as before.
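Concretely, for each class $y$ Gaussian NB estimates the mean $\mu_y$ and variance $\sigma_y^2$ of every numerical feature from the training rows of that class, and the likelihood term becomes the normal density $$ P(x_i \mid y) = \frac{1}{\sqrt{2\pi\sigma_y^2}}\exp\!\left(-\frac{(x_i-\mu_y)^2}{2\sigma_y^2}\right) $$ which is then plugged into the same Bayes' Theorem calculation as before.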
###Code
from sklearn.naive_bayes import GaussianNB
df_gaus = pd.read_csv("data/diabetes.csv")
df_gaus_y = df_gaus["Outcome"]
df_gaus_X = df_gaus.drop(columns={"Outcome"})
df_gaus.head()
###Output
_____no_output_____
###Markdown
DistributionsWe can make a plot as a shortcut to see the distributions of the numerical variables split by outcome class. If we look at the comparative distributions we can get a sense of the relative probabilities that are used in the calculations. For example, if we look at the Glucose feature. If we have an example with a glucose value of 100, the probability of that example being in class 0 is quite high, whereas the probability of it being in class 1 is low. If we have a sample that is 200, the probability of that being in class 1 is much higher.
###Code
sns.kdeplot(data=df_gaus, x="Glucose", hue="Outcome")
# Split data - this one has enough data to function properly
X_train_gaus, X_test_gaus, y_train_gaus, y_test_gaus = train_test_split(df_gaus_X, df_gaus_y)
# Model and predict
gaus_NB = GaussianNB()
gaus_NB.fit(X_train_gaus, y_train_gaus)
gaus_preds = gaus_NB.predict(X_test_gaus)
accuracy_score(y_test_gaus, gaus_preds)
###Output
_____no_output_____
###Markdown
ScalingNote that because in Bayes the features are independent of each other, there is no interaction between them with respect to calculations. When the probabilities are calculated for each feature, they don't depend on any other features - contrasted with something like linear regression, where m1*x1 + m2*x2... will. Because of this, Bayes is one of the few things where scaling doesn't matter, though it also doesn't hurt if it is in there.
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipe_gaus = Pipeline([
("scale", StandardScaler()),
("model", GaussianNB())
])
pipe_gaus.fit(X_train_gaus, y_train_gaus)
pipe_gaus_preds = pipe_gaus.predict(X_test_gaus)
accuracy_score(y_test_gaus, pipe_gaus_preds)
###Output
_____no_output_____
###Markdown
ExercisePredict if people have heart disease with Gaussian NB. Note that we have mixed column types for features.
###Code
heart = pd.read_csv("data/heart.csv")
y_h = heart["HeartDisease"]
X_h = heart.drop(columns={"HeartDisease"})
heart.head()
# Complete - predict heart disease
###Output
_____no_output_____
###Markdown
Naive Bayes (NB)Recall Bayes' Theorem from stats, based on the idea of conditional probability. Bayes' Theorem tells us that:It states, for two events A & B, if we know the conditional probability of B given A and the probability of B, then it’s possible to calculate the probability of B given A.$ P(y \mid x_1, \dots, x_n) = \frac{P(y) P(x_1, \dots, x_n \mid y)} {P(x_1, \dots, x_n)} $In stats we also looked at Bayesian Interference - where we built tables to update our probabilites as more data was learned. The Naive Bayes algorithm is just that, but on a larger scale. Each feature updates the probabilities just like in a simple Bayes' table calculation we did by hand. We can show an example:Here is the table of all the features and outcomes, the training data. If we use this to create a model, and make a prediction, one sample looks like:Easy, peasy!Bayes' is an algorithm which works well, accurately and quickly, but only in certain scenarios. The simplicity of the Bayes' Theorm based calcualtions have a few key notes: NB assumes all features are independent. If they are not, accuracy will suffer. In real data, that independance often doesn't exist. NB is generally quite fast. NB often is able to become relatively accurate from small training sets. NB runs into an issue with a value in the test data was not in the training data. Implementations work around this using Laplace smoothing - which just adds a constant (normal alpha) on the top and bottom of the probability equation. NB probability estimates are not to be relied on, even if the classification is accurate. Now, Bayes is based on yes/no probabilites for a target outcome, and categorical features as the only inputs. Implementations of NB differ to handle these scenarios, sklearn has several. Two important ones are: Gaussian Naive Bayes - assumes numerical features are distributed along a normal distribution. This is very common as regular Bayes can't handle numerical features. Multinomial Naive Bayes - generates predictions into 3+ outcome classes. Bayes is commonly used for things like spam detection, where high speed yes/no classification is required. Laplace SmoothingOne critical issue with Bayes is a scenario where we get a feature value to predict that wan't in the training set. For example, what if we had a value for Windy that was "gale force" in something that we tried to predict. There would be no existing probability info for that, since it wasn't in the training data. This is known as the Zero Probability Problem. This is mitigated by something called Laplace Smoothing, which inserts a constant alpha (often/usually 1), on both the top and bottom of the probability calculations. This ensures that we don't encounter a scenario where we are dividing by 0, without substantially changing the probability calculations. Alpha is a hyperparameter that we can select, doing things like a grid search to find the best solution for our data.  Bayes from ScratchWe can build a really simple implementation of Bayes. Our dataset is a bunch of simple categorical variables, the number of records is small, and our target is a boolean (yes/no). Great candidate.
###Code
df = pd.read_csv("data/golf-dataset.csv")
df.head()
class MyNaiveBayes:
"""
Bayes Theorem:
Likelihood * Class prior probability
Posterior Probability = -------------------------------------
Predictor prior probability
P(x|c) * p(c)
P(c|x) = ------------------
P(x)
"""
def __init__(self):
"""
Attributes:
likelihoods: Likelihood of each feature per class
class_priors: Prior probabilities of classes
pred_priors: Prior probabilities of features
features: All features of dataset
"""
self.features = list
self.likelihoods = {}
self.class_priors = {}
self.pred_priors = {}
self.X_train = np.array
self.y_train = np.array
self.train_size = int
self.num_feats = int
def fit(self, X, y):
self.features = list(X.columns)
self.X_train = X
self.y_train = y
self.train_size = X.shape[0]
self.num_feats = X.shape[1]
for feature in self.features:
self.likelihoods[feature] = {}
self.pred_priors[feature] = {}
for feat_val in np.unique(self.X_train[feature]):
self.pred_priors[feature].update({feat_val: 0})
for outcome in np.unique(self.y_train):
self.likelihoods[feature].update({feat_val+"_"+outcome:0})
self.class_priors.update({outcome: 0})
self._calc_class_prior()
self._calc_likelihoods()
self._calc_predictor_prior()
def _calc_class_prior(self):
""" P(c) - Prior Class Probability """
for outcome in np.unique(self.y_train):
# Complete - Calculate class priors
# Store in self.class_priors dictionary
def _calc_likelihoods(self):
""" P(x|c) - Likelihood """
for feature in self.features:
for outcome in np.unique(self.y_train):
# Complete - Calculate likelihoods
# Store in self.likelihoods dictionary
# Note: the likelihoods are stored for both yes and no in the format: feat_val + '_' + outcome
# See sample output for example
def _calc_predictor_prior(self):
""" P(x) - Evidence """
for feature in self.features:
# Caclulate priors for the predictors
# Store in self.pred_priors
# Probability of each outcome for each feature
def predict(self, X):
""" Calculates Posterior probability P(c|x) """
results = []
X = np.array(X)
for query in X:
probs_outcome = {}
for outcome in np.unique(self.y_train):
prior = self.class_priors[outcome]
likelihood = 1
evidence = 1
for feat, feat_val in zip(self.features, query):
likelihood *= self.likelihoods[feat][feat_val + '_' + outcome]
evidence *= self.pred_priors[feat][feat_val]
posterior = (likelihood * prior) / (evidence)
probs_outcome[outcome] = posterior
result = max(probs_outcome, key = lambda x: probs_outcome[x])
results.append(result)
return np.array(results)
y = df["Play Golf"]
X = df.drop(columns={"Play Golf"})
X["Windy"] = X["Windy"].astype("str")
nb_clf = MyNaiveBayes()
nb_clf.fit(X, y)
print(accuracy_score(y, nb_clf.predict(X)))
# With Proper Data Prep - though the dataset is small, so this won't be the best example.
#X_train, X_test, y_train, y_test = train_test_split(X, y)
#nb_clf.fit(X_train, y_train)
#nb_preds = nb_clf.predict(X_test)
#print(accuracy_score(y_test, nb_preds))
###Output
_____no_output_____
###Markdown
Predict if we Should Golf on Some Random DaysCreate a dataframe with some days of weather and predict them. Note that the model (probably, unless you made it better) expects True/False to be strings, not booleans. Your dataframe should be in the same format as the feature set - X
###Code
# Complete - add in a dataframe of days.
# Predict if we should golf on those days.
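# One possible solution (a sketch). The column names and category values below are
# assumptions - they must match X.columns and the values seen in the training data.
sample_days = pd.DataFrame({
    "Outlook": ["Rainy", "Sunny"],
    "Temp": ["Mild", "Hot"],
    "Humidity": ["High", "Normal"],
    "Windy": ["True", "False"],   # strings, to match the training features
})
print(nb_clf.predict(sample_days))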
###Output
['No' 'Yes']
###Markdown
SklearnOur Bayes works, but we can use sklearn for something that we are used to using, which is a bit more polished and better written. Multinomial NB is the default choice here; it will work for both binary predictions, like we are doing, and multi-class predictions. Applying Bayes in code is similar to all the other algorithms. Here we'll encode the categories to make it work: sklearn's implementation doesn't deal with strings and requires the encoding, but it isn't inherent to the algorithm.
###Code
y = df["Play Golf"]
X = df.drop(columns={"Play Golf"})
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
mnb = MultinomialNB()
mnb_pipe = Pipeline([
("encode", OneHotEncoder()),
("model", mnb)
])
mnb_pipe.fit(X, y)
sk_preds = mnb_pipe.predict(X)
accuracy_score(sk_preds, y)
# Complete - predict our sample days using the sklearn model
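# One possible solution (a sketch), reusing the hypothetical sample_days frame from above.
# Note: category values unseen in training would need OneHotEncoder(handle_unknown="ignore").
print(mnb_pipe.predict(sample_days))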
###Output
_____no_output_____
###Markdown
Gaussian BayesAs we noted, regular Naive Bayes doesn't deal with numbers, so we can use Gaussian NB to handle those scenarios. Every step of the algorithm just requires a probability that something will occur or not occur. With categorical variables that calculation is very simple - count the number of times that it happens and divide by the total. With numerical values there isn't a direct equivalent, so rather than looking at the probability that something happens or doesn't happen, Gaussian NB calculates the probability of being in class A or B according to a normal distribution fitted to the numerical feature. Outside of the different calculation of probability, the rest of the algorithm works in the same way as before. 
###Code
from sklearn.naive_bayes import GaussianNB
df_gaus = pd.read_csv("data/diabetes.csv")
df_gaus_y = df_gaus["Outcome"]
df_gaus_X = df_gaus.drop(columns={"Outcome"})
df_gaus.head()
###Output
_____no_output_____
###Markdown
DistributionsWe can make a plot as a shortcut to see the distributions of the numerical variables split by outcome class. If we look at the comparative distributions we can get a sense of the relative probabilities that are used in the calculations. For example, if we look at the Glucose feature. If we have an example with a glucose value of 100, the probability of that example being in class 0 is quite high, whereas the probability of it being in class 1 is low. If we have a sample that is 200, the probability of that being in class 1 is much higher.
###Code
sns.kdeplot(data=df_gaus, x="Glucose", hue="Outcome")
# Split data - this one has enough data to function properly
X_train_gaus, X_test_gaus, y_train_gaus, y_test_gaus = train_test_split(df_gaus_X, df_gaus_y)
# Model and predict
gaus_NB = GaussianNB()
gaus_NB.fit(X_train_gaus, y_train_gaus)
gaus_preds = gaus_NB.predict(X_test_gaus)
accuracy_score(y_test_gaus, gaus_preds)
###Output
_____no_output_____
###Markdown
ScalingNote that because in Bayes the features are independent of each other, there is no interaction between them with respect to the calculations. When the probabilities are calculated for each feature, they don't depend on any other features - contrasted with something like linear regression, where the prediction m1*x1 + m2*x2 + ... combines features on their raw scales. Because of this, Bayes is one of the few algorithms where scaling doesn't matter, though it also doesn't hurt if it is in there.
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
pipe_gaus = Pipeline([
("scale", StandardScaler()),
("model", GaussianNB())
])
pipe_gaus.fit(X_train_gaus, y_train_gaus)
pipe_gaus_preds = pipe_gaus.predict(X_test_gaus)
accuracy_score(y_test_gaus, pipe_gaus_preds)
###Output
_____no_output_____
###Markdown
ExercisePredict if people have heart disease with Gaussian NB. Note that we have mixed column types for features.
###Code
heart = pd.read_csv("data/heart.csv")
y_h = heart["HeartDisease"]
X_h = heart.drop(columns={"HeartDisease"})
heart.head()
# Complete - predict heart disease
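# One possible solution (a sketch): one-hot encode the string-typed columns with
# get_dummies, then fit Gaussian NB as before.
X_h_encoded = pd.get_dummies(X_h)
X_train_h, X_test_h, y_train_h, y_test_h = train_test_split(X_h_encoded, y_h)
heart_nb = GaussianNB()
heart_nb.fit(X_train_h, y_train_h)
print(accuracy_score(y_test_h, heart_nb.predict(X_test_h)))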
###Output
_____no_output_____ |
Week-10_Machine-Learning-2.ipynb | ###Markdown
*Unsupervised learning: Latent Dirichlet allocation (LDA) topic modeling*
###Code
## Install Python package for LDA
# http://pythonhosted.org/lda/getting_started.html
!pip3 install lda
## Importing basic packages
import os
import numpy as np
os.chdir('/sharedfolder/')
!wget https://github.com/pcda17/pcda17.github.io/raw/master/week/10/nyt_articles_11-9-2017.zip
!unzip nyt_articles_11-9-2017.zip
os.chdir('/sharedfolder/nyt_articles_11-9-2017/')
document_list = []
for filename in [item for item in os.listdir('./') if '.txt' in item]:
text_data = open(filename).read()
document_list.append(text_data)
## Importing NLTK stop words
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
import string
stop_words = stopwords.words('english') + ["'s", "'re", '”', '“', '’', '—'] + list(string.punctuation)
string.punctuation
## Tokenizing and removing stop words from our list of documents
documents_filtered = []
for document in document_list:
token_list = word_tokenize(document.lower())
tokens_filtered = [item for item in token_list if (item not in stop_words)]
documents_filtered.append(' '.join(tokens_filtered))
## Viewing a preprocessed document
documents_filtered[30]
## Vectorizing preprocessed essays
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents_filtered)
## Creating a vocabulary list corresponding to the vectors we created above
vocabulary = vectorizer.get_feature_names()
vocabulary[1140:1160]
## Initializing an LDA model: 10 topics and 1500 iterations
import lda
model = lda.LDA(n_topics=10, n_iter=1500, random_state=1)
## Fitting the model using our list of vectors
model.fit(X)
## Viewing the top 100 words in each 'topic'
topic_word = model.topic_word_
n_top_words = 100
for i, topic_distribution in enumerate(topic_word):
topic_words = np.array(vocabulary)[np.argsort(topic_distribution)][:-(n_top_words+1):-1]
print('Topic ' + str(i) + ':')
print(' '.join(topic_words))
print()
###Output
Topic 0:
like one people time many even years would new get way first still much back could make work world also want know say two long think well day part go need see something going called another around good made us right take photo reading life might really come last better every never though story often far things little three look less thought set put best find end away told help old got took days yet later without thing hard asked trying enough feel ago known point play whose others months use came found line home big great among went night
Topic 1:
food art fashion image wine red white cooking style made advertisement like continue club york restaurant thanksgiving main credit recipe look museum wines france though de new dinner chocolate table johnson flavors chef clothing styles chicken plastic bar photo butter van meal shop design paris french tables finger 2015 beer coffee best lakes arnold black fat collection taste menu eat cabernet daniel brand designer color meat chimps favorite fish sea instagram artist guests franc rice and beautiful fresh light shopping nyt dogs animals fruit ice vegetables luxury leaves fine store japanese warm herbs brown recipes tradition sausage dark translation fruits
Topic 2:
said tax percent would company year million continue bill game business last companies could team plan 000 billion advertisement income money main reading pay season financial may players league 20 taxes 10 big national story group six investment deal federal also according firm states fund sales 100 pass major executive high cost expected interest soccer week rate former family top potential least growth foreign record halladay 40 games since win current united workers investors market plans points effort businesses 12 management manager role director industry quarter increase interview silver value term middle benefits 25 chief government president investments sosa able
Topic 3:
said dr people reading continue report university school main climate states story also advertisement college state students health may percent risk according law women year found violence cancer study americans medical mass heart gun research children evidence court student military united female family parents care change patients colleges case transgender department crime 000 2015 agency federal used air studies group among must high death drug attack drugs police hospital athletes deaths officer problem body national system says gender whether published members doctors researchers texas defense died community american families small admissions failure men rule prosecutors general kelley reported patient abortion
Topic 4:
cars car driving self tesla company 17 vehicles technology autonomous ford could 16 future human companies vehicle reality system tech says industry bitcoin would traffic year become model driver world uber software drivers drive road valley create space research might wall start political transportation musk systems ride network real machine electric safety city silicon design urban data 20 engineers already cost 40 may power battery problem virtual cities possible change miller chief size fully executive automated market overall use sharing lyft infrastructure models v2v stop augmented going waymo spare price border lot driverless miles customers tools production 30 automakers based
Topic 5:
said city two street new real park travel ms room estate island home year million building east house apartment three space 000 st photo west school hotel main bedroom neighborhood square five one water beach town place living community percent village also advertisement residents manhattan local high foot bay avenue trip area side reading day story price brooklyn open four com beyond group train puerto property residential long homes public center market including air rooms near window south family half travelers according tour covering spaces another trips co prices private years hurricane nearby find 30 restaurants houses winter boroughs rico
Topic 6:
mr trump president said state democrats china party republican states government american united house election chinese officials would political democratic republicans former wednesday tuesday country campaign virginia mayor main year white continue xi administration voters also obama last ms city since america saudi senate two department 2016 photo north candidates office story justice power washington first conservative war deal issues local donald trade control senator candidate peace vote public leaders news nuclear russia majority politics district national anti congress results run clinton burleigh race advertisement mrs european victory rights council county global term twitter committee elected change law care efforts
Topic 7:
mr said ms photo women family mother children film also wrote story year movie wanted kind men woman music child old star told first advertisement character love moment hop says never life hip always young hamill book shooting man brother artists main school father video show continue church person felt gerwig husband theater lady became fans read knew john black parents wilson tv director friend played daughter son white friends camp wearing sexual hair bird girl words ever moved janelle cast sometimes face fun kids characters saying scene female loved wife griffin death images came wars books sister swift recalled
Topic 8:
new please times york sign newsletter said continue main story reading special updates try must receive newsletters email later enter re box robot subscribe occurred view thank offers agree select address error occasional clicking verify subscribing invalid products services advertisement mr would week news get credit today could every office op including around board de next already person call opinion start editorial latest see commentary contributing earlier weekday columnists writers provoking 30 apple ed 2014 face offering development 2013 forces south rules month key bit comment stop religious rose close sample period required london comes read association program keep size
Topic 9:
new york times information may services use facebook com nytimes email us access time third image including account ads service digital personal please see credit order users nyt online content products policy media share subscription ad company google michaels based page help made apps travel terms advertising free used product party process parties cuba one address also purchase take via certain algorithm review home social news using provide contact user data site rosen technology without privacy ito send app article public delivery tripadvisor questions required print reviews well choose make submit right changes law offer version mobile hubble youtube twitter
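###Markdown
We can also check which topic dominates each document using the fitted model's document-topic distribution (a quick sketch; `doc_topic_` is the attribute exposed by the `lda` package).
###Code
## Sketch: dominant topic per document
doc_topic = model.doc_topic_
for n in range(5):
    print('Document {} -> topic {}'.format(n, doc_topic[n].argmax()))
###Output
_____no_output_____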
###Markdown
▷Assignment Modify the code above: Apply a stemming step to each word before vectorizing the text. See example stemming code in the following cell.
###Code
## Stemming example
from nltk.stem.porter import PorterStemmer
stemmer = PorterStemmer()
print(stemmer.stem('nature'))
print(stemmer.stem('natural'))
print(stemmer.stem('naturalism'))
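## A possible solution for the assignment (a sketch): apply the stemmer inside the
## earlier preprocessing loop, before joining and vectorizing.
documents_stemmed = []
for document in document_list:
    token_list = word_tokenize(document.lower())
    tokens_stemmed = [stemmer.stem(item) for item in token_list if (item not in stop_words)]
    documents_stemmed.append(' '.join(tokens_stemmed))
X_stemmed = CountVectorizer().fit_transform(documents_stemmed)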
###Output
_____no_output_____
###Markdown
*Supervised learning: Naive Bayes classification*
###Code
## Download sample text corpora from GitHub, then unzip.
os.chdir('/sharedfolder/')
## Uncomment the lines below if you need to re-download test corpora we used last week.
#!wget -N https://github.com/pcda17/pcda17.github.io/blob/master/week/8/Sample_corpora.zip?raw=true -O Sample_corpora.zip
#!unzip -o Sample_corpora.zip
os.chdir('/sharedfolder/Sample_corpora')
os.listdir('./')
## Loading Melville novels
os.chdir('/sharedfolder/Sample_corpora/Herman_Melville/')
melville_texts = []
for filename in os.listdir('./'):
text_data = open(filename).read().replace('\n', ' ')
melville_texts.append(text_data)
print(len(melville_texts))
## Loading Austen novels
os.chdir('/sharedfolder/Sample_corpora/Jane_Austen/')
austen_texts = []
for filename in os.listdir('./'):
text_data = open(filename).read().replace('\n', ' ')
austen_texts.append(text_data)
print(len(austen_texts))
## Removing the last novel from each list so we can use it to test our classifier
melville_train_texts = melville_texts[:-1]
austen_train_texts = austen_texts[:-1]
melville_test_text = melville_texts[-1]
austen_test_text = austen_texts[-1]
## Creating a master list of Melville sentences
from nltk.tokenize import sent_tokenize
melville_combined_texts = ' '.join(melville_train_texts)
melville_sentences = sent_tokenize(melville_combined_texts)
print(len(melville_sentences))
melville_sentences[9999]
## Extracting 2000 Melville sentences at random for use as a training set
import random
melville_train_sentences = random.sample(melville_sentences, 2000)
## Creating a list of Melville sentences for our test set
melville_test_sentences = sent_tokenize(melville_test_text)
print(len(melville_test_sentences))
melville_test_sentences[997]
## Creating a master list of Austen sentences
austen_combined_texts = ' '.join(austen_train_texts)
austen_sentences = sent_tokenize(austen_combined_texts)
print(len(austen_sentences))
austen_sentences[8979]
## Extracting 2000 Austen sentences at random for use as a training set
austen_train_sentences = random.sample(austen_sentences, 2000)
## Creating a list of Austen sentences for our test set
austen_test_sentences = sent_tokenize(austen_test_text)
print(len(austen_test_sentences))
austen_test_sentences[1000]
## Combing training data
combined_texts = melville_train_sentences + austen_train_sentences
## Creating list of associated class values:
## 0 for Melville, 1 for Austen
y = [0]*len(melville_train_sentences) + [1]*len(austen_train_sentences)
## Creating vectorized training set using our combined sentence list
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(combined_texts).toarray()
X.shape
## Training a multinomial naive Bayes classifier
## X is a combined list of Melville and Austen sentences (2000 sentences from each)
## y is a list of classes (0 or 1)
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB().fit(X, y)
## Classifying 5 sentences in our Austen test set
# Recall that 0 means Melville & 1 means Austen
from pprint import pprint
input_sentences = austen_test_sentences[3000:3005]
input_vector = vectorizer.transform(input_sentences) ## Converting a list of string to the same
## vector format we used for our training set.
pprint(austen_test_sentences[3000:3005])
classifier.predict(input_vector)
## Classifying 5 sentences in our Melville test set
input_sentences = melville_test_sentences[3000:3005]
input_vector = vectorizer.transform(input_sentences)
pprint(melville_test_sentences[3000:3005])
classifier.predict(input_vector)
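## Optional check (a sketch): overall accuracy on both full test sets
from sklearn.metrics import accuracy_score
test_sentences = melville_test_sentences + austen_test_sentences
test_labels = [0] * len(melville_test_sentences) + [1] * len(austen_test_sentences)
test_vectors = vectorizer.transform(test_sentences)
print(accuracy_score(test_labels, classifier.predict(test_vectors)))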
###Output
_____no_output_____ |
Recommendation System/Collaborative Filtering Based Recommendation/.ipynb_checkpoints/Collaborative Based Recommendation-checkpoint.ipynb | ###Markdown
Recommendation Based on Rating Count
###Code
high_rated_books = pd.DataFrame(ratings_df.groupby('ISBN')['book_rating'].count().sort_values(ascending=False))
high_rated_books.columns=['rating_count']
high_rated_books.head()
mean_of_books = pd.DataFrame(ratings_df.groupby('ISBN')['book_rating'].mean())
mean_of_books.columns=['mean_rating']
books_mean_rating_count = pd.merge(high_rated_books,mean_of_books,on='ISBN')
books_mean_rating_count.head()
# We can see that books have high rating Count but the average/mean rating is very poor.
###Output
_____no_output_____
###Markdown
Users with less than 200 ratings, and books with less than 100 ratings are excluded.
###Code
user_count = ratings_df['userId'].value_counts()
ratings_df = ratings_df[ratings_df['userId'].isin(user_count[user_count>=200].index)]
book_count = ratings_df['ISBN'].value_counts()
ratings_df = ratings_df[ratings_df['ISBN'].isin(book_count[book_count >= 100].index)]  # keep books with at least 100 ratings, per the note above
###Output
_____no_output_____
###Markdown
Collaborative Filtering Using KNN
###Code
combined_book_rating_df = pd.merge(ratings_df,books_df,on='ISBN')
combined_book_rating_df.drop(['author','year_of_pubs','publisher','imageUrlS','imageUrlM','imageUrlL'],inplace=True,axis=1)
combined_book_rating_df.head()
book_rating_count = pd.DataFrame(combined_book_rating_df.groupby('title')['book_rating'].count())
book_rating_count.rename(columns={'book_rating':'rating_count'},inplace=True)
book_rating_count.head()
rating_plus_combined = pd.merge(combined_book_rating_df,book_rating_count,on='title')
rating_plus_combined.head()
# let us consider a threshold value
threshold_value = 50
rating_popular_book = rating_plus_combined.query('rating_count >= @threshold_value')
rating_popular_book.head()
rating_popular_book.shape
###Output
_____no_output_____
###Markdown
Filter to users in US and Canada Only
###Code
merged_df = pd.merge(rating_popular_book,users_df,on='userId')
merged_df.drop('age',axis=1,inplace=True)
merged_df.head()
us_canada_rating = merged_df[merged_df['location'].str.contains('usa|canada')]
us_canada_rating.head()
from scipy.sparse import csr_matrix
us_canada_rating = us_canada_rating.drop_duplicates(['userId','title'])
us_canada_rating_ptable = us_canada_rating.pivot(index='title',columns='userId',values='book_rating').fillna(0)
us_canada_rating_ptable.head()
us_canada_rating_matrix = csr_matrix(us_canada_rating_ptable.values)
us_canada_rating_matrix
# Here the KNN model used is an Unsupervised Model. It is different from the Classification Model.
from sklearn.neighbors import NearestNeighbors
knn_model = NearestNeighbors(metric='cosine',algorithm='brute')
knn_model.fit(us_canada_rating_matrix)
fetch_index = 655#np.random.choice(us_canada_rating_ptable.shape[0])
print(fetch_index)
distances, indices = knn_model.kneighbors(us_canada_rating_ptable.iloc[fetch_index,:].values.reshape(1,-1),n_neighbors=6)
us_canada_rating_ptable.index[fetch_index]
for i in range(0,len(distances.flatten())):
if i == 0:
print("Recommendation for {}".format(us_canada_rating_ptable.index[fetch_index]))
else:
print('{index}:{book}, with distance {distances}'.format(index = i,book = us_canada_rating_ptable.index[indices.flatten()[i]],
distances = distances.flatten()[i]))
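# Possible helper (a sketch): wrap the lookup and kneighbors call so any title present
# in the pivot table can be queried by name instead of a hard-coded row number.
def recommend(title, n=5):
    idx = us_canada_rating_ptable.index.get_loc(title)
    dist, ind = knn_model.kneighbors(
        us_canada_rating_ptable.iloc[idx, :].values.reshape(1, -1), n_neighbors=n + 1)
    return list(us_canada_rating_ptable.index[ind.flatten()[1:]])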
###Output
Recommendation for The Summerhouse
1:Miss Julia Speaks Her Mind : A Novel, with distance 0.7319015561178213
2:Dream Country, with distance 0.7388349914155329
3:Unspeakable, with distance 0.7415487922518107
4:The Smoke Jumper, with distance 0.779693788214397
5:Irish Hearts, with distance 0.7833773016226391
|
ME3_NativeBayes/NaiveBayes.ipynb | ###Markdown
ME3 A simple classification task with Naive Bayes classifier & ROC curve Team Members-Kevin Khong-Wesley Wong SetupFirst, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
%matplotlib notebook
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import pandas as pd
import seaborn as sn
import os
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification, make_blobs
from matplotlib.colors import ListedColormap
from sklearn.datasets import load_breast_cancer
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# to make this notebook's output stable across runs
np.random.seed(42)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = 'Naive Bayesian'
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
#pip install -U scikit-learn
###Output
_____no_output_____
###Markdown
Part 0:Read and run each cell of the example. Confusion matrix - simple example 1A simple example shows what a confusion matrix represents. This example includes two class labels, 0 and 1.
###Code
y_true1 = [1, 0, 0, 1, 1, 0, 1, 1, 0]
y_pred1 = [1, 1, 0, 1, 1, 0, 1, 1, 1]
confusion_mat1 = confusion_matrix(y_true1, y_pred1)
print(confusion_mat1)
# Print classification report
target_names1 = ['Class-0', 'Class-1']
result_metrics1 = classification_report(y_true1, y_pred1, target_names=target_names1)
print(result_metrics1)
# We can also retrieve a dictionary of metrics and access the values using dictionary
result_metrics_dict1 = classification_report(y_true1, y_pred1, target_names=target_names1, output_dict=True)
print(result_metrics_dict1)
###Output
precision recall f1-score support
Class-0 1.00 0.50 0.67 4
Class-1 0.71 1.00 0.83 5
accuracy 0.78 9
macro avg 0.86 0.75 0.75 9
weighted avg 0.84 0.78 0.76 9
{'Class-0': {'precision': 1.0, 'recall': 0.5, 'f1-score': 0.6666666666666666, 'support': 4}, 'Class-1': {'precision': 0.7142857142857143, 'recall': 1.0, 'f1-score': 0.8333333333333333, 'support': 5}, 'accuracy': 0.7777777777777778, 'macro avg': {'precision': 0.8571428571428572, 'recall': 0.75, 'f1-score': 0.75, 'support': 9}, 'weighted avg': {'precision': 0.8412698412698413, 'recall': 0.7777777777777778, 'f1-score': 0.7592592592592591, 'support': 9}}
###Markdown
Confusion matrix - simple example 2A simple example shows what a confusion matrix represents. This example includes four class labels, 0, 1, 2 and 3.
###Code
y_true2 = [1, 0, 0, 2, 1, 0, 3, 3, 3]
y_pred2 = [1, 1, 0, 2, 1, 0, 1, 3, 3]
confusion_mat2 = confusion_matrix(y_true2, y_pred2)
print(confusion_mat2)
target_names2 = ['Class-0', 'Class-1', 'Class-2', 'Class-3']
result_metrics2 = classification_report(y_true2, y_pred2, target_names=target_names2)
print(result_metrics2)
# We can also retrieve a dictionary of metrics and access the values using dictionary
result_metrics_dict2 = classification_report(y_true2, y_pred2, target_names=target_names2, output_dict=True)
print(result_metrics_dict2)
###Output
precision recall f1-score support
Class-0 1.00 0.67 0.80 3
Class-1 0.50 1.00 0.67 2
Class-2 1.00 1.00 1.00 1
Class-3 1.00 0.67 0.80 3
accuracy 0.78 9
macro avg 0.88 0.83 0.82 9
weighted avg 0.89 0.78 0.79 9
{'Class-0': {'precision': 1.0, 'recall': 0.6666666666666666, 'f1-score': 0.8, 'support': 3}, 'Class-1': {'precision': 0.5, 'recall': 1.0, 'f1-score': 0.6666666666666666, 'support': 2}, 'Class-2': {'precision': 1.0, 'recall': 1.0, 'f1-score': 1.0, 'support': 1}, 'Class-3': {'precision': 1.0, 'recall': 0.6666666666666666, 'f1-score': 0.8, 'support': 3}, 'accuracy': 0.7777777777777778, 'macro avg': {'precision': 0.875, 'recall': 0.8333333333333333, 'f1-score': 0.8166666666666667, 'support': 9}, 'weighted avg': {'precision': 0.8888888888888888, 'recall': 0.7777777777777778, 'f1-score': 0.7925925925925926, 'support': 9}}
###Markdown
Naive Bayes Classifiers- Read about the Naive Bayes classifier in Python:https://scikit-learn.org/stable/modules/naive_bayes.html- Check out the difference between model parameters and hyperparameters:https://towardsdatascience.com/model-parameters-and-hyperparameters-in-machine-learning-what-is-the-difference-702d30970f6 1. Synthetic Datasets
###Code
# synthetic dataset for classification (binary)
cmap_bold = ListedColormap(['#FFFF00', '#00FF00', '#0000FF','#000000'])
plt.figure()
plt.title('Sample binary classification problem with two informative features')
# generate X values and y values (labels)
X, y = make_classification(n_samples = 100, n_features=2,
n_redundant=0, n_informative=2,
n_clusters_per_class=1, flip_y = 0.1,
class_sep = 0.5, random_state=0)
# plot the data
plt.scatter(X[:, 0], X[:, 1], marker= 'o', c=y, s=50, cmap=cmap_bold)
plt.show()
###Output
_____no_output_____
###Markdown
Naive Bayes classifier 1 Split the data to training data and test data
###Code
from sklearn.naive_bayes import GaussianNB
# split the data into training data and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
Training: Develop a model using training data
###Code
# create a Naive Bayes classifier using the training data
nbclf = GaussianNB()
nbclf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Testing: evaluate the model using testing data
###Code
# predict class labels on test data
y_pred = nbclf.predict(X_test)
###Output
_____no_output_____
###Markdown
Model Evaluation
###Code
# plot a confusion matrix
confusion_mat = confusion_matrix(y_test, y_pred)
print(confusion_mat)
# Print classification report
target_names = ['Class 0', 'Class 1']
result_metrics = classification_report(y_test, y_pred, target_names=target_names)
print(result_metrics)
# The average accuracy of the model on test data. This is the value of macro avg in results
nbclf.score(X_test, y_test)
from adspy_shared_utilities import plot_class_regions_for_classifier
# This shows the boundaries of classified regions
# build a NB model using training data and display the classified region
plot_class_regions_for_classifier(nbclf, X_train, y_train, X_test, y_test,
'Gaussian Naive Bayes classifier: Dataset 1')
###Output
_____no_output_____
###Markdown
ROC Curve
###Code
from sklearn.metrics import roc_curve, auc
y_score = nbclf.predict_proba(X_test)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_score[:,1])
roc_auc = auc(false_positive_rate, true_positive_rate)
print('Accuracy = ', roc_auc)
# Plotting
plt.title('ROC')
plt.plot(false_positive_rate, true_positive_rate, label=('Accuracy = %0.2f'%roc_auc))
plt.legend(loc='lower right', prop={'size':8})
plt.plot([0,1],[0,1], color='lightgrey', linestyle='--')
plt.xlim([-0.05,1.0])
plt.ylim([0.0,1.05])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
Accuracy = 0.8600000000000001
###Markdown
2. Application to a real-world dataset- Breast Cancer dataset: one of the well-known datasets used in ML.
###Code
# Breast cancer dataset for classification
cancer = load_breast_cancer()
(X_cancer, y_cancer) = load_breast_cancer(return_X_y = True, as_frame=False)
print(X_cancer)
# Print class labels
target_names = cancer.target_names
target_names
###Output
_____no_output_____
###Markdown
Modeling through k-Cross Validation- Create k folds for training and testing (we start with k = 3 and will increase it to 10).- Evaluate model performance for each iteration and obtain the average.
###Code
from sklearn.model_selection import KFold
# We start with k=3 and will increase it to 10.
kf = KFold(n_splits=3, random_state=None, shuffle=True) # Define the split - into 3 folds for now
kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator
print (kf)
###Output
KFold(n_splits=3, random_state=None, shuffle=True)
###Markdown
Apply k-Cross Validation
###Code
nbclf = GaussianNB()
for train_index, test_index in kf.split(X_cancer):
# for each iteration, get training data and test data
X_train, X_test = X_cancer[train_index], X_cancer[test_index]
y_train, y_test = y_cancer[train_index], y_cancer[test_index]
# train the model using training data
nbclf.fit(X_train, y_train)
# show how model performs with training data and test data
print('Accuracy of GaussianNB classifier on training set: {:.2f}'
.format(nbclf.score(X_train, y_train)))
print('Accuracy of GaussianNB classifier on test set: {:.2f}'
.format(nbclf.score(X_test, y_test)))
###Output
Accuracy of GaussianNB classifier on training set: 0.94
Accuracy of GaussianNB classifier on test set: 0.94
Accuracy of GaussianNB classifier on training set: 0.93
Accuracy of GaussianNB classifier on test set: 0.95
Accuracy of GaussianNB classifier on training set: 0.96
Accuracy of GaussianNB classifier on test set: 0.92
###Markdown
Model performance uisng k-Cross Validation
###Code
nbclf2 = GaussianNB()
# !!!!! Please make a summary of the model performance (averaging k folds' results) using result_metrics_dict
for train_index, test_index in kf.split(X_cancer):
# for each iteration, get training data and test data
X_train, X_test = X_cancer[train_index], X_cancer[test_index]
y_train, y_test = y_cancer[train_index], y_cancer[test_index]
# train the model using training data
nbclf2.fit(X_train, y_train)
# predict y values using test data
y_pred = nbclf2.predict(X_test)
confusion_mat = confusion_matrix(y_test, y_pred)
print(confusion_mat)
print(classification_report(y_test, y_pred, target_names=target_names))
# Since we can retrieve a dictionary of metrics and access the values using dictionary,
# now we can sum of the results of each iteration and get the average
result_metrics_dict = classification_report(y_test, y_pred, target_names=target_names, output_dict=True)
print(result_metrics_dict)
###Output
[[ 58 9]
[ 4 119]]
precision recall f1-score support
malignant 0.94 0.87 0.90 67
benign 0.93 0.97 0.95 123
accuracy 0.93 190
macro avg 0.93 0.92 0.92 190
weighted avg 0.93 0.93 0.93 190
{'malignant': {'precision': 0.9354838709677419, 'recall': 0.8656716417910447, 'f1-score': 0.8992248062015503, 'support': 67}, 'benign': {'precision': 0.9296875, 'recall': 0.967479674796748, 'f1-score': 0.9482071713147411, 'support': 123}, 'accuracy': 0.9315789473684211, 'macro avg': {'precision': 0.932585685483871, 'recall': 0.9165756582938964, 'f1-score': 0.9237159887581456, 'support': 190}, 'weighted avg': {'precision': 0.9317314834465196, 'recall': 0.9315789473684211, 'f1-score': 0.9309344425643001, 'support': 190}}
[[ 59 8]
[ 2 121]]
precision recall f1-score support
malignant 0.97 0.88 0.92 67
benign 0.94 0.98 0.96 123
accuracy 0.95 190
macro avg 0.95 0.93 0.94 190
weighted avg 0.95 0.95 0.95 190
{'malignant': {'precision': 0.9672131147540983, 'recall': 0.8805970149253731, 'f1-score': 0.9218749999999999, 'support': 67}, 'benign': {'precision': 0.937984496124031, 'recall': 0.983739837398374, 'f1-score': 0.9603174603174603, 'support': 123}, 'accuracy': 0.9473684210526315, 'macro avg': {'precision': 0.9525988054390646, 'recall': 0.9321684261618736, 'f1-score': 0.9410962301587301, 'support': 190}, 'weighted avg': {'precision': 0.9482914300620021, 'recall': 0.9473684210526315, 'f1-score': 0.9467614348370927, 'support': 190}}
[[ 72 6]
[ 6 105]]
precision recall f1-score support
malignant 0.92 0.92 0.92 78
benign 0.95 0.95 0.95 111
accuracy 0.94 189
macro avg 0.93 0.93 0.93 189
weighted avg 0.94 0.94 0.94 189
{'malignant': {'precision': 0.9230769230769231, 'recall': 0.9230769230769231, 'f1-score': 0.9230769230769231, 'support': 78}, 'benign': {'precision': 0.9459459459459459, 'recall': 0.9459459459459459, 'f1-score': 0.9459459459459459, 'support': 111}, 'accuracy': 0.9365079365079365, 'macro avg': {'precision': 0.9345114345114345, 'recall': 0.9345114345114345, 'f1-score': 0.9345114345114345, 'support': 189}, 'weighted avg': {'precision': 0.9365079365079365, 'recall': 0.9365079365079365, 'f1-score': 0.9365079365079365, 'support': 189}}
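###Markdown
A possible way to complete the summary task above (a sketch): rerun the folds, collect the weighted-average metrics from each `result_metrics_dict`, and average them.
###Code
## Sketch: average the weighted-avg metrics across the k folds
fold_reports = []
for train_index, test_index in kf.split(X_cancer):
    X_tr, X_te = X_cancer[train_index], X_cancer[test_index]
    y_tr, y_te = y_cancer[train_index], y_cancer[test_index]
    nbclf2.fit(X_tr, y_tr)
    rep = classification_report(y_te, nbclf2.predict(X_te), target_names=target_names, output_dict=True)
    fold_reports.append(rep['weighted avg'])
avg_metrics = {m: np.mean([r[m] for r in fold_reports]) for m in ['precision', 'recall', 'f1-score']}
print('Average over folds:', avg_metrics)
###Output
_____no_output_____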
###Markdown
ROC CurveThe example shows a ROC curve using training data and test data for one time. This can be done in k-Cross Validation.
###Code
from sklearn.metrics import roc_curve, auc
X_train, X_test, y_train, y_test = train_test_split(X_cancer, y_cancer, random_state = 0)
y_score = nbclf2.predict_proba(X_test)
false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_score[:,1])
roc_auc = auc(false_positive_rate, true_positive_rate)
print('Accuracy = ', roc_auc)
# Plotting
plt.title('ROC')
plt.plot(false_positive_rate, true_positive_rate, label=('Accuracy = %0.2f'%roc_auc))
plt.legend(loc='lower right', prop={'size':8})
plt.plot([0,1],[0,1], color='lightgrey', linestyle='--')
plt.xlim([-0.05,1.0])
plt.ylim([0.0,1.05])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
###Output
Accuracy = 0.9907756813417191
###Markdown
ME3 Part 1 Build Naive Bayes classifiers on a well-known dataset, iris dataset. You are asked to build NB classifiers on two different datasets: (1) the original dataset (the data is not normalized) and (2) the normalized dataset. Use k-cross validation to evaluate the model performance.
###Code
from IPython.display import Image
Image("images/iris.png")
###Output
_____no_output_____
###Markdown
Dataset 1: irisObtain the data through either (1) or (2). - (1) You can read the data from sklearn.datasets using load_iris()- (2) you can directly read the data from a local file: iris.csv is stored in a folder "data"Run one of the two. (1) Obtain the data from sklearn.datasets
###Code
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data # all four measurements: sepal length/width and petal length/width
y = iris.target
print(iris.target_names)
print(X)
print(y)
###Output
['setosa' 'versicolor' 'virginica']
[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]
[5.4 3.7 1.5 0.2]
[4.8 3.4 1.6 0.2]
[4.8 3. 1.4 0.1]
[4.3 3. 1.1 0.1]
[5.8 4. 1.2 0.2]
[5.7 4.4 1.5 0.4]
[5.4 3.9 1.3 0.4]
[5.1 3.5 1.4 0.3]
[5.7 3.8 1.7 0.3]
[5.1 3.8 1.5 0.3]
[5.4 3.4 1.7 0.2]
[5.1 3.7 1.5 0.4]
[4.6 3.6 1. 0.2]
[5.1 3.3 1.7 0.5]
[4.8 3.4 1.9 0.2]
[5. 3. 1.6 0.2]
[5. 3.4 1.6 0.4]
[5.2 3.5 1.5 0.2]
[5.2 3.4 1.4 0.2]
[4.7 3.2 1.6 0.2]
[4.8 3.1 1.6 0.2]
[5.4 3.4 1.5 0.4]
[5.2 4.1 1.5 0.1]
[5.5 4.2 1.4 0.2]
[4.9 3.1 1.5 0.2]
[5. 3.2 1.2 0.2]
[5.5 3.5 1.3 0.2]
[4.9 3.6 1.4 0.1]
[4.4 3. 1.3 0.2]
[5.1 3.4 1.5 0.2]
[5. 3.5 1.3 0.3]
[4.5 2.3 1.3 0.3]
[4.4 3.2 1.3 0.2]
[5. 3.5 1.6 0.6]
[5.1 3.8 1.9 0.4]
[4.8 3. 1.4 0.3]
[5.1 3.8 1.6 0.2]
[4.6 3.2 1.4 0.2]
[5.3 3.7 1.5 0.2]
[5. 3.3 1.4 0.2]
[7. 3.2 4.7 1.4]
[6.4 3.2 4.5 1.5]
[6.9 3.1 4.9 1.5]
[5.5 2.3 4. 1.3]
[6.5 2.8 4.6 1.5]
[5.7 2.8 4.5 1.3]
[6.3 3.3 4.7 1.6]
[4.9 2.4 3.3 1. ]
[6.6 2.9 4.6 1.3]
[5.2 2.7 3.9 1.4]
[5. 2. 3.5 1. ]
[5.9 3. 4.2 1.5]
[6. 2.2 4. 1. ]
[6.1 2.9 4.7 1.4]
[5.6 2.9 3.6 1.3]
[6.7 3.1 4.4 1.4]
[5.6 3. 4.5 1.5]
[5.8 2.7 4.1 1. ]
[6.2 2.2 4.5 1.5]
[5.6 2.5 3.9 1.1]
[5.9 3.2 4.8 1.8]
[6.1 2.8 4. 1.3]
[6.3 2.5 4.9 1.5]
[6.1 2.8 4.7 1.2]
[6.4 2.9 4.3 1.3]
[6.6 3. 4.4 1.4]
[6.8 2.8 4.8 1.4]
[6.7 3. 5. 1.7]
[6. 2.9 4.5 1.5]
[5.7 2.6 3.5 1. ]
[5.5 2.4 3.8 1.1]
[5.5 2.4 3.7 1. ]
[5.8 2.7 3.9 1.2]
[6. 2.7 5.1 1.6]
[5.4 3. 4.5 1.5]
[6. 3.4 4.5 1.6]
[6.7 3.1 4.7 1.5]
[6.3 2.3 4.4 1.3]
[5.6 3. 4.1 1.3]
[5.5 2.5 4. 1.3]
[5.5 2.6 4.4 1.2]
[6.1 3. 4.6 1.4]
[5.8 2.6 4. 1.2]
[5. 2.3 3.3 1. ]
[5.6 2.7 4.2 1.3]
[5.7 3. 4.2 1.2]
[5.7 2.9 4.2 1.3]
[6.2 2.9 4.3 1.3]
[5.1 2.5 3. 1.1]
[5.7 2.8 4.1 1.3]
[6.3 3.3 6. 2.5]
[5.8 2.7 5.1 1.9]
[7.1 3. 5.9 2.1]
[6.3 2.9 5.6 1.8]
[6.5 3. 5.8 2.2]
[7.6 3. 6.6 2.1]
[4.9 2.5 4.5 1.7]
[7.3 2.9 6.3 1.8]
[6.7 2.5 5.8 1.8]
[7.2 3.6 6.1 2.5]
[6.5 3.2 5.1 2. ]
[6.4 2.7 5.3 1.9]
[6.8 3. 5.5 2.1]
[5.7 2.5 5. 2. ]
[5.8 2.8 5.1 2.4]
[6.4 3.2 5.3 2.3]
[6.5 3. 5.5 1.8]
[7.7 3.8 6.7 2.2]
[7.7 2.6 6.9 2.3]
[6. 2.2 5. 1.5]
[6.9 3.2 5.7 2.3]
[5.6 2.8 4.9 2. ]
[7.7 2.8 6.7 2. ]
[6.3 2.7 4.9 1.8]
[6.7 3.3 5.7 2.1]
[7.2 3.2 6. 1.8]
[6.2 2.8 4.8 1.8]
[6.1 3. 4.9 1.8]
[6.4 2.8 5.6 2.1]
[7.2 3. 5.8 1.6]
[7.4 2.8 6.1 1.9]
[7.9 3.8 6.4 2. ]
[6.4 2.8 5.6 2.2]
[6.3 2.8 5.1 1.5]
[6.1 2.6 5.6 1.4]
[7.7 3. 6.1 2.3]
[6.3 3.4 5.6 2.4]
[6.4 3.1 5.5 1.8]
[6. 3. 4.8 1.8]
[6.9 3.1 5.4 2.1]
[6.7 3.1 5.6 2.4]
[6.9 3.1 5.1 2.3]
[5.8 2.7 5.1 1.9]
[6.8 3.2 5.9 2.3]
[6.7 3.3 5.7 2.5]
[6.7 3. 5.2 2.3]
[6.3 2.5 5. 1.9]
[6.5 3. 5.2 2. ]
[6.2 3.4 5.4 2.3]
[5.9 3. 5.1 1.8]]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
###Markdown
(2) Read the data from a local file: iris.csv is stored in a folder "data"
###Code
# read data from CSV file to dataframe
iris = pd.read_csv('./data/iris.csv')
# define target_namees (class lables)
target_names = ['setosa', 'versicolor', 'virginica']
print(iris.head())
print(iris.tail())
# X contains the first four columns, y contains class labels
#X = iris_data.iloc[:, [0,1,2,3]]
X = iris.drop(['Name', 'Class'], axis=1)
y = iris.iloc[:, [5]]
print( X.head())
print(y.head())
###Output
SepalLength SepalWidth PetalLength PetalWidth Name Class
0 5.1 3.5 1.4 0.2 Iris-setosa 0
1 4.9 3.0 1.4 0.2 Iris-setosa 0
2 4.7 3.2 1.3 0.2 Iris-setosa 0
3 4.6 3.1 1.5 0.2 Iris-setosa 0
4 5.0 3.6 1.4 0.2 Iris-setosa 0
SepalLength SepalWidth PetalLength PetalWidth Name Class
145 6.7 3.0 5.2 2.3 Iris-virginica 2
146 6.3 2.5 5.0 1.9 Iris-virginica 2
147 6.5 3.0 5.2 2.0 Iris-virginica 2
148 6.2 3.4 5.4 2.3 Iris-virginica 2
149 5.9 3.0 5.1 1.8 Iris-virginica 2
SepalLength SepalWidth PetalLength PetalWidth
0 5.1 3.5 1.4 0.2
1 4.9 3.0 1.4 0.2
2 4.7 3.2 1.3 0.2
3 4.6 3.1 1.5 0.2
4 5.0 3.6 1.4 0.2
Class
0 0
1 0
2 0
3 0
4 0
###Markdown
Tasks:- First, run basic Python functions for checking the data. - describe(), info(), isnull(), boxplot(), etc. - Your modeling analysis should be done on two different datasets, (1) the original dataset and (2) the normalized dataset.
###Code
#Normalizing Iris Data Frame
normalized_iris = iris[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Class']]
normalized_iris = normalized_iris.apply(lambda x:(x - x.min(axis = 0)) / (x.max(axis=0)-x.min(axis = 0)))
print(iris.describe())
print(normalized_iris.describe())
print(normalized_iris.info())
print(iris.info())
print(normalized_iris.isnull())
print(iris.isnull())
iris.boxplot()
normalized_iris.boxplot()
###Output
_____no_output_____
###Markdown
(1) NB classifier using the original dataset- Create Naive Bayes classifier. - A framework of k-cross validation (k = 3).- Display confusion matrix (a matrix with numbers).- Print a summary of performance metrics.- Plot ROC curves (this task is done. See the example code segment).
###Code
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import KFold
#Attempt to convert dataframe into data arrays
X_as_data_array = X.to_numpy()
Y_as_data_array = y.to_numpy()
# We start with k=3 and will increase it to 10.
kf = KFold(n_splits=3, random_state=None, shuffle=True) # Define the split - into 3 folds for now
kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator
print (kf)
nbclf = GaussianNB()
for train_index, test_index in kf.split(X_as_data_array):
X_train = X_as_data_array[train_index]
X_test = X_as_data_array[test_index]
y_train = Y_as_data_array[train_index]
y_test = Y_as_data_array[test_index]
    # train the model using training data
nbclf.fit(X_train, y_train.ravel())
# predict y values using test data
y_pred = nbclf.predict(X_test)
confusion_mat = confusion_matrix(y_test, y_pred)
print(confusion_mat)
print(classification_report(y_test, y_pred, target_names=target_names))
###Output
KFold(n_splits=3, random_state=None, shuffle=True)
[[15 0 0]
[ 0 17 0]
[ 0 0 18]]
precision recall f1-score support
setosa 1.00 1.00 1.00 15
versicolor 1.00 1.00 1.00 17
virginica 1.00 1.00 1.00 18
accuracy 1.00 50
macro avg 1.00 1.00 1.00 50
weighted avg 1.00 1.00 1.00 50
[[19 0 0]
[ 0 15 2]
[ 0 3 11]]
precision recall f1-score support
setosa 1.00 1.00 1.00 19
versicolor 0.83 0.88 0.86 17
virginica 0.85 0.79 0.81 14
accuracy 0.90 50
macro avg 0.89 0.89 0.89 50
weighted avg 0.90 0.90 0.90 50
[[16 0 0]
[ 0 15 1]
[ 0 1 17]]
precision recall f1-score support
setosa 1.00 1.00 1.00 16
versicolor 0.94 0.94 0.94 16
virginica 0.94 0.94 0.94 18
accuracy 0.96 50
macro avg 0.96 0.96 0.96 50
weighted avg 0.96 0.96 0.96 50
###Markdown
ROC Curve- This part is done. This code assumes that your NB classifier is defined as nbclf. - The code segment shows how to draw ROC curves for multi-classification where there are more than two class labels.
###Code
from sklearn.preprocessing import label_binarize
import warnings
warnings.filterwarnings(action= 'ignore')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# we assume that your NB classifier's name is nbclf.
# Otherwise, you need to modify the name of the model.
y_score = nbclf.predict_proba(X_test)
y_test = label_binarize(y_test, classes=[0,1,2])
n_classes = 3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Plot of a ROC curve for a specific class
for i in range(n_classes):
print("accuracy: " , roc_auc[i])
plt.figure()
plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example for class ' + str(i) )
plt.legend(loc="lower right")
plt.show()
###Output
accuracy: 1.0
###Markdown
(2) NB classifier using the normalized dataset- Normalize the data - Make sure that you normalized only X values. - Create Naive Bayes classifier. - A framework of k-cross validation (k = 3).- Display confusion matrix (a matrix with numbers).- Print a summary of performance metrics.- Plot ROC curves (this task is done. See the example code segment).
###Code
#Normalizing DF's X values
normalized_iris = iris[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]
normalized_iris = normalized_iris.apply(lambda x:(x - x.min(axis = 0)) / (x.max(axis=0)-x.min(axis = 0)))
# X contains the first four columns, y contains class labels
#X = iris_data.iloc[:, [0,1,2,3]]
X = normalized_iris
y = iris.iloc[:, [5]]
print(X.head())
print(y.head())
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import KFold
#Attempt to convert dataframe into data arrays
X_as_data_array = X.to_numpy()
Y_as_data_array = y.to_numpy()
# We start with k=3 and will increase it to 10.
kf = KFold(n_splits=3, random_state=None, shuffle=True) # Define the split - into 3 folds for now
kf.get_n_splits(X) # returns the number of splitting iterations in the cross-validator
print (kf)
nbclf = GaussianNB()
for train_index, test_index in kf.split(X):
X_train, X_test = X_as_data_array[train_index], X_as_data_array[test_index]
y_train, y_test = Y_as_data_array[train_index], Y_as_data_array[test_index]
# train the model using training data
nbclf.fit(X_train, y_train.ravel())
# predict y values using test data
y_pred = nbclf.predict(X_test)
confusion_mat = confusion_matrix(y_test, y_pred)
print(confusion_mat)
print(classification_report(y_test, y_pred, target_names=target_names))
result_metrics_dict = classification_report(y_test, y_pred, target_names=target_names, output_dict=True)
print(result_metrics_dict)
from sklearn.preprocessing import label_binarize
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 0)
# we assume that your NB classifier's name is nbclf.
# Otherwise, you need to modify the name of the model.
y_score = nbclf.predict_proba(X_test)
y_test = label_binarize(y_test, classes=[0,1,2])
n_classes = 3
# Compute ROC curve and ROC area for each class
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Plot of a ROC curve for a specific class
for i in range(n_classes):
print("accuracy: " , roc_auc[i])
plt.figure()
plt.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f)' % roc_auc[i])
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example for class ' + str(i) )
plt.legend(loc="lower right")
plt.show()
###Output
accuracy: 1.0
|
notebooks/record_train/example_basic_motion.ipynb | ###Markdown
Execute the following block of code by selecting it and clicking ``ctrl + enter`` to create an ``NvidiaRacecar`` class.
###Code
from jetracer.nvidia_racecar import NvidiaRacecar
car = NvidiaRacecar()
###Output
_____no_output_____
###Markdown
The ``NvidiaRacecar`` implements the ``Racecar`` class, so it has two attributes ``throttle`` and ``steering``. We can assign values in the range ``[-1, 1]`` to these attributes. Execute the following to set the steering to 0.4.> If the car does not respond, it may still be in ``manual`` mode. Flip the manual override switch on the RC transmitter.
###Code
car.steering = 0.3
###Output
_____no_output_____
###Markdown
The ``NvidiaRacecar`` class has two values ``steering_gain`` and ``steering_offset`` that can be used to calibrate the steering.We can view the default values by executing the cells below.
###Code
print(car.steering_gain)
print(car.steering_offset)
###Output
0.0
###Markdown
The final steering value is computed using the equation$y = a \times x + b$Where,* $a$ is ``car.steering_gain``* $b$ is ``car.steering_offset``* $x$ is ``car.steering``* $y$ is the value written to the motor driverYou can adjust these values to calibrate the car so that setting a value of ``0`` steers straight, a value of ``1`` turns fully right, and ``-1`` fully left. To set the throttle of the car to ``0.1``, you can call the following.> Give JetRacer lots of space to move, and be ready on the manual override, JetRacer is *fast*
###Code
car.throttle = 0.1
###Output
_____no_output_____
###Markdown
The throttle also has a gain value that could be used to control the speed response. The throttle output is computed as$y = a \times x$Where,* $a$ is ``car.throttle_gain``* $x$ is ``car.throttle``* $y$ is the value written to the speed controllerExecute the following to print the default gain
###Code
print(car.throttle_gain)
###Output
0.8
###Markdown
Set the following to limit the throttle to half
###Code
car.throttle_gain = 0.5
###Output
_____no_output_____ |
1_tridy.ipynb | ###Markdown
Algorithmization and Programming 2, Exercise 1: Implementing custom classes Fiser - definition of custom classes in Python:* constructors (__init__)* common methods: with return value without changing self (typical for immutable objects), example string, without return value (change self): objects that change states, example list, methods that both modify self and return a value (not very useful, possible use - chaining modifying calls that return a modified self)* all three approaches on a simple class, e.g. sheet of paper (tuple of sizes) with operations such as halving, rotation, etc.* attributes and properties (getter and setter methods, property decorators) - importance of encapsulation* special (magic/dunder) methods: initially just __str__, more as they come in handyOngoing topics* basic collections and working with them (including the collections package)* standard packages: re (basics), datetime, enum, math, random (generating in non-uniform distribution), statistics, itertools, pathlib, csv, zlib, etc.* documentation: basic typing, doctest, etc.* external packages: pillow, requests, numpy (basics)* GUI: Tkinter or Kivy (please send me your preferences)theory and implementation in lectures* stack and queue using list/deque* simple linked list* binary search* insert/select sort, bucket sort (I don't like bubble sort)* heap sort* binary tree - basic operationsonly theory in lectures (implementation in exercises, cognitive apprenticeship)* circular queue, queue using a linked list* heap* bidirectional linked list* merge or quick sort* binary tree - (delete, more complex operations) 1.1 Motivation 1.1.1 ProgrammingWhat actually is programming? Programming is the activity of turning ideas into executable code. These ideas break down into two kinds: data (attributes) and commands (operations, behavior). A set of commands that does something useful is called an algorithm (load data from a file, sort a collection of data). The way ideas are formed and written down can differ from programmer to programmer. There are certain typical ways of thinking about programming, and these are called programming paradigms. 1.1.2 Programming paradigmsThe two basic paradigms are imperative (a program is a set of commands/imperatives over data) and declarative (a program is a specification of the result we want from it). You will learn declarative programming in the databases course (e.g.: SELECT * FROM studenti WHERE známka > 2, i.e. select all data about students from the table studenti, but only for students whose grade is worse than 2). In declarative programming we do not say at all which commands the program should consist of in order to find this data. The imperative paradigm is the one you used in KI/APR1. 1.1.3 Imperative programmingThe imperative paradigm further splits into subtypes, and the naming conventions are a bit chaotic and ambiguous. If we decompose ideas into algorithms that are isolated in subprograms (functions and procedures) and call them from the main block (typically main), the paradigm is called procedural, or also structured. With this approach a program is designed in two ways: top-down and bottom-up. In the top-down approach we start from the main block and think about which sub-parts we will need; then we think about which sub-parts are needed to build those sub-parts. In the bottom-up approach we first create the most concrete algorithms and then compose more general algorithms from these pieces. 
If a program consists only of functions that call other functions and work with their return values (which completely avoids changing object state - see later in this notebook), the paradigm is called functional. 1.1.4 Object-oriented programmingOne of the dominant imperative paradigms today is object-oriented programming (OOP). This paradigm takes considerable effort to master: it has a complex and rather abstract terminology, and it takes several years of use before you command it well enough to properly design and implement maintainable and reliable applications. One of the reasons it emerged was the motivation to increase software quality by modeling entities of the real world (entity = an abstraction of something real). Commands are here called operations of an entity and data are called attributes of an entity. Modeling the entities of our world should in theory simplify software design, since we model things around us that we can point at (an employee, a chip card, a database). Each language introduces the terminology a little differently. For example, Java and C use the term field for attributes, C++ uses member variables, JavaScript uses properties, and so on. For the algorithms built from commands, the term method is used almost everywhere. In Python, attributes are called data members (instance and class) and operations are called methods.
###Code
#operation of an entity
def stekej(pes):
    return "haf haf"
#attributes of an entity
azor = {
    "jmeno": "Azor",
    "majitele": ["Jana", "Petr"]
}
#an entity with certain attributes performs the operation
print(stekej(pes=azor))
#but unfortunately this also works
honza = "Honza Novak"
print(stekej(pes=honza))
#the operation is not coupled with the entity holding the attributes
###Output
haf haf
haf haf
###Markdown
1.2 ClassIn the previous code you saw that the operations are not coupled with the entity that holds the attributes, which was realized as a dictionary of data. In OOP we have a construct for this coupling called a class. A class is a template of the common attributes (data members) and operations (methods) that all entities (objects) of this type will have. To create (instantiate = materialize) an object of a given class, a special method called the constructor is invoked (in Python it is also called the initializer). The body of the constructor assigns all data members to the object, which is typically referred to by the word self (another word could be used, but the convention is to use self). The values of the data members are passed into the constructor as arguments. Instantiation is performed by calling the class name with arguments (most languages use the keyword new, but Python does not).Self-study link: [OOP terminology](https://www.tutorialspoint.com/oop-terminology-in-python)
###Code
#class definition
class Pes:
    #constructor - the method that creates a new instance (materialization) of the class = an object
    def __init__(self, jmeno, majitele):
        self.jmeno = jmeno
        self.majitele = majitele
    #method
    def stekej(self):
        return "haf haf"
azor = Pes("Azor", ["Jana", "Petr"]) #instantiation of the class
print(azor.stekej()) #calling an operation = a method
print(azor.majitele) #calling an attribute = a data member
###Output
haf haf
['Jana', 'Petr']
###Markdown
1.3 MethodsPython has three kinds of methods in its syntax - instance, class and static methods. Instance methods are the operations of an object and can manipulate the object's data members (these are the ordinary methods). Instances also have access to their class data members, whose values are the same for all instances at any given moment (the instances share the class state). That is why we also need methods that can work with the class data members, i.e. the data members shared by all instances of the class. Finally, one more construct comes in handy: static methods. These methods do not work with any data members and just perform some operation. Various utility libraries are implemented this way, and there is no need to create instances of such classes. Most languages have only static and instance methods, where the static method coincides with the class method.
###Code
class Pes:
    krmivo = ['granule', 'maso', 'gauc'] #data member that is a class variable
    def __init__(self, jmeno, majitele, zvuk_stekani):
        self.jmeno = jmeno #data member that represents an instance variable
        self.majitele = majitele
        self.zvuk_stekani = zvuk_stekani
    def stekej(self): #instance method, self is a reference to the instance
        return self.zvuk_stekani #a method that returns instance variables is called a getter (accessor)
    def zmen_zvuk(self, novy_zvuk): #instance methods can modify the instance's data members
        self.zvuk_stekani = novy_zvuk #a method that modifies instance data members is called a setter (mutator)
    @classmethod #decorator - adds meaning to the structure below it
    def pridej_krmivo(cls, krmivo): #class method, cls is a reference to the class
        cls.krmivo.append(krmivo) #it may modify the class data members
    @staticmethod
    def jak_dela_pes(): #static method
        return "haf haf" #it cannot modify any data members
#no instance is needed to call this method
print(Pes.jak_dela_pes())
azor = Pes("Azor", ["Jana", "Petr"], "vrrr haf vrrr")
print(azor.stekej()) #calling an instance method that is an accessor
azor.zmen_zvuk("haficky hafi") #calling an instance method that is a mutator
print(azor.stekej()) #the instance's variable has been changed
print(azor.krmivo) #reading a class variable; all instances share this data
Pes.pridej_krmivo("knedlo, vepro, zelo") #changing a class variable via a class method
print(azor.krmivo) #all instances see the changed class data
rita = Pes("Rita", ["Jan", "Milena"], "Rghhh wrrr") #we verify this by creating another instance
print(rita.krmivo)
###Output
['granule', 'maso', 'gauc', 'knedlo, vepro, zelo', 'knedlo, vepro, zelo']
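###Markdown
A common use of `@classmethod` that the example above does not show is an alternative constructor: because the method receives the class itself as `cls`, it can build and return a new instance. The following sketch extends the idea; the `from_slovnik` name and the subclass are purely illustrative.
###Code
class PesZeSlovniku(Pes): # reuse the Pes class defined above
    @classmethod
    def from_slovnik(cls, data): # alternative constructor: builds an instance from a dictionary
        return cls(data["jmeno"], data["majitele"], data.get("zvuk_stekani", "haf haf"))
baryk = PesZeSlovniku.from_slovnik({"jmeno": "Baryk", "majitele": ["Eva"]})
print(baryk.jmeno, baryk.stekej())
###Output
_____no_output_____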
###Markdown
1.4 Encapsulation 1.4.1 Information hiding and encapsulation One of the important concepts in OOP is the principle of encapsulation. Objects (instances of classes) have their instance data members (the variables defined in the constructor). At any given moment of execution these have concrete values, and the set of those values is called the state. You can think of state much like in high-school thermodynamics and molecular physics: every system (the piece of the world we care about) has a state fully described by some equation and its state variables (positions and momenta of particles - the Schrödinger equation; pressure/temperature/volume/number of particles of a gas - the ideal gas equation of state; wavelength/amplitude/phase shift of a wave - the wave equation). Every system goes through state changes. In programming this might be a web data parser that first opens a URL, reads the HTML text, parses it into a tree via the DOM, finds the leaves (text inside tags) and does something with them. If the state is corrupted during this process (e.g. the loaded HTML text in some variable is changed), the program fails. For that reason we try to restrict what may and may not be done with an object's state. Controlling the visibility of, and the operations allowed on, data members is called encapsulation in OOP. Link for self-study: [Encapsulation](https://en.wikipedia.org/wiki/Encapsulation_(computer_programming)) 1.4.2 Access modifiers Encapsulation is implemented with access modifiers and with special methods of two kinds - accessors (getters) and mutators (setters). Access modifiers exist so that an exception is raised when someone tries to reach a data member from the public space. Modifiers fall into three categories - public, protected and private. If a data member is public, it can be accessed from anywhere in the program. If it is protected, it can no longer be accessed publicly; it is available only to subclasses of the class (see the inheritance exercise) or to the object itself (the object knows its own data). A private member is accessible only from the object itself. Python, unfortunately, does not enforce these modifiers very strictly: in Python all data members are public. If we prefix them with two underscores they become private (the interpreter raises an exception when they are accessed from outside). If we mark them as protected (a single underscore), the interpreter unfortunately raises no exception, but keep in mind that this is what the programmer intended. Link for self-study: [Access modifiers](https://www.tutorialsteacher.com/python/public-private-protected-modifiers) 1.4.3 Properties Protection has to be handled additionally through the special methods mentioned at the beginning - accessors and mutators, which Python realizes with the @property and @var.setter decorators. If a data attribute has an accessor, it can be read; if it also has a mutator, it can be changed as well. Write-only data members are a little more involved in Python than in other languages - you have to create an accessor that raises an exception manually and then a mutator with the actual implementation. Data members with accessors and mutators are called properties. Link for self-study: [Properties in Python](https://www.programiz.com/python-programming/property)
###Code
import random
class Pes:
    krmivo = ['granule', 'maso', 'gauc'] # public class data member
    def __init__(self, jmeno, majitele, zvuk_stekani):
        self._jmeno = jmeno # protected instance data member
        self.majitele = majitele # public instance data member
        self.__zvuk_stekani = zvuk_stekani # private instance data member
        self._vek = 0
        self._prikaz = None
    def stekej(self): # public instance method
        return self.__zvuk_stekani
    def __zmen_zvuk(self, novy_zvuk): # private instance method
        self.__zvuk_stekani = novy_zvuk
    # jmeno is read-write - we can change the name, and the dog occasionally reveals it or just barks :)
    @property # public property definition (accessor)
    def jmeno(self):
        return self._jmeno if random.random() > 0.5 else "Haf???"
    @jmeno.setter # public property definition (mutator)
    def jmeno(self, value):
        if value != "Jonatan": # the dog refuses to be called Jonatan :))
            self._jmeno = value
    # vek is read-only - the public cannot set the age, the dog has to age on its own
    @property
    def vek(self):
        self._vek += 1
        if self._vek >= 5:
            self.__zmen_zvuk("GRAAAAWWWWW HAAAAAF VRRRRR")
        return self._vek
    # prikaz is write-only (the dog cannot tell us what we ordered it to do)
    @property
    def prikaz(self):
        raise AttributeError('unreadable attribute')
    @prikaz.setter
    def prikaz(self, value):
        self._prikaz = value
    @classmethod
    def pridej_krmivo(cls, krmivo): # public class method
        cls.krmivo.append(krmivo)
    @staticmethod
    def jak_dela_pes(): # public static method
        return "haf haf"
azor = Pes(jmeno="Azor", majitele=None, zvuk_stekani="haf haf mnau?")
print(azor.stekej()) # calling a public instance method
print(azor.__zmen_zvuk("haf")) # calling a private instance method (raises AttributeError thanks to name mangling)
azor.jmeno = "Jonatan" # setting a new value through the mutator
print(azor.jmeno) # reading the value through the accessor
azor.jmeno = "Rex"
print(azor.jmeno)
print(azor._vek) # reading a protected instance variable (unfortunately Python allows it)
azor.vek = 20 # setting a read-only property (raises AttributeError)
print(azor.vek) # reading the read-only property, which internally updates the protected variable
print(azor.stekej()) # calling a public instance method
azor.prikaz = "k noze" # setting a write-only property
print(azor.prikaz) # reading a write-only property (raises AttributeError)
###Output
_____no_output_____
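###Markdown
The "private" access in Python is implemented by name mangling rather than real enforcement: a data member `__x` of class `Pes` is stored under the name `_Pes__x`. The short sketch below shows what that means in practice; it deliberately bypasses the encapsulation and is included only to explain the mechanism.
###Code
bety = Pes(jmeno="Bety", majitele=["Karel"], zvuk_stekani="haf")
# direct access fails, because inside the class the attribute name was mangled
try:
    print(bety.__zvuk_stekani)
except AttributeError as e:
    print("AttributeError:", e)
# the mangled name is still reachable - privacy in Python is a convention, not a guarantee
print(bety._Pes__zvuk_stekani)
###Output
_____no_output_____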
###Markdown
1.5 Magic methods An interesting feature of Python is that even arithmetic and logical operations are methods in their own right. And since they are methods, their behaviour can be overridden (so-called operator overloading). These methods, which in other languages are part of the language syntax, are called magic or dunder methods in Python. Strictly speaking, the name refers to any method whose name carries a prefix and a suffix of two underscores, which is exactly how these operations are spelled. Link for self-study: [Dunder methods](https://www.section.io/engineering-education/dunder-methods-python/) Link for self-study: [List of dunder methods](https://docs.python.org/3/reference/datamodel.html#special-method-names)
###Code
import random
class Pes:
krmivo = ['granule', 'maso', 'gauc']
def __init__(self, jmeno, majitele, zvuk_stekani):
self._jmeno = jmeno
self.majitele = majitele
self.__zvuk_stekani = zvuk_stekani
self._vek = 0
self._prikaz = None
def stekej(self):
return self.__zvuk_stekani
def __zmen_zvuk(self, novy_zvuk):
self.__zvuk_stekani = novy_zvuk
@property
def jmeno(self):
return self._jmeno
@jmeno.setter
def jmeno(self, value):
self._jmeno = value
@property
def vek(self):
return self._vek
@property
def prikaz(self):
raise AttributeError('unreadable attribute')
@prikaz.setter
def prikaz(self, value):
self._prikaz = value
@classmethod
def pridej_krmivo(cls, krmivo):
cls.krmivo.append(krmivo)
@staticmethod
def jak_dela_pes():
return "haf haf"
    # adding two dogs produces a puppy (a new instance of the Pes class)
    def __add__(self, other):
        return Pes(self.jmeno + other.jmeno, self.majitele, "haf")
    # casting the dog to a string prints information about the dog
    def __str__(self):
        return "Jmeno: " + self.jmeno + "\nVek: " + str(self.vek)
    # comparing two dogs tells whether the dog on the left of the relational operator is older than the dog on the right
    def __gt__(self, other):
        return self.vek > other.vek
azor = Pes("Azor", ["Jana", "Michal"], "haf")
rita = Pes("Rita", ["Honza"], "hafiky")
stene = azor + rita
print(stene.jmeno)
print(str(azor))
print("Azor je starsi jak Rita: ", azor > rita)
###Output
Azor je starsi jak Rita: False
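###Markdown
Besides the operators overloaded above, two dunder methods worth knowing are `__repr__` (the unambiguous, developer-facing text of an object, used e.g. when the object is echoed in the console) and `__eq__` (the behaviour of the `==` operator). A minimal sketch, independent of the `Pes` class above; the `Bod` class is made up for illustration.
###Code
class Bod: # illustrative 2D point
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self): # used by repr() and by the interactive console
        return f"Bod({self.x}, {self.y})"
    def __eq__(self, other): # defines the == operator
        return isinstance(other, Bod) and self.x == other.x and self.y == other.y
print(repr(Bod(1, 2)))
print(Bod(1, 2) == Bod(1, 2))
###Output
_____no_output_____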
|
WebInformationExtraction/.ipynb_checkpoints/UnitTestingOnXPath-checkpoint.ipynb | ###Markdown
Libraries
###Code
#Import All Dependencies
# import cv2, os, bz2, json, csv, difflib, requests, socket, whois, urllib.request, urllib.parse, urllib.error, re, OpenSSL, ssl
import numpy as np
from datetime import datetime
from urllib.parse import urlparse
from urllib.request import Request, urlopen
# from selenium import webdriver
from matplotlib import pyplot as plt
from bs4 import BeautifulSoup
# from timeout import timeout
import requests
import urllib
import cv2
import re
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.firefox.options import Options
from PIL import Image
from io import BytesIO
import time
import os
import os.path
from os import path
import io
from difflib import SequenceMatcher
import contextlib
try:
from urllib.parse import urlencode
except ImportError:
from urllib import urlencode
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
import sys
###Output
_____no_output_____
###Markdown
TinyURL
###Code
# Taken from https://www.geeksforgeeks.org/python-url-shortener-using-tinyurl-api/
# Returns the shortened URL minus the fixed "http://tinyurl.com/" prefix (19 characters),
# so http://tinyurl.com/y5bffkh2 ---becomes---> y5bffkh2
def getTinyURL(URL):
    request_url = ('http://tinyurl.com/api-create.php?' + urlencode({'url': URL}))
    with contextlib.closing(urlopen(request_url)) as response:
        return response.read().decode('utf-8')[19:]
# Returns a BeautifulSoup object for the page at URL, or None if the request fails
def getHTML(URL):
    try:
        hdr = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.157 Safari/537.36'} # Use a browser-like user agent so the request is not rejected
        # req = Request(URL,headers=hdr)
        req = requests.get(URL, headers=hdr)
        page = req.text # Get URL HTML contents
        soup = BeautifulSoup(page, 'html.parser') # Convert to BeautifulSoup
        print("Built Soup")
        # prettyText = str(soup.prettify()) # Convert the HTML in its pretty-printed form
        return soup
    except Exception as e:
        # # if e.__class__.__name__ == "TimeoutError": raise TimeoutError("")
        return None
###Output
_____no_output_____
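###Markdown
A short usage sketch of the two helpers above (it needs internet access, and the example URL is arbitrary): `getTinyURL` relies on the fixed 19-character `http://tinyurl.com/` prefix of the API response, and `getHTML` returns `None` on any request failure, so the result should be checked before use.
###Code
example_url = "https://www.example.com/" # illustrative URL only
print(getTinyURL(example_url)) # only the short code after http://tinyurl.com/ is returned
soup = getHTML(example_url)
if soup is not None: # getHTML returns None when the request fails
    print(soup.title)
###Output
_____no_output_____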
###Markdown
XPath
###Code
#https://selenium-python.readthedocs.io/locating-elements.html#locating-by-xpath
#XPath can either id or name relative
def getMyXPath(currentTag): #Original XPath
returnXPath = ""
while(currentTag.parent!=None):
if "id" in (currentTag.attrs):
print("true")
returnXPath = currentTag.name + "[@id='" + currentTag.attrs['id'].strip() + "']/" + returnXPath #//form[@id='loginForm']
break
if "name" in (currentTag.attrs):
print("true")
returnXPath = currentTag.name + "[@name='" + currentTag.attrs['name'].strip() + "']/" + returnXPath #//form[@id='loginForm']
break
returnXPath = currentTag.name + "/" + returnXPath
currentTag = currentTag.parent
print(currentTag.attrs)
    returnXPath = returnXPath.replace("[document]/html","/") # When the loop reaches the [document] root, "[document]/html" ends up at the start of the path, so remove it
# if not returnXPath.startswith("//"): #the XPath should start with 2 forward slash
# returnXPath = "//" + returnXPath
if not returnXPath.startswith("/"): #the XPath should start with 1 forward slash: Update
returnXPath = "/" + returnXPath
if returnXPath.endswith("/"): #the XPath should not end with forward slash
returnXPath = returnXPath[:-1]
returnXPath = returnXPath.replace("meta/","").replace("table/tr", "table/tbody/tr") #Few changes to be made while performing XPath
return (returnXPath) #//div[@id='tab-panel-0-w3']/div/span/h2
import itertools
def getStackXPath(element): #XPath code from Stack Overflow
"""
Generate xpath of soup element
:param element: bs4 text or node
:return: xpath as string
"""
components = []
child = element if element.name else element.parent
for parent in child.parents:
"""
@type parent: bs4.element.Tag
"""
previous = itertools.islice(parent.children, 0, parent.contents.index(child))
xpath_tag = child.name
xpath_index = sum(1 for i in previous if i.name == xpath_tag) + 1
components.append(xpath_tag if xpath_index == 1 else '%s[%d]' % (xpath_tag, xpath_index))
child = parent
components.reverse()
return '/%s' % '/'.join(components)
def getMyStackXPath(element): #My XPath code modified with Stack Overflow's one
"""
Generate xpath of soup element
:param element: bs4 text or node
:return: xpath as string
"""
components = []
child = element if element.name else element.parent
for parent in child.parents:
"""
@type parent: bs4.element.Tag
"""
if "id" in (child.attrs):
print("true")
components.append(child.name + "[@id='" + child.attrs['id'].strip() + "']") #//form[@id='loginForm']
break
if "name" in (child.attrs):
print("true")
components.append(child.name + "[@name='" + child.attrs['name'].strip() + "']") #//form[@id='loginForm']
break
previous = itertools.islice(parent.children, 0, parent.contents.index(child))
xpath_tag = child.name
xpath_index = sum(1 for i in previous if i.name == xpath_tag) + 1
components.append(xpath_tag if xpath_index == 1 else '%s[%d]' % (xpath_tag, xpath_index))
print("xpath_tag:",xpath_tag)
print("xpath_tag.attrs:",child.attrs)
print("xpath_index:",xpath_index)
print("components:",components)
child = parent
components.reverse()
return '/%s' % '/'.join(components)
###Output
_____no_output_____
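###Markdown
To sanity-check the XPath generators above without hitting a live site, one option is to run them on an inline HTML snippet and resolve the produced path with lxml (not imported above, so this assumes lxml is available). The snippet and the `cena` id below are made up for illustration.
###Code
from lxml import etree
html_doc = "<html><body><div><p>first</p><p id='cena'>$42</p></div></body></html>"
soup_test = BeautifulSoup(html_doc, "html.parser")
target = soup_test.find("p", {"id": "cena"})
xpath = getStackXPath(target)
print(xpath) # expected something like /html/body/div/p[2]
# resolve the generated XPath back to the element with lxml and compare the text
tree = etree.HTML(html_doc)
print([el.text for el in tree.xpath(xpath)])
###Output
_____no_output_____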
###Markdown
Price
###Code
def getTheTagElementForPrice(gShopPriceUpdated, soup):
returnPriceElementTag = None
try:
dummyVar = soup(text=re.compile(gShopPriceUpdated))
# print(dummyVar)
for elem in dummyVar:
# print("elem.parent",elem.parent)
# print("elem.parent.name",elem.parent.name)
if returnPriceElementTag == None: #The first element is the return value unless we encounter a heading tag
returnPriceElementTag = elem.parent
if "h" in elem.parent.name: #Found the heading tag, so return this tag and break the loop
returnPriceElementTag = elem.parent
return returnPriceElementTag
if "span" in elem.parent.name: #Found the span tag, so return this tag and break the loop
returnPriceElementTag = elem.parent
return returnPriceElementTag
except Exception as e:
print("Error in getTheTagElementForPrice(gShopPriceUpdated, soup)")
return returnPriceElementTag
def findPriceElementTag(gShopPrice, soup): #gShopPrice = $379.00
    gShopPrice = gShopPrice.replace("now","").strip() # GShop sometimes gives prices with a "now" suffix, like "$0.00 now"
print(gShopPrice)
if "$" in gShopPrice:
gShopPriceUpdated = gShopPrice.replace("$", "\$") #gShopPriceUpdated = \$379.00; Required because $ is reserved keyword for regex
print(gShopPriceUpdated)
returnPriceElementTag = getTheTagElementForPrice(gShopPriceUpdated, soup)
if returnPriceElementTag != None and len(str(returnPriceElementTag))<400:
return returnPriceElementTag
gShopPriceUpdated = gShopPrice.replace("$","") #gShopPriceUpdated = 379.00
print(gShopPriceUpdated)
returnPriceElementTag = getTheTagElementForPrice(gShopPriceUpdated, soup)
if returnPriceElementTag != None and len(str(returnPriceElementTag))<400:
return returnPriceElementTag
gShopPriceUpdated = gShopPrice.replace("$", "\$").split(".")[0] #gShopPriceUpdated = \$379
print(gShopPriceUpdated)
returnPriceElementTag = getTheTagElementForPrice(gShopPriceUpdated, soup)
if returnPriceElementTag != None and len(str(returnPriceElementTag))<400:
return returnPriceElementTag
gShopPriceUpdated = gShopPrice.replace("$","").split(".")[0] #gShopPriceUpdated = 379
print(gShopPriceUpdated)
returnPriceElementTag = getTheTagElementForPrice(gShopPriceUpdated, soup)
if returnPriceElementTag != None and len(str(returnPriceElementTag))<400:
return returnPriceElementTag
return None
###Output
_____no_output_____
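###Markdown
A small offline check of `findPriceElementTag` and its fallback chain, using a made-up HTML snippet in which the price sits inside a `<span>`; combined with `getStackXPath` this mirrors what the notebook later does against live pages.
###Code
html_doc = "<html><body><div><span class='price'>$379.00</span><p>Some other text</p></div></body></html>"
soup_test = BeautifulSoup(html_doc, "html.parser")
tag = findPriceElementTag("$379.00 now", soup_test) # the " now" suffix is stripped first
print(tag)
if tag is not None:
    print(getStackXPath(tag))
###Output
_____no_output_____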
###Markdown
Testing
###Code
# Unit Testing for XPath
URL = "https://www.walmart.com/ip/Farberware-3-2-Quart-Digital-Oil-Less-Fryer-White/264698854?athcpid=264698854&athpgid=athenaHomepage&athcgid=null&athznid=BestInDeals&athieid=v1&athstid=CS020&athguid=466001f5-46cfa622-5eb821569a18a716&athancid=null&athena=true"
priceOfCurrentScreenshot = "39"
# soup = getHTML(URL)
# returnPriceElementTag = findPriceElementTag(priceOfCurrentScreenshot, soup)
# getMyStackXPath(returnPriceElementTag)
###Output
_____no_output_____ |
Old Vers/image_classification-01.ipynb | ###Markdown
cheatsheets- [What is One Hot Encoding? Why And When do you have to use it?](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f)- [Read Own Multiple Images from folder and Save as a Dataset for Training](https://stackoverflow.com/questions/49220111/read-own-multiple-images-from-folder-and-save-as-a-dataset-for-training)- [How to write into and read from a TFRecords file in TensorFlow](http://www.machinelearninguru.com/deep_learning/tensorflow/basics/tfrecord/tfrecord.html)- [Useful Blog: machine learning mindset](https://machinelearningmindset.com/blog/)- [Jupyer Markdown cheatsheets](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheetlinks) - [matplotlib color list](https://matplotlib.org/examples/color/named_colors.html)- [matplotlib text styles](https://matplotlib.org/2.0.2/users/text_props.html)- [PEP 8 -- Style Guide for Python Code](https://www.python.org/dev/peps/pep-0008/)- [Gaussian processes framework in python](https://github.com/SheffieldML/GPy)- [Regular Expressions](https://docs.python.org/3/library/re.html)- [Regular Expressions - Tutorial](https://docs.python.org/3/howto/regex.html)- [presentation file: P:\EnergySGP\5) Production\5.2 Ongoing projects\AM\PP177972 (JTC)- Data Smart Lift AM (Zhou SF)\3. Minutes of meeting\20180628]()
###Code
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
import cv2
import numpy as np
import os
from random import shuffle
import glob
from pathlib import Path
from importlib import reload
import img_classification_modules
reload(img_classification_modules)
from img_classification_modules import CommonModules
home_dir = str(Path.home())
original_imgs_dir = home_dir + "/Documents/Conda/00-Projects/image-classification/samples/birds-01/"
resized_iamges_dir = home_dir + "/Documents/Conda/00-Projects/image-classification/samples/birds-01-resized/"
# file_name_only = "drone.41-Z"
# file_name = original_imgs_dir + file_name_only + ".jpg"
# resized_file_name = original_imgs_dir + file_name_only + "-resized.jpg"
target_img_size = 128
specific_file_name = ""
file_mini_batch_size = 100
file_count = 2
# can be set to skip n number of files
file_counter = 0
all_left_files_processed = False
perform_baselines_check = False
# skip_file_count = None
file_mini_batch_size = min(file_mini_batch_size, file_count) if (file_count > -1) else file_mini_batch_size
resized_images = []
while (all_left_files_processed == False):
if (specific_file_name == ""):
print("looking for possibly {0:d} image files - processing in mini batches...".format(file_mini_batch_size))
else:
print("looking for {0:} image files".format(specific_file_name))
resized_images += CommonModules.read_and_resize_images(file_count=file_mini_batch_size, skip_file_count=file_counter, images_dir=original_imgs_dir,
specific_file_name=specific_file_name, target_img_size=target_img_size,
save_resized_iamges=True, resized_iamges_dir=resized_iamges_dir)
if(len(resized_images) > 0):
print("Processing {0:d} image files is completed.".format(len(resized_images)))
print("")
file_counter += file_mini_batch_size
if (file_count != -1):
if ((file_count - file_counter) // file_mini_batch_size < 1):
file_mini_batch_size = file_count - file_counter
all_left_files_processed = (file_counter >= file_count) or (specific_file_name != "")
if(len(resized_images) == 0):
print("No new image file was found.")
all_left_files_processed = True
#else:
# print(resized_images[0].shape)
len(resized_images)
###Output
_____no_output_____
###Markdown
List images and their labels
###Code
train_band = 0.6
validation_band = train_band + 0.2
# test_portion = 1.0 - train_portion - validation_portion
home_dir = str(Path.home())
shuffle_data = True # shuffle the addresses before saving
# cat_dog_train_path = 'Cat vs Dog/train/*.jpg'
original_imgs_dir = home_dir + "/Documents/Conda/00-Projects/image-classification/samples/birds-01/"
iamges_dirs = [home_dir + "/Documents/Conda/00-Projects/image-classification/samples/drones-01-resized/",
home_dir + "/Documents/Conda/00-Projects/image-classification/samples/birds-01-resized/",]
file_names = []
labels = []
for image_index, image_dir in enumerate(iamges_dirs):
# read addresses and labels from the 'train' folder
loaded_file_names = list(glob.glob(image_dir + "/*.jpg"))
file_names += loaded_file_names
    labels += [0 if "drone" in image_dir else 1 for file_name in loaded_file_names] # drone:0, bird:1 (label only the files loaded in this iteration)
print("{0:d} images loaded and labelled from '{1:s}' collection".format(len(loaded_file_names), image_dir.split("/")[-2]))
if (shuffle_data):
labelled_data = list(zip(file_names, labels))
shuffle(labelled_data)
file_names, labels = zip(*labelled_data)
file_names = list(file_names)
labels = list(labels)
train_files = file_names[:int(train_band*len(file_names))]
train_labels = labels[:int(train_band*len(labels))]
val_files = file_names[int(train_band*len(file_names)):int(validation_band*len(file_names))]
val_labels = labels[int(train_band*len(file_names)):int(validation_band*len(file_names))]
test_files = file_names[int(validation_band*len(file_names)):]
test_labels = labels[int(validation_band*len(labels)):]
print("{0:d}%: training - {1:d}%: validation - {2:d}%: test".format(int(train_band*100),
int((validation_band - train_band)*100),
int((1.0 - validation_band + 0.005)*100)) )
###Output
3450 images loaded and labelled from 'drones-01-resized' collection
3589 images loaded and labelled from 'birds-01-resized' collection
60%: training - 20%: validation - 20%: test
###Markdown
Create a TFRecords file Test cell for image resizing
###Code
drone_imgs_dir = "C:/Users/MIEuser/Documents/Conda/00-Projects/image-classification/samples/test/"
file_name_only = "drone.41-Z"
file_name = drone_imgs_dir + file_name_only + ".jpg"
resized_file_name = drone_imgs_dir + file_name_only + "-resized.jpg"
target_img_size = 128
img = cv2.imread(file_name)
img_dimensions = np.asarray(img.shape[:2])
target_img_dimensions = np.asarray([target_img_size, target_img_size])
img_resize_ratios = target_img_dimensions / img_dimensions
min_img_ratio = min(np.min(img_resize_ratios), 1.0)
new_img = np.zeros(shape=(target_img_size, target_img_size, 3))
# if(np.any(img_resize_ratios < 1.0)):
resized_img_dimensions = (img_dimensions * min_img_ratio + 0.5).astype(int)
resized_img = cv2.resize(img, dsize=(resized_img_dimensions[1], resized_img_dimensions[0]), interpolation=cv2.INTER_CUBIC)
width_gap = int((target_img_size - resized_img_dimensions[0]) // 2.0)
height_gap = int((target_img_size - resized_img_dimensions[1]) // 2.0)
new_img[width_gap:width_gap+resized_img_dimensions[0], height_gap:height_gap+resized_img_dimensions[1]] = resized_img
cv2.imwrite(resized_file_name, new_img)
# max_img_size = max(img_shape)
# max_img_size_index = np.where(img_shape == max_img_size)[0][0]
# img_resize_ratio = target_img_size / max_img_size
# if(img_resize_ratio < 1.0):
print(img.shape, resized_img_dimensions, img_resize_ratios)
# resized_img = cv2.resize(img, dsize=(128, 128), interpolation=cv2.INTER_CUBIC)
###Output
_____no_output_____
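###Markdown
The TFRecords file named in this section's title is never actually written in the cells above, so here is a minimal sketch of that step following the pattern of the linked TFRecord tutorial and the TF 1.x API used later in this notebook. The output file name `train.tfrecords` and the feature keys are illustrative choices; it assumes the `train_files`/`train_labels` lists built earlier.
###Code
import tensorflow as tf
def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))
def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))
writer = tf.python_io.TFRecordWriter("train.tfrecords") # illustrative output path
for file_name, label in zip(train_files, train_labels):
    img = cv2.imread(file_name) # the files were already resized to 128x128 above
    if img is None:
        continue
    example = tf.train.Example(features=tf.train.Features(feature={
        "label": _int64_feature(label),
        "image_raw": _bytes_feature(img.tobytes()),
    }))
    writer.write(example.SerializeToString())
writer.close()
###Output
_____no_output_____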
###Markdown
Image ClassificationIn this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the DataRun the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
###Output
All files found!
###Markdown
Explore the DataThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc.. Each batch contains the labels and images that are one of the following:* airplane* automobile* bird* cat* deer* dog* frog* horse* ship* truckUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for a image and label pair in the batch.Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 1
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
###Output
Stats of batch 1:
Samples: 10000
Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981}
First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6]
Example of Image 1:
Image - Min Value: 5 Max Value: 254
Image - Shape: (32, 32, 3)
Label - Label Id: 9 Name: truck
###Markdown
Implement Preprocess Functions NormalizeIn the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
###Code
def normalize(x):
"""
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
"""
# TODO: Implement Function
output_range_min = 0.0
output_range_max = 1.0
output_range_diff = output_range_max - output_range_min
image_data_min = 0.0
image_data_max = 255
image_data_range_diff = image_data_max - image_data_min
normalized_image_data = output_range_min + (x - image_data_min)*output_range_diff / image_data_range_diff
return normalized_image_data
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_normalize(normalize)
###Output
Tests Passed
###Markdown
One-hot encodeJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function.Hint: Don't reinvent the wheel.
###Code
# Hamid: This cell is my own test unit *** please ignore it
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(range(0, 2))
# print(lb.classes_)
print(lb.transform([1, 0]))
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(range(0, 10))
def one_hot_encode(x):
"""
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
"""
# TODO: Implement Function
return lb.transform(x)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_one_hot_encode(one_hot_encode)
###Output
Tests Passed
###Markdown
Randomize DataAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save itRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
###Output
_____no_output_____
###Markdown
Check PointThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
valid_features.shape
###Output
_____no_output_____
###Markdown
Build the networkFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.>**Note:** If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.>However, if you would like to get the most out of this course, try to solve all the problems _without_ using anything from the TF Layers packages. You **can** still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the `conv2d` class, [tf.layers.conv2d](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d), you would want to use the TF Neural Network version of `conv2d`, [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). Let's begin! InputThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions* Implement `neural_net_image_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `image_shape` with batch size set to `None`. * Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_label_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `n_classes` with batch size set to `None`. * Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_keep_prob_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).These names will be used at the end of the project to load your saved model.Note: `None` for shapes in TensorFlow allow for a dynamic size.
###Code
import tensorflow as tf
def neural_net_image_input(image_shape):
"""
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
"""
# TODO: Implement Function
x = tf.placeholder(dtype = tf.float32, shape = [None, image_shape[0], image_shape[1], image_shape[2]], name="x")
return x
def neural_net_label_input(n_classes):
"""
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
"""
# TODO: Implement Function
y = tf.placeholder(dtype = tf.float32, shape = [None, n_classes], name="y")
return y
def neural_net_keep_prob_input():
"""
Return a Tensor for keep probability
: return: Tensor for keep probability.
"""
# TODO: Implement Function
keep_prob = tf.placeholder(dtype = tf.float32, name="keep_prob")
return keep_prob
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
###Output
Image Input Tests Passed.
Label Input Tests Passed.
Keep Prob Tests Passed.
###Markdown
Convolution and Max Pooling LayerConvolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.* Apply a convolution to `x_tensor` using weight and `conv_strides`. * We recommend you use same padding, but you're welcome to use any padding.* Add bias* Add a nonlinear activation to the convolution.* Apply Max Pooling using `pool_ksize` and `pool_strides`. * We recommend you use same padding, but you're welcome to use any padding.**Note:** You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for **this** layer, but you can still use TensorFlow's [Neural Network](https://www.tensorflow.org/api_docs/python/tf/nn) package. You may still use the shortcut option for all the **other** layers.
###Code
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
"""
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
"""
# TODO: Implement Function
#conv_layer = tf.nn.conv2d(input, weight, strides, padding)
print("conv2d_maxpool... Start")
print("Cheking inputs dimensions... ")
print('conv_ksize: ', conv_ksize)
print('conv_num_outputs: ', conv_num_outputs)
#print(x_tensor)
input_depth = x_tensor.get_shape().as_list()[3]
# weight = tf.Variable(tf.truncated_normal([filter_size_height, filter_size_width, color_channels, k_output]))
# bias = tf.Variable(tf.zeros(k_output))
# [batch, height, width, channels]
"""
truncated_normal(
shape,
mean=0.0,
stddev=1.0,
dtype=tf.float32,
seed=None,
name=None
)
"""
weights = tf.Variable(tf.truncated_normal(shape=[conv_ksize[0], conv_ksize[1], input_depth, conv_num_outputs],
mean=0.0, stddev=0.05))
biases = tf.Variable(tf.zeros(conv_num_outputs))
conv_strides = (1, conv_strides[0], conv_strides[1], 1)
pool_ksize = (1, pool_ksize[0], pool_ksize[1], 1)
pool_strides = (1, pool_strides[0], pool_strides[1], 1)
print("Cheking strides dimensions... ")
print('conv_strides: ', conv_strides)
print('pool_ksize: ', pool_ksize)
print('pool_strides', pool_strides)
conv_layer = tf.nn.conv2d(x_tensor, weights, conv_strides, 'SAME')
conv_layer = tf.nn.bias_add(conv_layer, biases)
conv_layer = tf.nn.max_pool(conv_layer, ksize=pool_ksize, strides=pool_strides, padding='SAME')
conv_layer = tf.nn.relu(conv_layer)
#H1: conv_layer = tf.nn.max_pool(conv_layer, ksize=pool_ksize, strides=pool_strides, padding='SAME')
print("conv2d_maxpool... End")
return conv_layer
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
###Output
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (2, 2)
conv_num_outputs: 10
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Cheking strides dimensions...
conv_strides: (1, 4, 4, 1)
pool_ksize: (1, 2, 2, 1)
pool_strides (1, 2, 2, 1)
conv2d_maxpool... End
Tests Passed
###Markdown
Flatten LayerImplement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
###Code
def flatten(x_tensor):
"""
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
"""
# TODO: Implement Function
#print(x_tensor)
output_tensor = tf.contrib.layers.flatten(x_tensor)
#print(output_tensor)
return output_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
###Output
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py:1624: flatten (from tensorflow.python.layers.core) is deprecated and will be removed in a future version.
Instructions for updating:
Use keras.layers.flatten instead.
Tests Passed
###Markdown
Fully-Connected LayerImplement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
###Code
def fully_conn(x_tensor, num_outputs):
"""
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
#print(x_tensor)
#print(num_outputs)
"""
fully_connected(
inputs,
num_outputs,
activation_fn=tf.nn.relu,
normalizer_fn=None,
normalizer_params=None,
weights_initializer=initializers.xavier_initializer(),
weights_regularizer=None,
biases_initializer=tf.zeros_initializer(),
biases_regularizer=None,
reuse=None,
variables_collections=None,
outputs_collections=None,
trainable=True,
scope=None
)
"""
output_tensor = tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=tf.nn.relu)
#print(output_tensor)
return output_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
###Output
Tests Passed
###Markdown
Output LayerImplement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.**Note:** Activation, softmax, or cross entropy should **not** be applied to this.
###Code
def output(x_tensor, num_outputs):
"""
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
"""
# TODO: Implement Function
output_tensor = tf.contrib.layers.fully_connected(x_tensor, num_outputs, activation_fn=None)
#print(output_tensor)
return output_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
###Output
Tests Passed
###Markdown
Create Convolutional ModelImplement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:* Apply 1, 2, or 3 Convolution and Max Pool layers* Apply a Flatten Layer* Apply 1, 2, or 3 Fully Connected Layers* Apply an Output Layer* Return the output* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.
###Code
def conv_net(x, keep_prob):
"""
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
"""
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
#print(x)
#print(keep_prob)
    conv_ksize = (32, 32)   # convolution kernel (patch) dimensions
    conv_strides = (1, 1)
    pool_ksize = (3, 3)     # max-pooling window dimensions
    pool_strides = (1, 1)
#conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 32
conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize = (32, 32)   # convolution kernel (patch) dimensions
conv_num_outputs = 64
conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize = (32, 32)   # convolution kernel (patch) dimensions
conv_num_outputs = 64
conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
#conv_ksize = (10, 10) # output layers dimensions
#pool_ksize = (2, 2) # Filter kernel/patch dimensions
#conv_num_outputs = 24
#conv_layer = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
x_tensor = flatten(x)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
x_tensor = tf.layers.batch_normalization(x_tensor)
x_tensor = fully_conn(x_tensor, 512)
x_tensor = tf.layers.batch_normalization(x_tensor)
x_tensor = fully_conn(x_tensor, 256)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
x_tensor = fully_conn(x_tensor, 128)
#x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
output_tensor = output(x_tensor, 10)
# TODO: return output
return output_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
###Output
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 32
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 64
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 64
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 32
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 64
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
conv2d_maxpool... Start
Cheking inputs dimensions...
conv_ksize: (32, 32)
conv_num_outputs: 64
Cheking strides dimensions...
conv_strides: (1, 1, 1, 1)
pool_ksize: (1, 3, 3, 1)
pool_strides (1, 1, 1, 1)
conv2d_maxpool... End
Neural Network Built!
###Markdown
Train the Neural Network Single OptimizationImplement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:* `x` for image input* `y` for labels* `keep_prob` for keep probability for dropoutThis function will be called for each batch, so `tf.global_variables_initializer()` has already been called.Note: Nothing needs to be returned. This function is only optimizing the neural network.
###Code
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
"""
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
"""
# TODO: Implement Function
# batch_size.shape -> (128, 32, 32, 3)
# label_batch.shape -> (128, 10)
session.run(optimizer, feed_dict={x: feature_batch, y: label_batch, keep_prob: keep_probability})
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
###Output
Tests Passed
###Markdown
Show StatsImplement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
###Code
def print_stats(session, feature_batch, label_batch, cost, accuracy):
"""
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
"""
#print(cost)
#print(accuracy)
#correct_prediction = tf.equal(tf.argmax(valid_labels, 1), tf.argmax(label_batch, 1))
test_cost = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
valid_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
print('Test Cost: {}'.format(test_cost), ' --- Valid Accuracy: {}'.format(valid_accuracy))
#print('Test Accuracy: {}'.format(test_accuracy))
# TODO: Implement Function
###Output
_____no_output_____
###Markdown
HyperparametersTune the following parameters:* Set `epochs` to the number of iterations until the network stops learning or start overfitting* Set `batch_size` to the highest number that your machine has memory for. Most people set them to common sizes of memory: * 64 * 128 * 256 * ...* Set `keep_probability` to the probability of keeping a node using dropout
###Code
# TODO: Tune Parameters
epochs = 30
batch_size = 128
keep_probability = 0.8
###Output
_____no_output_____
###Markdown
Train on a Single CIFAR-10 BatchInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
print(batch_features.shape)
print(batch_labels.shape)
break
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
break
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# break
###Output
Checking the Training on a Single Batch...
(128, 32, 32, 3)
(128, 10)
###Markdown
Fully Train the ModelNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
###Output
Training...
Epoch 1, CIFAR-10 Batch 1: Test Cost: 2.2165138721466064 --- Valid Accuracy: 0.28600001335144043
Epoch 1, CIFAR-10 Batch 2: Test Cost: 1.8616816997528076 --- Valid Accuracy: 0.34779998660087585
Epoch 1, CIFAR-10 Batch 3: Test Cost: 1.7193629741668701 --- Valid Accuracy: 0.3353999853134155
Epoch 1, CIFAR-10 Batch 4: Test Cost: 1.6640316247940063 --- Valid Accuracy: 0.3594000041484833
Epoch 1, CIFAR-10 Batch 5: Test Cost: 1.8842222690582275 --- Valid Accuracy: 0.38420000672340393
Epoch 2, CIFAR-10 Batch 1: Test Cost: 1.956764817237854 --- Valid Accuracy: 0.3889999985694885
Epoch 2, CIFAR-10 Batch 2: Test Cost: 1.5670435428619385 --- Valid Accuracy: 0.39320001006126404
Epoch 2, CIFAR-10 Batch 3: Test Cost: 1.530177116394043 --- Valid Accuracy: 0.37619999051094055
Epoch 2, CIFAR-10 Batch 4: Test Cost: 1.4864485263824463 --- Valid Accuracy: 0.4113999903202057
Epoch 2, CIFAR-10 Batch 5: Test Cost: 1.7593681812286377 --- Valid Accuracy: 0.4262000024318695
Epoch 3, CIFAR-10 Batch 1: Test Cost: 1.7955882549285889 --- Valid Accuracy: 0.41519999504089355
Epoch 3, CIFAR-10 Batch 2: Test Cost: 1.4203517436981201 --- Valid Accuracy: 0.42239999771118164
Epoch 3, CIFAR-10 Batch 3: Test Cost: 1.3522961139678955 --- Valid Accuracy: 0.41339999437332153
Epoch 3, CIFAR-10 Batch 4: Test Cost: 1.498530626296997 --- Valid Accuracy: 0.4300000071525574
Epoch 3, CIFAR-10 Batch 5: Test Cost: 1.6516869068145752 --- Valid Accuracy: 0.42559999227523804
Epoch 4, CIFAR-10 Batch 1: Test Cost: 1.6637474298477173 --- Valid Accuracy: 0.42820000648498535
Epoch 4, CIFAR-10 Batch 2: Test Cost: 1.2549803256988525 --- Valid Accuracy: 0.43799999356269836
Epoch 4, CIFAR-10 Batch 3: Test Cost: 1.2984893321990967 --- Valid Accuracy: 0.44339999556541443
Epoch 4, CIFAR-10 Batch 4: Test Cost: 1.3824504613876343 --- Valid Accuracy: 0.44920000433921814
Epoch 4, CIFAR-10 Batch 5: Test Cost: 1.5057814121246338 --- Valid Accuracy: 0.45399999618530273
Epoch 5, CIFAR-10 Batch 1: Test Cost: 1.6331707239151 --- Valid Accuracy: 0.4537999927997589
Epoch 5, CIFAR-10 Batch 2: Test Cost: 1.2094770669937134 --- Valid Accuracy: 0.44859999418258667
Epoch 5, CIFAR-10 Batch 3: Test Cost: 1.1291310787200928 --- Valid Accuracy: 0.4514000117778778
Epoch 5, CIFAR-10 Batch 4: Test Cost: 1.3065555095672607 --- Valid Accuracy: 0.46000000834465027
Epoch 5, CIFAR-10 Batch 5: Test Cost: 1.502212643623352 --- Valid Accuracy: 0.4691999852657318
Epoch 6, CIFAR-10 Batch 1: Test Cost: 1.4640849828720093 --- Valid Accuracy: 0.4575999975204468
Epoch 6, CIFAR-10 Batch 2: Test Cost: 1.1442562341690063 --- Valid Accuracy: 0.4726000130176544
Epoch 6, CIFAR-10 Batch 3: Test Cost: 1.0873348712921143 --- Valid Accuracy: 0.47940000891685486
Epoch 6, CIFAR-10 Batch 4: Test Cost: 1.2587143182754517 --- Valid Accuracy: 0.47699999809265137
Epoch 6, CIFAR-10 Batch 5: Test Cost: 1.478074073791504 --- Valid Accuracy: 0.47620001435279846
Epoch 7, CIFAR-10 Batch 1: Test Cost: 1.4473625421524048 --- Valid Accuracy: 0.4729999899864197
Epoch 7, CIFAR-10 Batch 2: Test Cost: 1.102046012878418 --- Valid Accuracy: 0.4733999967575073
Epoch 7, CIFAR-10 Batch 3: Test Cost: 1.0184358358383179 --- Valid Accuracy: 0.47200000286102295
Epoch 7, CIFAR-10 Batch 4: Test Cost: 1.1068446636199951 --- Valid Accuracy: 0.48579999804496765
Epoch 7, CIFAR-10 Batch 5: Test Cost: 1.3231585025787354 --- Valid Accuracy: 0.4878000020980835
Epoch 8, CIFAR-10 Batch 1: Test Cost: 1.3286513090133667 --- Valid Accuracy: 0.4880000054836273
Epoch 8, CIFAR-10 Batch 2: Test Cost: 1.0215768814086914 --- Valid Accuracy: 0.4812000095844269
Epoch 8, CIFAR-10 Batch 3: Test Cost: 0.9663518667221069 --- Valid Accuracy: 0.4657999873161316
Epoch 8, CIFAR-10 Batch 4: Test Cost: 1.0539557933807373 --- Valid Accuracy: 0.48399999737739563
Epoch 8, CIFAR-10 Batch 5: Test Cost: 1.2556836605072021 --- Valid Accuracy: 0.490200012922287
Epoch 9, CIFAR-10 Batch 1: Test Cost: 1.2317280769348145 --- Valid Accuracy: 0.4986000061035156
Epoch 9, CIFAR-10 Batch 2: Test Cost: 0.9257829785346985 --- Valid Accuracy: 0.477400004863739
Epoch 9, CIFAR-10 Batch 3: Test Cost: 0.9336628913879395 --- Valid Accuracy: 0.48660001158714294
Epoch 9, CIFAR-10 Batch 4: Test Cost: 1.0064032077789307 --- Valid Accuracy: 0.487199991941452
Epoch 9, CIFAR-10 Batch 5: Test Cost: 1.2400888204574585 --- Valid Accuracy: 0.49540001153945923
Epoch 10, CIFAR-10 Batch 1: Test Cost: 1.249559760093689 --- Valid Accuracy: 0.4991999864578247
Epoch 10, CIFAR-10 Batch 2: Test Cost: 0.8837908506393433 --- Valid Accuracy: 0.4925999939441681
Epoch 10, CIFAR-10 Batch 3: Test Cost: 0.8211900591850281 --- Valid Accuracy: 0.483599990606308
Epoch 10, CIFAR-10 Batch 4: Test Cost: 1.005422592163086 --- Valid Accuracy: 0.48660001158714294
Epoch 10, CIFAR-10 Batch 5: Test Cost: 1.2322607040405273 --- Valid Accuracy: 0.4934000074863434
Epoch 11, CIFAR-10 Batch 1: Test Cost: 1.1470705270767212 --- Valid Accuracy: 0.5004000067710876
Epoch 11, CIFAR-10 Batch 2: Test Cost: 0.8548173904418945 --- Valid Accuracy: 0.4997999966144562
Epoch 11, CIFAR-10 Batch 3: Test Cost: 0.8310036659240723 --- Valid Accuracy: 0.4903999865055084
Epoch 11, CIFAR-10 Batch 4: Test Cost: 0.9178043603897095 --- Valid Accuracy: 0.4957999885082245
Epoch 11, CIFAR-10 Batch 5: Test Cost: 1.1266154050827026 --- Valid Accuracy: 0.501800000667572
Epoch 12, CIFAR-10 Batch 1: Test Cost: 1.0903719663619995 --- Valid Accuracy: 0.5099999904632568
Epoch 12, CIFAR-10 Batch 2: Test Cost: 0.7552958726882935 --- Valid Accuracy: 0.5070000290870667
Epoch 12, CIFAR-10 Batch 3: Test Cost: 0.7419159412384033 --- Valid Accuracy: 0.4941999912261963
Epoch 12, CIFAR-10 Batch 4: Test Cost: 0.8823358416557312 --- Valid Accuracy: 0.48739999532699585
Epoch 12, CIFAR-10 Batch 5: Test Cost: 1.1426656246185303 --- Valid Accuracy: 0.49160000681877136
Epoch 13, CIFAR-10 Batch 1: Test Cost: 0.9635688662528992 --- Valid Accuracy: 0.5077999830245972
Epoch 13, CIFAR-10 Batch 2: Test Cost: 0.741649329662323 --- Valid Accuracy: 0.5049999952316284
Epoch 13, CIFAR-10 Batch 3: Test Cost: 0.7153558135032654 --- Valid Accuracy: 0.4984000027179718
Epoch 13, CIFAR-10 Batch 4: Test Cost: 0.8152307271957397 --- Valid Accuracy: 0.49160000681877136
Epoch 13, CIFAR-10 Batch 5: Test Cost: 1.0612045526504517 --- Valid Accuracy: 0.504800021648407
Epoch 14, CIFAR-10 Batch 1: Test Cost: 1.0125898122787476 --- Valid Accuracy: 0.5116000175476074
Epoch 14, CIFAR-10 Batch 2: Test Cost: 0.6956135034561157 --- Valid Accuracy: 0.5040000081062317
Epoch 14, CIFAR-10 Batch 3: Test Cost: 0.6768361330032349 --- Valid Accuracy: 0.49559998512268066
Epoch 14, CIFAR-10 Batch 4: Test Cost: 0.7777456045150757 --- Valid Accuracy: 0.4997999966144562
Epoch 14, CIFAR-10 Batch 5: Test Cost: 1.0174720287322998 --- Valid Accuracy: 0.49639999866485596
Epoch 15, CIFAR-10 Batch 1: Test Cost: 0.9307224154472351 --- Valid Accuracy: 0.5121999979019165
Epoch 15, CIFAR-10 Batch 2: Test Cost: 0.6802648901939392 --- Valid Accuracy: 0.5031999945640564
Epoch 15, CIFAR-10 Batch 3: Test Cost: 0.6501916646957397 --- Valid Accuracy: 0.4934000074863434
Epoch 15, CIFAR-10 Batch 4: Test Cost: 0.7225077152252197 --- Valid Accuracy: 0.4984000027179718
Epoch 15, CIFAR-10 Batch 5: Test Cost: 0.9156202077865601 --- Valid Accuracy: 0.5117999911308289
Epoch 16, CIFAR-10 Batch 1: Test Cost: 0.9112226366996765 --- Valid Accuracy: 0.5171999931335449
Epoch 16, CIFAR-10 Batch 2: Test Cost: 0.6699400544166565 --- Valid Accuracy: 0.5001999735832214
Epoch 16, CIFAR-10 Batch 3: Test Cost: 0.601253867149353 --- Valid Accuracy: 0.4869999885559082
Epoch 16, CIFAR-10 Batch 4: Test Cost: 0.7535923719406128 --- Valid Accuracy: 0.5049999952316284
Epoch 16, CIFAR-10 Batch 5: Test Cost: 0.8951206207275391 --- Valid Accuracy: 0.5144000053405762
Epoch 17, CIFAR-10 Batch 1: Test Cost: 0.8201066851615906 --- Valid Accuracy: 0.5167999863624573
Epoch 17, CIFAR-10 Batch 2: Test Cost: 0.6084728240966797 --- Valid Accuracy: 0.5058000087738037
Epoch 17, CIFAR-10 Batch 3: Test Cost: 0.5292536020278931 --- Valid Accuracy: 0.5049999952316284
Epoch 17, CIFAR-10 Batch 4: Test Cost: 0.7060728073120117 --- Valid Accuracy: 0.5072000026702881
Epoch 17, CIFAR-10 Batch 5: Test Cost: 0.9015951156616211 --- Valid Accuracy: 0.5199999809265137
Epoch 18, CIFAR-10 Batch 1: Test Cost: 0.839641273021698 --- Valid Accuracy: 0.5145999789237976
Epoch 18, CIFAR-10 Batch 2: Test Cost: 0.5818802118301392 --- Valid Accuracy: 0.5202000141143799
Epoch 18, CIFAR-10 Batch 3: Test Cost: 0.5388607382774353 --- Valid Accuracy: 0.506600022315979
Epoch 18, CIFAR-10 Batch 4: Test Cost: 0.6864104866981506 --- Valid Accuracy: 0.5127999782562256
Epoch 18, CIFAR-10 Batch 5: Test Cost: 0.7870049476623535 --- Valid Accuracy: 0.5145999789237976
Epoch 19, CIFAR-10 Batch 1: Test Cost: 0.7524703741073608 --- Valid Accuracy: 0.5224000215530396
Epoch 19, CIFAR-10 Batch 2: Test Cost: 0.536300539970398 --- Valid Accuracy: 0.5113999843597412
Epoch 19, CIFAR-10 Batch 3: Test Cost: 0.5442339777946472 --- Valid Accuracy: 0.5080000162124634
Epoch 19, CIFAR-10 Batch 4: Test Cost: 0.6498348712921143 --- Valid Accuracy: 0.5120000243186951
Epoch 19, CIFAR-10 Batch 5: Test Cost: 0.7879853844642639 --- Valid Accuracy: 0.5170000195503235
Epoch 20, CIFAR-10 Batch 1: Test Cost: 0.7425659894943237 --- Valid Accuracy: 0.5212000012397766
Epoch 20, CIFAR-10 Batch 2: Test Cost: 0.5071281790733337 --- Valid Accuracy: 0.5095999836921692
Epoch 20, CIFAR-10 Batch 3: Test Cost: 0.544109046459198 --- Valid Accuracy: 0.51419997215271
Epoch 20, CIFAR-10 Batch 4: Test Cost: 0.5719175338745117 --- Valid Accuracy: 0.5194000005722046
Epoch 20, CIFAR-10 Batch 5: Test Cost: 0.749904453754425 --- Valid Accuracy: 0.5095999836921692
Epoch 21, CIFAR-10 Batch 1: Test Cost: 0.7878631353378296 --- Valid Accuracy: 0.5264000296592712
Epoch 21, CIFAR-10 Batch 2: Test Cost: 0.48900946974754333 --- Valid Accuracy: 0.5070000290870667
Epoch 21, CIFAR-10 Batch 3: Test Cost: 0.45922842621803284 --- Valid Accuracy: 0.49900001287460327
Epoch 21, CIFAR-10 Batch 4: Test Cost: 0.5975083112716675 --- Valid Accuracy: 0.5210000276565552
Epoch 21, CIFAR-10 Batch 5: Test Cost: 0.6809048652648926 --- Valid Accuracy: 0.5105999708175659
Epoch 22, CIFAR-10 Batch 1: Test Cost: 0.7157321572303772 --- Valid Accuracy: 0.5194000005722046
Epoch 22, CIFAR-10 Batch 2: Test Cost: 0.44642218947410583 --- Valid Accuracy: 0.5117999911308289
Epoch 22, CIFAR-10 Batch 3: Test Cost: 0.4774302542209625 --- Valid Accuracy: 0.5076000094413757
Epoch 22, CIFAR-10 Batch 4: Test Cost: 0.5965604186058044 --- Valid Accuracy: 0.5181999802589417
Epoch 22, CIFAR-10 Batch 5: Test Cost: 0.6665050387382507 --- Valid Accuracy: 0.5121999979019165
Epoch 23, CIFAR-10 Batch 1: Test Cost: 0.7208171486854553 --- Valid Accuracy: 0.5221999883651733
Epoch 23, CIFAR-10 Batch 2: Test Cost: 0.4150008261203766 --- Valid Accuracy: 0.5148000121116638
Epoch 23, CIFAR-10 Batch 3: Test Cost: 0.4604068398475647 --- Valid Accuracy: 0.49959999322891235
Epoch 23, CIFAR-10 Batch 4: Test Cost: 0.5760369300842285 --- Valid Accuracy: 0.5239999890327454
Epoch 23, CIFAR-10 Batch 5: Test Cost: 0.6071385145187378 --- Valid Accuracy: 0.5117999911308289
Epoch 24, CIFAR-10 Batch 1: Test Cost: 0.6775175929069519 --- Valid Accuracy: 0.5175999999046326
Epoch 24, CIFAR-10 Batch 2: Test Cost: 0.46259862184524536 --- Valid Accuracy: 0.51419997215271
Epoch 24, CIFAR-10 Batch 3: Test Cost: 0.42526474595069885 --- Valid Accuracy: 0.5013999938964844
Epoch 24, CIFAR-10 Batch 4: Test Cost: 0.4953770041465759 --- Valid Accuracy: 0.5228000283241272
Epoch 24, CIFAR-10 Batch 5: Test Cost: 0.5737884640693665 --- Valid Accuracy: 0.5130000114440918
Epoch 25, CIFAR-10 Batch 1: Test Cost: 0.6120211482048035 --- Valid Accuracy: 0.52920001745224
Epoch 25, CIFAR-10 Batch 2: Test Cost: 0.42966288328170776 --- Valid Accuracy: 0.5138000249862671
Epoch 25, CIFAR-10 Batch 3: Test Cost: 0.4099965989589691 --- Valid Accuracy: 0.49320000410079956
Epoch 25, CIFAR-10 Batch 4: Test Cost: 0.4506341516971588 --- Valid Accuracy: 0.522599995136261
Epoch 25, CIFAR-10 Batch 5: Test Cost: 0.6047602295875549 --- Valid Accuracy: 0.5088000297546387
Epoch 26, CIFAR-10 Batch 1: Test Cost: 0.6051012277603149 --- Valid Accuracy: 0.5242000222206116
Epoch 26, CIFAR-10 Batch 2: Test Cost: 0.4585588574409485 --- Valid Accuracy: 0.5121999979019165
Epoch 26, CIFAR-10 Batch 3: Test Cost: 0.3566889464855194 --- Valid Accuracy: 0.5001999735832214
Epoch 26, CIFAR-10 Batch 4: Test Cost: 0.49742698669433594 --- Valid Accuracy: 0.5113999843597412
Epoch 26, CIFAR-10 Batch 5: Test Cost: 0.5887235403060913 --- Valid Accuracy: 0.5067999958992004
Epoch 27, CIFAR-10 Batch 1: Test Cost: 0.5677438378334045 --- Valid Accuracy: 0.5242000222206116
Epoch 27, CIFAR-10 Batch 2: Test Cost: 0.426910400390625 --- Valid Accuracy: 0.5212000012397766
Epoch 27, CIFAR-10 Batch 3: Test Cost: 0.3060685992240906 --- Valid Accuracy: 0.4991999864578247
Epoch 27, CIFAR-10 Batch 4: Test Cost: 0.4937344491481781 --- Valid Accuracy: 0.5184000134468079
Epoch 27, CIFAR-10 Batch 5: Test Cost: 0.5216708183288574 --- Valid Accuracy: 0.5171999931335449
Epoch 28, CIFAR-10 Batch 1: Test Cost: 0.6022601127624512 --- Valid Accuracy: 0.5180000066757202
Epoch 28, CIFAR-10 Batch 2: Test Cost: 0.4024377763271332 --- Valid Accuracy: 0.5139999985694885
Epoch 28, CIFAR-10 Batch 3: Test Cost: 0.31150808930397034 --- Valid Accuracy: 0.498199999332428
Epoch 28, CIFAR-10 Batch 4: Test Cost: 0.4821416437625885 --- Valid Accuracy: 0.5085999965667725
Epoch 28, CIFAR-10 Batch 5: Test Cost: 0.5040921568870544 --- Valid Accuracy: 0.5180000066757202
Epoch 29, CIFAR-10 Batch 1: Test Cost: 0.5232058763504028 --- Valid Accuracy: 0.5217999815940857
Epoch 29, CIFAR-10 Batch 2: Test Cost: 0.3803856670856476 --- Valid Accuracy: 0.5144000053405762
Epoch 29, CIFAR-10 Batch 3: Test Cost: 0.23715415596961975 --- Valid Accuracy: 0.5095999836921692
Epoch 29, CIFAR-10 Batch 4: Test Cost: 0.4873233735561371 --- Valid Accuracy: 0.5076000094413757
Epoch 29, CIFAR-10 Batch 5: Test Cost: 0.47108227014541626 --- Valid Accuracy: 0.5090000033378601
Epoch 30, CIFAR-10 Batch 1: Test Cost: 0.5126301050186157 --- Valid Accuracy: 0.5121999979019165
Epoch 30, CIFAR-10 Batch 2: Test Cost: 0.39546361565589905 --- Valid Accuracy: 0.5103999972343445
Epoch 30, CIFAR-10 Batch 3: Test Cost: 0.2461860179901123 --- Valid Accuracy: 0.5063999891281128
Epoch 30, CIFAR-10 Batch 4: Test Cost: 0.4596402049064636 --- Valid Accuracy: 0.5085999965667725
Epoch 30, CIFAR-10 Batch 5: Test Cost: 0.4026110768318176 --- Valid Accuracy: 0.5192000269889832
###Markdown
Checkpoint: the model has been saved to disk. Test Model: test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
"""
Test the saved model against the test dataset
"""
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
###Output
_____no_output_____ |
labwork/numpy_basics.ipynb | ###Markdown
Numpy Tutorial Creating Arrays
###Code
import numpy as np
array1=np.array([1,2,3,4,5,6])
print (array1)
print (array1[0:6])
print (array1[:6])
print (array1[3:6])
print (array1[:])
import numpy as np
array1=np.array([(1,2,3),(4.5,5,6),(7,8,9)], dtype=np.float32)
#print (array1)
print (array1[2,2]) # prints the element at row index 2, column index 2 (here 9.0)
print (array1[0:2]) # prints rows 0,1
print (array1[0:3:2]) # prints rows between 0-3 with stepSize 2
print (array1[0:3,2]) # prints column index 2 (the third column) for rows 0-2
print (array1[0:3,0:2]) # prints the sub-matrix made of rows 0-2 and columns 0-1
print (array1.shape)
import numpy as np
array1=np.array([[1,2,3],[4.5,5,6]], dtype=int)
print (array1)
import numpy as np
array1=np.array([[(1.5,2,3), (4,5,6)], [(3,2,1), (4,5,6)]],dtype=np.float32)
print (array1)
print (array1.shape)
#create array of Zeros
array3=np.zeros((3,4)) #creates array of 3-rows 4-columns
print (array3)
#create array of Ones
array3=np.ones((3,4),dtype=int) #creates array of 3-rows 4-columns filled with ones
print (array3)
arr5=np.arange(0,10) #creates array with elements 0-9
print (arr5)
arr5=np.arange(0,10,2) #creates array with elements between 0-9 with step size 2
print (arr5)
arr6=np.linspace(1,10,3) # creates array using elements between 1-10 with 3 evenly spaces samples
print (arr6)
e = np.full((2,2),7) # creates array of 2X2 and fills with constant 7
print (e)
f = np.eye(3) #creates 3x3 identity matrix
print (f)
arr7=np.random.random((2,3)) #creates 2x3 array with random elements
print (arr7)
arr8=np.empty((3,2))
print (arr8)
###Output
_____no_output_____
###Markdown
Loading and Storing Arrays to Disk
###Code
np.save('my_array',e) # stores array into disk with name my_array.npy
arr9=np.load('my_array.npy') #loads the stored array into variabe arr9
print (arr9 )
###Output
_____no_output_____
###Markdown
Inspecting Arrays
###Code
array1=np.array([[1,2,0],[4.5,0,6]], dtype=np.float32)
print (array1.shape) #prints row col
print (array1.ndim) #prints dimensions of the array
print (len(array1)) #prints length of the array
print (array1.size) #prints total number of elements in the array
print (array1.dtype) #prints datatype of elements in the array
print (array1.dtype.name) #
print (array1.astype(int)) #converts array elements to integer type
print (array1.max()) #prints max element
print (array1.max(axis=0)) #prints the maximum of each column (max taken along axis 0)
print (array1.max(axis=1)) #prints the maximum of each row (max taken along axis 1)
print (array1.min()) #prints min element
print (array1.sum()) # prints sum of all the elements in the array
print (array1.view()) #displays the array elements
###Output
_____no_output_____
###Markdown
Asking For Help
###Code
np.info(np.argmax)
###Output
_____no_output_____
###Markdown
Transposing Array
###Code
import numpy as np
array1=np.array([(1.5,2,3), (4,5,6), (3,2,1), (4,5,6)],dtype=np.float32)
print (array1)
tranposeArray=np.transpose(array1)
print (tranposeArray)
import numpy as np
array1=np.array([(1.5,2,3), (4,5,6), (3,2,1), (4,5,6)],dtype=np.float32)
print (array1)
print (array1.ravel()) #flatten the array
array1.reshape(4,3) #returns a reshaped 4x3 array; array1 itself is not modified in place
print (array1)
array2=np.array([(7,8,9),(10,11,12)], dtype=np.float32)
array3=np.array([(13,14,15,16)])
arr4=np.concatenate((array1,array2),axis=0) # concatenates array1 and array2
print (arr4)
arr5=np.vstack((array1,array2))
print (arr5)
import numpy as np
array1=np.array([(1.5,2,3), (4,5,6), (3,2,1)],dtype=np.float32)
print (array1)
array6=np.array([(4,5,6),(4,5,6), (3,2,1)])
print (array6)
array7=np.hstack((array1,array6))
print (array7)
import numpy as np
rnd_number=np.random.randn()
print (rnd_number)
import numpy as np
array1=np.array([(1,2,3),(4.5,5,6),(7,8,9)], dtype=np.float32)
print (array1)
b=np.ravel(array1) # Syntax: numpy.ravel(a, order) - returns a contiguous flattened array (a view when possible)
print (b)
b.fill(5) # because ravel returned a view here, filling b also modifies array1
print (array1)
c=array1.reshape(-1) # same as above
print (c)
d=array1.flatten() # 1-D array copy of the elements of an array in row-major order.
print (d)
###Output
_____no_output_____ |
K-means Clustering Algorithm.ipynb | ###Markdown
Importing library
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import random
###Output
_____no_output_____
###Markdown
Generating training set
###Code
training_set = []
for i in range(40):
training_set.append([2 * random.random() + 2, 2 * random.random() + 2])
for i in range(40):
training_set.append([2 * random.random() + 8, 2 * random.random() + 2])
for i in range(40):
training_set.append([2 * random.random() + 2, 2 * random.random() + 8])
for i in range(40):
training_set.append([2 * random.random() + 4, 2 * random.random() + 4])
for i in range(40):
training_set.append([3 * random.random() + 9, 3 * random.random() + 9])
for i in range(40):
training_set.append([10 * random.random(), 10 * random.random()])
training_set = np.array(training_set)
###Output
_____no_output_____
###Markdown
Functions
###Code
def kmeans(training_set, K, plot=False):
"""
Divides training set into clusters.
Arguments
training_set - 2D array of points with 2 columns
K - Number of clusters
plot - Boolean flag indicating whether to plot the clustering progress
"""
color = ['r', 'b', 'g', 'c', 'm', 'orange', 'crimson', 'pink', 'brown', 'yellow', 'gray', 'mediumseagreen']
if plot:
plt.scatter(training_set[:, 0], training_set[:, 1], c='k')
cluster_centroids = training_set[np.random.randint(0, training_set.shape[0], K)]
if plot:
for k in range(K):
plt.scatter(cluster_centroids[k][0], cluster_centroids[k][1], c=color[k], marker='X', s=100)
plt.show()
m = len(training_set)
n = len(training_set[0])
nearest_cluster_centroid = np.zeros(m, dtype='int')
old_cluster_centroids = np.zeros((K, 2), dtype='float')
cost = []
while (np.max(np.absolute(old_cluster_centroids - cluster_centroids)) > 0.001):
sum_cluster = np.zeros((K, n))
count_cluster = np.zeros(K, dtype='int')
distance_from_nearest_cluster_centroid = np.zeros(m, dtype='float')
for i, xi in enumerate(training_set):
# find the cluster with minimum distance from our training example.
nearest_cluster_centroid[i] = np.argmin(np.linalg.norm(cluster_centroids - np.tile(xi, (K, 1)), axis=1))
sum_cluster[nearest_cluster_centroid[i]] = sum_cluster[nearest_cluster_centroid[i]] + xi
count_cluster[nearest_cluster_centroid[i]] += 1
distance_from_nearest_cluster_centroid[i] = (cluster_centroids[nearest_cluster_centroid[i]] - xi)[0] ** 2 + (cluster_centroids[nearest_cluster_centroid[i]] - xi)[1] ** 2
if plot:
plt.scatter(xi[0], xi[1], c=color[nearest_cluster_centroid[i]], marker='o')
old_cluster_centroids = cluster_centroids.copy()
for k in range(K):
if count_cluster[k] != 0:
cluster_centroids[k] = sum_cluster[k] / count_cluster[k]
else:
cluster_centroids[k] = np.array([random.randint(int(min(training_set[:, 0])), int(max(training_set[:, 0]))),
random.randint(int(min(training_set[:, 1])), int(max(training_set[:, 1])))], dtype='float')
if plot:
plt.scatter(cluster_centroids[k][0], cluster_centroids[k][1], c=color[k], marker='X', s=100)
cost.append(np.sum(distance_from_nearest_cluster_centroid) / m)
if plot:
plt.show()
if plot:
plt.plot(cost)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.title('Cost vs Interations')
plt.show()
return (cluster_centroids, nearest_cluster_centroid, cost[-1])
def kmeans_multiply_tries(training_set, K, tries, plot=False):
"""
Runs K-means multiple times and returns the clustering with the lowest cost.
"""
color = ['r', 'b', 'g', 'c', 'm', 'orange', 'crimson', 'pink', 'brown', 'yellow', 'gray', 'mediumseagreen']
cost = None
for i in range(tries):
if cost is None:
cluster_centroids, nearest_cluster_centroid, cost = kmeans(training_set, K, plot=False)
else:
new_cluster_centroids, new_nearest_cluster_centroid, new_cost = kmeans(training_set, K, plot=False)
if new_cost < cost:
cluster_centroids, nearest_cluster_centroid, cost = new_cluster_centroids, new_nearest_cluster_centroid, new_cost
if plot:
for k in range(K):
plt.scatter(cluster_centroids[k][0], cluster_centroids[k][1], c=color[k], marker='X', s=100)
for i, xi in enumerate(training_set):
plt.scatter(xi[0], xi[1], c=color[nearest_cluster_centroid[i]], marker='o')
plt.show()
return (cluster_centroids, nearest_cluster_centroid, cost)
def plot_cost_vs_K(training_set, max_K, tries=10):
"""
Plots cost function wrt K.
"""
costs = []
for K in range(2, max_K):
costs.append(kmeans_multiply_tries(training_set, K, tries)[2])
plt.plot(np.arange(2, max_K), costs)
plt.xlabel('K')
plt.ylabel('Cost')
plt.title('Cost function vs K')
plt.show()
###Output
_____no_output_____
###Markdown
Calling functions
###Code
kmeans(training_set, 5, plot=True)
kmeans_multiply_tries(training_set, 11, tries=10, plot=True)
plot_cost_vs_K(training_set, 20, 10)
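# Added illustration (not part of the original notebook): one way to use the
# centroids returned by kmeans_multiply_tries to assign a brand-new point to a
# cluster. The point below is made up purely for demonstration.
centroids, assignments, final_cost = kmeans_multiply_tries(training_set, 5, tries=10)
new_point = np.array([3.0, 3.0])
nearest_cluster = np.argmin(np.linalg.norm(centroids - new_point, axis=1))
print('New point', new_point, 'is closest to centroid', nearest_cluster)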
###Output
_____no_output_____ |
cifar10_tutorial.ipynb | ###Markdown
Training a classifier=====================This is it. You have seen how to define neural networks, compute loss and makeupdates to the weights of the network.Now you might be thinking,What about data?----------------Generally, when you have to deal with image, text, audio or video data,you can use standard python packages that load data into a numpy array.Then you can convert this array into a ``torch.*Tensor``.- For images, packages such as Pillow, OpenCV are useful- For audio, packages such as scipy and librosa- For text, either raw Python or Cython based loading, or NLTK and SpaCy are usefulSpecifically for vision, we have created a package called``torchvision``, that has data loaders for common datasets such asImagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,``torchvision.datasets`` and ``torch.utils.data.DataLoader``.This provides a huge convenience and avoids writing boilerplate code.For this tutorial, we will use the CIFAR10 dataset.It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are ofsize 3x32x32, i.e. 3-channel color images of 32x32 pixels in size... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10Training an image classifier----------------------------We will do the following steps in order:1. Load and normalizing the CIFAR10 training and test datasets using ``torchvision``2. Define a Convolution Neural Network3. Define a loss function4. Train the network on the training data5. Test the network on the test data1. Loading and normalizing CIFAR10^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Using ``torchvision``, it’s extremely easy to load CIFAR10.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1] (Normalize with mean 0.5 and std 0.5 maps each channel as (x - 0.5) / 0.5, so 0 becomes -1 and 1 stays 1).
###Code
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Files already downloaded and verified
###Markdown
Let us show some of the training images, for fun.
###Code
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
frog bird plane cat
###Markdown
2. Define a Convolution Neural Network^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Copy the neural network from the Neural Networks section before and modify it totake 3-channel images (instead of 1-channel images as it was defined).
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
###Output
_____no_output_____
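A quick added note (not from the original tutorial) on why `fc1` expects `16 * 5 * 5` inputs: each unpadded 5x5 convolution shrinks the feature map by 4 pixels per dimension and each 2x2 max-pool halves it, so a 32x32 CIFAR-10 image ends up as 16 channels of 5x5 before flattening. A minimal sanity check:

```python
# Spatial-size bookkeeping for the Net above (assumes 32x32 inputs, no padding).
size = 32
size = (size - 5 + 1) // 2   # conv1 (5x5 kernel) then 2x2 max-pool -> 14
size = (size - 5 + 1) // 2   # conv2 (5x5 kernel) then 2x2 max-pool -> 5
print(size, 16 * size * size)  # 5 400, i.e. 16 * 5 * 5 flattened features
```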
###Markdown
3. Define a Loss function and optimizer^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Let's use a Classification Cross-Entropy loss and SGD with momentum.
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
4. Train the network^^^^^^^^^^^^^^^^^^^^This is when things start to get interesting.We simply have to loop over our data iterator, and feed the inputs to thenetwork and optimize.
###Code
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
###Output
[1, 2000] loss: 2.247
[1, 4000] loss: 1.887
[1, 6000] loss: 1.677
[1, 8000] loss: 1.576
[1, 10000] loss: 1.491
[1, 12000] loss: 1.454
[2, 2000] loss: 1.384
[2, 4000] loss: 1.348
[2, 6000] loss: 1.346
[2, 8000] loss: 1.323
[2, 10000] loss: 1.303
[2, 12000] loss: 1.284
Finished Training
###Markdown
5. Test the network on the test data^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^We have trained the network for 2 passes over the training dataset.But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural networkoutputs, and checking it against the ground-truth. If the prediction iscorrect, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar.
###Code
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
GroundTruth: cat ship ship plane
###Markdown
Okay, now let us see what the neural network thinks these examples above are:
###Code
outputs = net(images)
###Output
_____no_output_____
###Markdown
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
###Code
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
###Output
Predicted: cat car car ship
###Markdown
The results seem pretty good.Let us look at how the network performs on the whole dataset.
###Code
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
Accuracy of the network on the 10000 test images: 54 %
###Markdown
That looks waaay better than chance, which is 10% accuracy (randomly pickinga class out of 10 classes).Seems like the network learnt something.Hmmm, what are the classes that performed well, and the classes that didnot perform well:
###Code
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
###Output
Accuracy of plane : 72 %
Accuracy of car : 77 %
Accuracy of bird : 29 %
Accuracy of cat : 31 %
Accuracy of deer : 45 %
Accuracy of dog : 46 %
Accuracy of frog : 73 %
Accuracy of horse : 53 %
Accuracy of ship : 54 %
Accuracy of truck : 62 %
###Markdown
Okay, so what next?How do we run these neural networks on the GPU?Training on GPU----------------Just like how you transfer a Tensor on to the GPU, you transfer the neuralnet onto the GPU.Let's first define our device as the first visible cuda device if we haveCUDA available:
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
###Output
cpu
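This first run stops after reporting the device (here `cpu`). As a hedged sketch of the transfer step described later in this document (not code executed in this copy of the notebook), moving the model and a batch onto the chosen device looks roughly like this; it only changes anything when a CUDA device is actually available:

```python
# Sketch only: relies on `net`, `device`, and a batch from the cells above.
net.to(device)                                         # move parameters/buffers
inputs, labels = inputs.to(device), labels.to(device)  # move each mini-batch too
outputs = net(inputs)
```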
###Markdown
Training a Classifier=====================This is it. You have seen how to define neural networks, compute loss and makeupdates to the weights of the network.Now you might be thinking,What about data?----------------Generally, when you have to deal with image, text, audio or video data,you can use standard python packages that load data into a numpy array.Then you can convert this array into a ``torch.*Tensor``.- For images, packages such as Pillow, OpenCV are useful- For audio, packages such as scipy and librosa- For text, either raw Python or Cython based loading, or NLTK and SpaCy are usefulSpecifically for vision, we have created a package called``torchvision``, that has data loaders for common datasets such asImagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,``torchvision.datasets`` and ``torch.utils.data.DataLoader``.This provides a huge convenience and avoids writing boilerplate code.For this tutorial, we will use the CIFAR10 dataset.It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are ofsize 3x32x32, i.e. 3-channel color images of 32x32 pixels in size... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10Training an image classifier----------------------------We will do the following steps in order:1. Load and normalize the CIFAR10 training and test datasets using ``torchvision``2. Define a Convolutional Neural Network3. Define a loss function4. Train the network on the training data5. Test the network on the test data1. Load and normalize CIFAR10^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Using ``torchvision``, it’s extremely easy to load CIFAR10.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1]. Note: If running on Windows and you get a BrokenPipeError, try setting the num_worker of torch.utils.data.DataLoader() to 0.
###Code
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
batch_size = 4
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
_____no_output_____
###Markdown
Let us show some of the training images, for fun.
###Code
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
###Output
_____no_output_____
###Markdown
2. Define a Convolutional Neural Network^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Copy the neural network from the Neural Networks section before and modify it totake 3-channel images (instead of 1-channel images as it was defined).
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = torch.flatten(x, 1) # flatten all dimensions except batch
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
###Output
_____no_output_____
###Markdown
3. Define a Loss function and optimizer^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Let's use a Classification Cross-Entropy loss and SGD with momentum.
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
4. Train the network^^^^^^^^^^^^^^^^^^^^This is when things start to get interesting.We simply have to loop over our data iterator, and feed the inputs to thenetwork and optimize.
###Code
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs; data is a list of [inputs, labels]
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
###Output
_____no_output_____
###Markdown
Let's quickly save our trained model:
###Code
PATH = './cifar_net.pth'
torch.save(net.state_dict(), PATH)
###Output
_____no_output_____
###Markdown
See `here `_for more details on saving PyTorch models.5. Test the network on the test data^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^We have trained the network for 2 passes over the training dataset.But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural networkoutputs, and checking it against the ground-truth. If the prediction iscorrect, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar.
###Code
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Next, let's load back in our saved model (note: saving and re-loading the model wasn't necessary here, we only did it to illustrate how to do so):
###Code
net = Net()
net.load_state_dict(torch.load(PATH))
###Output
_____no_output_____
###Markdown
Okay, now let us see what the neural network thinks these examples above are:
###Code
outputs = net(images)
###Output
_____no_output_____
###Markdown
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
###Code
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
###Output
_____no_output_____
###Markdown
The results seem pretty good.Let us look at how the network performs on the whole dataset.
###Code
correct = 0
total = 0
# since we're not training, we don't need to calculate the gradients for our outputs
with torch.no_grad():
for data in testloader:
images, labels = data
# calculate outputs by running images through the network
outputs = net(images)
# the class with the highest energy is what we choose as prediction
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
_____no_output_____
###Markdown
That looks way better than chance, which is 10% accuracy (randomly pickinga class out of 10 classes).Seems like the network learnt something.Hmmm, what are the classes that performed well, and the classes that didnot perform well:
###Code
# prepare to count predictions for each class
correct_pred = {classname: 0 for classname in classes}
total_pred = {classname: 0 for classname in classes}
# again no gradients needed
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predictions = torch.max(outputs, 1)
# collect the correct predictions for each class
for label, prediction in zip(labels, predictions):
if label == prediction:
correct_pred[classes[label]] += 1
total_pred[classes[label]] += 1
# print accuracy for each class
for classname, correct_count in correct_pred.items():
accuracy = 100 * float(correct_count) / total_pred[classname]
print("Accuracy for class {:5s} is: {:.1f} %".format(classname,
accuracy))
###Output
_____no_output_____
###Markdown
Okay, so what next?How do we run these neural networks on the GPU?Training on GPU----------------Just like how you transfer a Tensor onto the GPU, you transfer the neuralnet onto the GPU.Let's first define our device as the first visible cuda device if we haveCUDA available:
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
###Output
_____no_output_____
###Markdown
Change the notebook runtime to GPU: select Runtime -> Change runtime type -> choose Python 3 and the GPU hardware accelerator. Then install pytorch and torchvision for Python 3 and CUDA 8.0.
###Code
!pip3 install https://download.pytorch.org/whl/cu80/torch-1.0.0-cp36-cp36m-linux_x86_64.whl
!pip3 install torchvision
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training a Classifier=====================This is it. You have seen how to define neural networks, compute loss and makeupdates to the weights of the network.Now you might be thinking,What about data?----------------Generally, when you have to deal with image, text, audio or video data,you can use standard python packages that load data into a numpy array.Then you can convert this array into a ``torch.*Tensor``.- For images, packages such as Pillow, OpenCV are useful- For audio, packages such as scipy and librosa- For text, either raw Python or Cython based loading, or NLTK and SpaCy are usefulSpecifically for vision, we have created a package called``torchvision``, that has data loaders for common datasets such asImagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,``torchvision.datasets`` and ``torch.utils.data.DataLoader``.This provides a huge convenience and avoids writing boilerplate code.For this tutorial, we will use the CIFAR10 dataset.It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are ofsize 3x32x32, i.e. 3-channel color images of 32x32 pixels in size... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10Training an image classifier----------------------------We will do the following steps in order:1. Load and normalizing the CIFAR10 training and test datasets using ``torchvision``2. Define a Convolutional Neural Network3. Define a loss function4. Train the network on the training data5. Test the network on the test data1. Loading and normalizing CIFAR10^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Using ``torchvision``, it’s extremely easy to load CIFAR10.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
###Code
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
Files already downloaded and verified
###Markdown
Let us show some of the training images, for fun.
###Code
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
2. Define a Convolutional Neural Network^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Copy the neural network from the Neural Networks section before and modify it totake 3-channel images (instead of 1-channel images as it was defined).
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
###Output
_____no_output_____
###Markdown
3. Define a Loss function and optimizer^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Let's use a Classification Cross-Entropy loss and SGD with momentum.
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
4. Train the network^^^^^^^^^^^^^^^^^^^^This is when things start to get interesting.We simply have to loop over our data iterator, and feed the inputs to thenetwork and optimize.
###Code
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
###Output
[1, 2000] loss: 2.138
[1, 4000] loss: 1.851
[1, 6000] loss: 1.670
[1, 8000] loss: 1.565
[1, 10000] loss: 1.498
[1, 12000] loss: 1.486
[2, 2000] loss: 1.401
[2, 4000] loss: 1.370
[2, 6000] loss: 1.367
[2, 8000] loss: 1.344
[2, 10000] loss: 1.317
[2, 12000] loss: 1.302
Finished Training
###Markdown
5. Test the network on the test data^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^We have trained the network for 2 passes over the training dataset.But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural networkoutputs, and checking it against the ground-truth. If the prediction iscorrect, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar.
###Code
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
_____no_output_____
###Markdown
Okay, now let us see what the neural network thinks these examples above are:
###Code
outputs = net(images)
###Output
_____no_output_____
###Markdown
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
###Code
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
###Output
Predicted: cat car car plane
###Markdown
The results seem pretty good.Let us look at how the network performs on the whole dataset.
###Code
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
Accuracy of the network on the 10000 test images: 52 %
###Markdown
That looks waaay better than chance, which is 10% accuracy (randomly pickinga class out of 10 classes).Seems like the network learnt something.Hmmm, what are the classes that performed well, and the classes that didnot perform well:
###Code
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
###Output
Accuracy of plane : 52 %
Accuracy of car : 62 %
Accuracy of bird : 60 %
Accuracy of cat : 31 %
Accuracy of deer : 32 %
Accuracy of dog : 45 %
Accuracy of frog : 70 %
Accuracy of horse : 47 %
Accuracy of ship : 60 %
Accuracy of truck : 66 %
###Markdown
Okay, so what next?How do we run these neural networks on the GPU?Training on GPU----------------Just like how you transfer a Tensor onto the GPU, you transfer the neuralnet onto the GPU.Let's first define our device as the first visible cuda device if we haveCUDA available:
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assuming that we are on a CUDA machine, this should print a CUDA device:
print(device)
###Output
cuda:0
###Markdown
The rest of this section assumes that ``device`` is a CUDA device.Then these methods will recursively go over all modules and convert theirparameters and buffers to CUDA tensors:.. code:: python net.to(device)Remember that you will have to send the inputs and targets at every stepto the GPU too:.. code:: python inputs, labels = inputs.to(device), labels.to(device)Why dont I notice MASSIVE speedup compared to CPU? Because your networkis realllly small.**Exercise:** Try increasing the width of your network (argument 2 ofthe first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` –they need to be the same number), see what kind of speedup you get.**Goals achieved**:- Understanding PyTorch's Tensor library and neural networks at a high level.- Train a small neural network to classify imagesTraining on multiple GPUs-------------------------If you want to see even more MASSIVE speedup using all of your GPUs,please check out :doc:`data_parallel_tutorial`.Where do I go next?-------------------- :doc:`Train neural nets to play video games `- `Train a state-of-the-art ResNet network on imagenet`_- `Train a face generator using Generative Adversarial Networks`_- `Train a word-level language model using Recurrent LSTM networks`_- `More examples`_- `More tutorials`_- `Discuss PyTorch on the Forums`_- `Chat with other users on Slack`_
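The code cell that follows was left empty in this copy. As a rough, hedged sketch of the loop the paragraph above describes (not code from the original tutorial), a GPU-enabled version of the earlier training loop moves the network once and every mini-batch at each step:

```python
# Sketch only: assumes `device`, `net`, `trainloader`, `criterion` and
# `optimizer` from the cells above.
net.to(device)
for epoch in range(2):
    for inputs, labels in trainloader:
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(net(inputs), labels)
        loss.backward()
        optimizer.step()
```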
###Code
###Output
_____no_output_____
###Markdown
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training a Classifier=====================This is it. You have seen how to define neural networks, compute loss and makeupdates to the weights of the network.Now you might be thinking,What about data?----------------Generally, when you have to deal with image, text, audio or video data,you can use standard python packages that load data into a numpy array.Then you can convert this array into a ``torch.*Tensor``.- For images, packages such as Pillow, OpenCV are useful- For audio, packages such as scipy and librosa- For text, either raw Python or Cython based loading, or NLTK and SpaCy are usefulSpecifically for vision, we have created a package called``torchvision``, that has data loaders for common datasets such asImagenet, CIFAR10, MNIST, etc. and data transformers for images, viz.,``torchvision.datasets`` and ``torch.utils.data.DataLoader``.This provides a huge convenience and avoids writing boilerplate code.For this tutorial, we will use the CIFAR10 dataset.It has the classes: ‘airplane’, ‘automobile’, ‘bird’, ‘cat’, ‘deer’,‘dog’, ‘frog’, ‘horse’, ‘ship’, ‘truck’. The images in CIFAR-10 are ofsize 3x32x32, i.e. 3-channel color images of 32x32 pixels in size... figure:: /_static/img/cifar10.png :alt: cifar10 cifar10Training an image classifier----------------------------We will do the following steps in order:1. Load and normalizing the CIFAR10 training and test datasets using ``torchvision``2. Define a Convolution Neural Network3. Define a loss function4. Train the network on the training data5. Test the network on the test data1. Loading and normalizing CIFAR10^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Using ``torchvision``, it’s extremely easy to load CIFAR10.
###Code
import torch
import torchvision
import torchvision.transforms as transforms
###Output
_____no_output_____
###Markdown
The outputs of torchvision datasets are PILImage images of range [0, 1]. We transform them to Tensors of normalized range [-1, 1].
###Code
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False,
download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
0%| | 0/170498071 [00:00<?, ?it/s]
###Markdown
Let us show some of the training images, for fun.
###Code
import matplotlib.pyplot as plt
import numpy as np
# functions to show an image
def imshow(img):
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
cat plane bird plane
###Markdown
2. Define a Convolution Neural Network^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Copy the neural network from the Neural Networks section before and modify it totake 3-channel images (instead of 1-channel images as it was defined).
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
###Output
_____no_output_____
###Markdown
3. Define a Loss function and optimizer^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Let's use a Classification Cross-Entropy loss and SGD with momentum.
###Code
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
###Output
_____no_output_____
###Markdown
4. Train the network^^^^^^^^^^^^^^^^^^^^This is when things start to get interesting.We simply have to loop over our data iterator, and feed the inputs to thenetwork and optimize.
###Code
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.item()
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
###Output
[1, 2000] loss: 2.168
[1, 4000] loss: 1.872
[1, 6000] loss: 1.682
[1, 8000] loss: 1.572
[1, 10000] loss: 1.490
[2, 2000] loss: 1.381
[2, 4000] loss: 1.365
[2, 6000] loss: 1.353
[2, 8000] loss: 1.310
[2, 10000] loss: 1.303
[2, 12000] loss: 1.255
Finished Training
###Markdown
5. Test the network on the test data^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^We have trained the network for 2 passes over the training dataset.But we need to check if the network has learnt anything at all.We will check this by predicting the class label that the neural networkoutputs, and checking it against the ground-truth. If the prediction iscorrect, we add the sample to the list of correct predictions.Okay, first step. Let us display an image from the test set to get familiar.
###Code
dataiter = iter(testloader)
images, labels = dataiter.next()
# print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in range(4)))
###Output
GroundTruth: cat ship ship plane
###Markdown
Okay, now let us see what the neural network thinks these examples above are:
###Code
outputs = net(images)
###Output
_____no_output_____
###Markdown
The outputs are energies for the 10 classes. The higher the energy for a class, the more the network thinks that the image is of the particular class. So, let's get the index of the highest energy:
###Code
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in range(4)))
###Output
Predicted: cat ship ship ship
###Markdown
The results seem pretty good.Let us look at how the network performs on the whole dataset.
###Code
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (
100 * correct / total))
###Output
Accuracy of the network on the 10000 test images: 56 %
###Markdown
That looks waaay better than chance, which is 10% accuracy (randomly pickinga class out of 10 classes).Seems like the network learnt something.Hmmm, what are the classes that performed well, and the classes that didnot perform well:
###Code
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
###Output
Accuracy of plane : 63 %
Accuracy of car : 59 %
Accuracy of bird : 41 %
Accuracy of cat : 36 %
Accuracy of deer : 44 %
Accuracy of dog : 45 %
Accuracy of frog : 68 %
Accuracy of horse : 68 %
Accuracy of ship : 73 %
Accuracy of truck : 62 %
###Markdown
Okay, so what next? How do we run these neural networks on the GPU? Training on GPU: just like you transfer a Tensor onto the GPU, you transfer the neural net onto the GPU. Let's first define our device as the first visible CUDA device if we have CUDA available:
###Code
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
###Output
cuda:0
###Markdown
The rest of this section assumes that `device` is a CUDA device. Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors: `net.to(device)`. Remember that you will have to send the inputs and targets at every step to the GPU too: `inputs, labels = inputs.to(device), labels.to(device)`. Why don't I notice a MASSIVE speedup compared to CPU? Because your network is really small. **Exercise:** Try increasing the width of your network (argument 2 of the first ``nn.Conv2d``, and argument 1 of the second ``nn.Conv2d`` – they need to be the same number), and see what kind of speedup you get. **Goals achieved**: understanding PyTorch's Tensor library and neural networks at a high level, and training a small neural network to classify images. Training on multiple GPUs: if you want to see even more MASSIVE speedup using all of your GPUs, please check out :doc:`data_parallel_tutorial`. Where do I go next? - :doc:`Train neural nets to play video games ` - `Train a state-of-the-art ResNet network on imagenet`_ - `Train a face generator using Generative Adversarial Networks`_ - `Train a word-level language model using Recurrent LSTM networks`_ - `More examples`_ - `More tutorials`_ - `Discuss PyTorch on the Forums`_ - `Chat with other users on Slack`_
###Code
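# A minimal sketch (an assumption of how the pieces fit, not run in this notebook) of
# GPU training, combining the snippets from the markdown cell above: move the network
# once, then move each batch inside the loop.
net.to(device)
for epoch in range(2):
    for i, data in enumerate(trainloader, 0):
        inputs, labels = data
        inputs, labels = inputs.to(device), labels.to(device)
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()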
###Output
_____no_output_____ |
workflow/feature_engineering/SOAP_QUIP/SOAP_features.ipynb | ###Markdown
Create SOAP features--- Import Modules
###Code
import os
print(os.getcwd())
import sys
import time; ti = time.time()
import pickle
import numpy as np
import pandas as pd
from quippy.descriptors import Descriptor
# #########################################################
from methods import (
get_df_features_targets,
get_df_jobs_data,
get_df_atoms_sorted_ind,
get_df_coord,
get_df_jobs,
)
###Output
_____no_output_____
###Markdown
Read Data
###Code
df_features_targets = get_df_features_targets()
df_jobs = get_df_jobs()
df_jobs_data = get_df_jobs_data()
df_atoms = get_df_atoms_sorted_ind()
# # TEMP
# print(222 * "TEMP")
# df_features_targets = df_features_targets.sample(n=100)
# # df_features_targets = df_features_targets.loc[
# # [
# # ('sherlock', 'tanewani_59', 53.0),
# # ('slac', 'diwarise_06', 33.0),
# # ('sherlock', 'bidoripi_03', 37.0),
# # ('nersc', 'winomuvi_99', 83.0),
# # ('nersc', 'legofufi_61', 90.0),
# # ('sherlock', 'werabosi_10', 42.0),
# # ('slac', 'sunuheka_77', 51.0),
# # ('nersc', 'winomuvi_99', 96.0),
# # ('slac', 'kuwurupu_88', 26.0),
# # ('sherlock', 'sodakiva_90', 52.0),
# # ]
# # ]
# df_features_targets = df_features_targets.loc[[
# ("sherlock", "momaposi_60", 50., )
# ]]
# ('oer_adsorbate', 'sherlock', 'momaposi_60', 'o', 50.0, 1)
###Output
_____no_output_____
###Markdown
Filtering down to systems that won't crash script
###Code
# #########################################################
rows_to_process = []
# #########################################################
for name_i, row_i in df_features_targets.iterrows():
# #####################################################
active_site_i = name_i[2]
# #####################################################
job_id_o_i = row_i[("data", "job_id_o", "")]
# #####################################################
# #####################################################
    row_jobs_o_i = df_jobs.loc[job_id_o_i]
# #####################################################
active_site_o_i = row_jobs_o_i.active_site
# #####################################################
# #####################################################
row_data_i = df_jobs_data.loc[job_id_o_i]
# #####################################################
att_num_i = row_data_i.att_num
# #####################################################
atoms_index_i = (
"oer_adsorbate",
name_i[0], name_i[1],
"o", active_site_o_i,
att_num_i,
)
if atoms_index_i in df_atoms.index:
rows_to_process.append(name_i)
# #########################################################
df_features_targets = df_features_targets.loc[rows_to_process]
# #########################################################
###Output
_____no_output_____
###Markdown
Main loop, running SOAP descriptors
###Code
# #########################################################
active_site_SOAP_list = []
metal_site_SOAP_list = []
ave_SOAP_list = []
# #########################################################
for name_i, row_i in df_features_targets.iterrows():
# #####################################################
active_site_i = name_i[2]
# #####################################################
job_id_o_i = row_i[("data", "job_id_o", "")]
job_id_oh_i = row_i[("data", "job_id_oh", "")]
job_id_bare_i = row_i[("data", "job_id_bare", "")]
# #####################################################
# #####################################################
    row_jobs_o_i = df_jobs.loc[job_id_o_i]
# #####################################################
active_site_o_i = row_jobs_o_i.active_site
# #####################################################
# #####################################################
row_data_i = df_jobs_data.loc[job_id_o_i]
# #####################################################
# atoms_i = row_data_i.final_atoms
att_num_i = row_data_i.att_num
# #####################################################
atoms_index_i = (
# "dos_bader",
"oer_adsorbate",
name_i[0],
name_i[1],
"o",
# name_i[2],
active_site_o_i,
att_num_i,
)
try:
# #####################################################
row_atoms_i = df_atoms.loc[atoms_index_i]
# #####################################################
atoms_i = row_atoms_i.atoms_sorted_good
# #####################################################
except:
print(name_i)
# print(
# "N_atoms: ",
# atoms_i.get_global_number_of_atoms(),
# sep="")
# Original
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.5 n_Z=1 Z={14} ")
# This one works
# desc = Descriptor("soap cutoff=4 l_max=10 n_max=10 normalize=T atom_sigma=0.5 n_Z=2 Z={8 77} ")
# THIS ONE IS GOOD ******************************
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=F atom_sigma=0.2 n_Z=2 Z={8 77} ")
# Didn't work great
# desc = Descriptor("soap cutoff=8 l_max=6 n_max=6 normalize=F atom_sigma=0.1 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=6 normalize=F atom_sigma=0.1 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=6 normalize=F atom_sigma=0.5 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=6 normalize=T atom_sigma=0.5 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.2 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.4 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.6 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.2 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.25 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=2 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=5 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.1 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.05 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.2 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.4 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=6 l_max=3 n_max=4 normalize=T atom_sigma=0.5 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.2 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.5 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.1 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=3 n_max=4 normalize=T atom_sigma=0.6 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=4 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=5 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=7 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=6 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=3 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# Optimizing the new SOAP_ave model
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.2 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.4 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=5 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc = Descriptor("soap cutoff=3 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
desc = Descriptor("soap cutoff=4 l_max=6 n_max=4 normalize=T atom_sigma=0.3 n_Z=2 Z={8 77} ")
# desc.sizes(atoms_i)
d = desc.calc(atoms_i)
SOAP_m_i = d["data"]
active_site_SOAP_vector_i = SOAP_m_i[int(active_site_i)]
active_site_SOAP_list.append(
(name_i, active_site_SOAP_vector_i)
)
# print(
# "data shape: ",
# d['data'].shape,
# sep="")
# #####################################################
# Get df_coord to find nearest neighbors
init_slab_name_tuple_i = (
name_i[0],
name_i[1],
"o",
# name_i[2],
active_site_o_i,
att_num_i
)
df_coord_i = get_df_coord(
mode="init-slab", # 'bulk', 'slab', 'post-dft', 'init-slab'
init_slab_name_tuple=init_slab_name_tuple_i,
verbose=False,
)
# #####################################################
row_coord_i = df_coord_i.loc[active_site_i]
# #####################################################
nn_info_i = row_coord_i.nn_info
# #####################################################
# assert len(nn_info_i) == 1, "Only one bound Ir"
ir_nn_present = False
for j_cnt, nn_j in enumerate(nn_info_i):
if nn_j["site"].specie.symbol == "Ir":
ir_nn_present = True
assert ir_nn_present, "Ir has to be in nn list"
# assert nn_info_i[j_cnt]["site"].specie.symbol == "Ir", "Has to be"
metal_index_i = nn_info_i[0]["site_index"]
metal_site_SOAP_vector_i = SOAP_m_i[int(metal_index_i)]
metal_site_SOAP_list.append(
(name_i, metal_site_SOAP_vector_i)
)
# #####################################################
# Averaging SOAP vectors for Ir and 6 oxygens
row_coord_Ir_i = df_coord_i.loc[metal_index_i]
vectors_to_average = []
for nn_j in row_coord_Ir_i["nn_info"]:
if nn_j["site"].specie.symbol == "O":
O_SOAP_vect_i = SOAP_m_i[int(nn_j["site_index"])]
vectors_to_average.append(O_SOAP_vect_i)
vectors_to_average.append(metal_site_SOAP_vector_i)
SOAP_vector_ave_i = np.mean(
vectors_to_average,
axis=0
)
ave_SOAP_list.append(
(name_i, SOAP_vector_ave_i)
)
# vectors_to_average = []
# for nn_j in row_coord_Ir_i["nn_info"]:
# if nn_j["site"].specie.symbol == "O":
# O_SOAP_vect_i = SOAP_m_i[int(nn_j["site_index"])]
# vectors_to_average.append(O_SOAP_vect_i)
# vectors_to_average.append(metal_site_SOAP_vector_i)
# SOAP_vector_ave_i = np.mean(
# vectors_to_average,
# axis=0
# )
# data = []
# for i in vectors_to_average:
# trace = go.Scatter(
# y=i,
# )
# data.append(trace)
# tmp = np.mean(
# vectors_to_average,
# axis=0
# )
# # import plotly.graph_objs as go
# trace = go.Scatter(
# # x=x_array,
# y=tmp,
# )
# # data = [trace]
# data.append(trace)
# fig = go.Figure(data=data)
# fig.show()
###Output
_____no_output_____
###Markdown
Forming the SOAP vector dataframe about the active site atom
###Code
data_dict_list = []
tmp_SOAP_vector_list = []
tmp_name_list = []
for name_i, SOAP_vect_i in active_site_SOAP_list:
# #####################################################
data_dict_i = dict()
# #####################################################
name_dict_i = dict(zip(
["compenv", "slab_id", "active_site", ],
name_i, ))
# #####################################################
tmp_SOAP_vector_list.append(SOAP_vect_i)
tmp_name_list.append(name_i)
# #####################################################
data_dict_i.update(name_dict_i)
# #####################################################
# data_dict_i[""] =
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
# #########################################################
SOAP_vector_matrix_AS = np.array(tmp_SOAP_vector_list)
df_SOAP_AS = pd.DataFrame(SOAP_vector_matrix_AS)
df_SOAP_AS.index = pd.MultiIndex.from_tuples(tmp_name_list, names=["compenv", "slab_id", "active_site"])
# #########################################################
df_SOAP_AS.head()
###Output
_____no_output_____
###Markdown
Forming the SOAP vector dataframe about the active Ir atom
###Code
data_dict_list = []
tmp_SOAP_vector_list = []
tmp_name_list = []
for name_i, SOAP_vect_i in metal_site_SOAP_list:
# #####################################################
data_dict_i = dict()
# #####################################################
name_dict_i = dict(zip(
["compenv", "slab_id", "active_site", ],
name_i, ))
# #####################################################
tmp_SOAP_vector_list.append(SOAP_vect_i)
tmp_name_list.append(name_i)
# #####################################################
data_dict_i.update(name_dict_i)
# #####################################################
# data_dict_i[""] =
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
# #########################################################
SOAP_vector_matrix_MS = np.array(tmp_SOAP_vector_list)
df_SOAP_MS = pd.DataFrame(SOAP_vector_matrix_MS)
df_SOAP_MS.index = pd.MultiIndex.from_tuples(tmp_name_list, names=["compenv", "slab_id", "active_site"])
# #########################################################
df_SOAP_MS.head()
###Output
_____no_output_____
###Markdown
Forming the SOAP vector dataframe averaged from Ir + 6 O
###Code
data_dict_list = []
tmp_SOAP_vector_list = []
tmp_name_list = []
for name_i, SOAP_vect_i in ave_SOAP_list:
# #####################################################
data_dict_i = dict()
# #####################################################
name_dict_i = dict(zip(
["compenv", "slab_id", "active_site", ],
name_i, ))
# #####################################################
tmp_SOAP_vector_list.append(SOAP_vect_i)
tmp_name_list.append(name_i)
# #####################################################
data_dict_i.update(name_dict_i)
# #####################################################
# data_dict_i[""] =
# #####################################################
data_dict_list.append(data_dict_i)
# #####################################################
# #########################################################
SOAP_vector_matrix_ave = np.array(tmp_SOAP_vector_list)
df_SOAP_ave = pd.DataFrame(SOAP_vector_matrix_ave)
df_SOAP_ave.index = pd.MultiIndex.from_tuples(tmp_name_list, names=["compenv", "slab_id", "active_site"])
# #########################################################
###Output
_____no_output_____
###Markdown
TEMP Plotting
###Code
import plotly.graph_objs as go
from plotting.my_plotly import my_plotly_plot
import plotly.express as px
y_array = SOAP_m_i[int(metal_index_i)]
trace = go.Scatter(
y=y_array,
)
data = [trace]
fig = go.Figure(data=data)
# fig.show()
fig = px.imshow(
df_SOAP_AS.to_numpy(),
aspect='auto', # 'equal', 'auto', or None
)
my_plotly_plot(
figure=fig,
plot_name="df_SOAP_AS",
write_html=True,
)
fig.show()
fig = px.imshow(
df_SOAP_MS.to_numpy(),
aspect='auto', # 'equal', 'auto', or None
)
my_plotly_plot(
figure=fig,
plot_name="df_SOAP_MS",
write_html=True,
)
fig.show()
fig = px.imshow(
df_SOAP_ave.to_numpy(),
aspect='auto', # 'equal', 'auto', or None
)
my_plotly_plot(
figure=fig,
plot_name="df_SOAP_MS",
write_html=True,
)
fig.show()
###Output
_____no_output_____
###Markdown
Save data to file
###Code
root_path_i = os.path.join(
os.environ["PROJ_irox_oer"],
"workflow/feature_engineering/SOAP_QUIP")
directory = os.path.join(root_path_i, "out_data")
if not os.path.exists(directory): os.makedirs(directory)
# Pickling data ###########################################
path_i = os.path.join(root_path_i, "out_data/df_SOAP_AS.pickle")
with open(path_i, "wb") as fle:
pickle.dump(df_SOAP_AS, fle)
# #########################################################
# Pickling data ###########################################
path_i = os.path.join(root_path_i, "out_data/df_SOAP_MS.pickle")
with open(path_i, "wb") as fle:
pickle.dump(df_SOAP_MS, fle)
# #########################################################
# Pickling data ###########################################
path_i = os.path.join(root_path_i, "out_data/df_SOAP_ave.pickle")
with open(path_i, "wb") as fle:
pickle.dump(df_SOAP_ave, fle)
# #########################################################
# # #########################################################
# import pickle; import os
# with open(path_i, "rb") as fle:
# df_SOAP_AS = pickle.load(fle)
# # #########################################################
from methods import get_df_SOAP_AS, get_df_SOAP_MS, get_df_SOAP_ave
df_SOAP_AS_tmp = get_df_SOAP_AS()
df_SOAP_AS_tmp
df_SOAP_MS_tmp = get_df_SOAP_MS()
df_SOAP_MS_tmp
df_SOAP_ave_tmp = get_df_SOAP_ave()
df_SOAP_ave_tmp
# #########################################################
print(20 * "# # ")
print("All done!")
print("Run time:", np.round((time.time() - ti) / 60, 3), "min")
print("SOAP_features.ipynb")
print(20 * "# # ")
# #########################################################
###Output
_____no_output_____
###Markdown
###Code
# SOAP_m_i.shape
# desc?
# assert False
# atoms_i
# atoms_i
# import
# import quippy
# quippy.descriptors.
# SOAP_m_i.shape
# dir(plt.matshow(d['data']))
# plt.matshow(d['data'])
# import os
# import numpy as np
# import matplotlib.pylab as plt
# from quippy.potential import Potential
# from ase import Atoms, units
# from ase.build import add_vacuum
# from ase.lattice.cubic import Diamond
# from ase.io import write
# from ase.constraints import FixAtoms
# from ase.md.velocitydistribution import MaxwellBoltzmannDistribution
# from ase.md.verlet import VelocityVerlet
# from ase.md.langevin import Langevin
# from ase.optimize.precon import PreconLBFGS, Exp
# # from gap_si_surface import ViewStructure
# pd.MultiIndex.from_tuples?
# assert False
# job_id_o_i
# dir(nn_info_i[0]["site"])
# 'properties',
# 'specie',
# 'species',
# 'species_and_occu',
# 'species_string',
# metal_index_i
# assert False
# from plotting.my_plotly import my_plotly_plot
###Output
_____no_output_____ |
Algorithmic Trading/1-Data_Sources.ipynb | ###Markdown
Import Modules *We need to import some important Python libraries and methods to process financial data and perform data analysis.* *The requests module enables you to easily download files from the web. It has a get method that takes the URL to download as a string.* *The JavaScript Object Notation (JSON) module enables you to convert a string of JSON data into a Python dictionary via the loads method.* *Pandas is a Python library that is built from the ground up for financial data analysis. It has a DataFrame object that makes it easy to analyze tabular data traditionally handled in spreadsheets.* *Matplotlib is a Python library used for visualizing data. Pandas provides a wrapper to the library so you can plot nice charts with a single line of code.* ---
###Code
import pandas as pd
import pandas_datareader.data as pdr
from datetime import datetime
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import json
import requests
start = datetime(2018, 1, 1)
end = datetime(2021, 8, 31)
###Output
_____no_output_____
###Markdown
Federal Reserve Economic Data (FRED) *FRED is the most comprehensive free repository for US economic time series data. It has more than half a million economic time series from 87 sources, including government agencies such as the U.S. Census and the Bureau of Labor Statistics. It covers banking, business/fiscal, consumer price indexes, employment and population, exchange rates, gross domestic product, interest rates, monetary aggregates, producer price indexes, reserves and monetary base, U.S. trade and international transactions, and U.S. financial data.* *See all the time series here: https://fred.stlouisfed.org/* ---
###Code
inflation = pdr.DataReader('T5YIE', 'fred', start, end)
inflation.plot(figsize=(20,5), title='5 year forward inflation expectation rate'), plt.show();
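# A quick sketch: the FRED reader also accepts a list of series IDs, e.g. pairing the
# 5-year inflation expectation with the 10-year Treasury constant maturity rate (DGS10).
rates = pdr.DataReader(['T5YIE', 'DGS10'], 'fred', start, end)
rates.plot(figsize=(20,5), title='5y inflation expectations vs 10y Treasury yield'), plt.show();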
###Output
_____no_output_____
###Markdown
Alpha Vantage *A repository of free APIs for up-to-the-minute streaming data and 20 years of historical data. APIs are grouped into four categories: 1. Equity 2. Currencies (including cryptocurrencies) 3. Sectors and 4. Technical indicators. Run by a tight-knit community of researchers, engineers, and business professionals. JSON is the default data format, with CSV format also supported.* *Data from this source requires extensive processing before it can be used in financial data analysis. The 'Processing Data' workbook focuses on this data source and the steps required to clean the data. Below are the final lines of code that you could use to get clean data for your analysis.* *You can find the API documentation here: https://www.alphavantage.co/documentation/* ---
###Code
response = requests.get("https://www.alphavantage.co/query?function=FX_DAILY&from_symbol=EUR&to_symbol=USD&apikey=demo")
alphadict = json.loads(response.text)
eur = pd.DataFrame(alphadict['Time Series FX (Daily)']).T
eur.index = pd.to_datetime(eur.index)
eur = eur.sort_index(ascending = True)
eur.columns = ['open', 'high', 'low', 'close']
eur = eur.astype(float)
eur.head()
eur['close'].plot(figsize=(20,5), title='EUR/USD closing prices'), plt.show();
###Output
_____no_output_____
###Markdown
Yahoo Finance *This is probably the oldest data source of free financial information. It has a vast repository of historical data that covers most traded securities worldwide.* *https://finance.yahoo.com* ---
###Code
!pip install yfinance
import yfinance as yf
msft = yf.Ticker('MSFT')
msft.history(start=start, end=end).head()
msft.info
msft.quarterly_cashflow
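# A quick sketch: yfinance can also bulk-download OHLCV history for several tickers at once.
prices = yf.download(['MSFT', 'AAPL'], start=start, end=end)
prices['Close'].plot(figsize=(20,5), title='MSFT and AAPL closing prices'), plt.show();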
###Output
_____no_output_____
###Markdown
Quandl *A one-stop shop for economic, financial and sentiment data, some of it offered for free and most of it for a fee. Quandl sources data from over half a million publishers worldwide. It was acquired by NASDAQ in 2018. It draws on freely available public sources like FRED as well as private sources of alternative data. Some data that is freely available elsewhere, such as historical equity data, is offered here for a fee.* *See the API documentation here: https://docs.quandl.com/* --- *You will get an error once 50 anonymous API calls have been made by the class. You need to get your own free API key.*
###Code
!pip install quandl
import quandl
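# If you hit the shared anonymous call limit, set your own free API key first
# (get one at quandl.com and substitute the placeholder below):
# quandl.ApiConfig.api_key = 'YOUR_API_KEY'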
investor_sentiment = quandl.get('AAII/AAII_SENTIMENT', start_date= start, end_date= end)
investor_sentiment['Bull-Bear Spread'].plot(figsize=(20,5), title='American Association of Individual Investor bull-bear spread sentiment'), plt.show();
investor_sentiment.tail()
consumer_sentiment = quandl.get('UMICH/SOC1', start_date= start, end_date= end)
consumer_sentiment.plot(figsize=(20,5), title='University of Michigan consumer sentiment index'), plt.show();
spx = quandl.get('MULTPL/SP500_PE_RATIO_MONTH', start_date = start, end_date = end)
spx.plot(figsize=(20,5), title='Trailing twelve months Price to Earning ratio of S&P 500 companies'), plt.show();
###Output
_____no_output_____
###Markdown
IEX Cloud *The Investors Exchange (IEX) was founded by Brad Katsuyama, hero of the book 'Flash Boys' by Michael Lewis. IEX recently launched IEX Cloud, a new platform that provides market and fundamental data both for free and for a fee. The default data format is JSON.* *For more information about the APIs, see: https://iexcloud.io/docs/api/introduction* ---
###Code
response = requests.get("https://sandbox.iexapis.com/stable/stock/aapl/financials?token=Tpk_53e30ef0593440d5855c259602cad185")
jdictionary = json.loads(response.text)
financials = pd.DataFrame(jdictionary['financials'])
financials
###Output
_____no_output_____ |
lesson1-pytorch.ipynb | ###Markdown
Using Convolutional Neural Networks Welcome to the first week of the first deep learning certificate! We're going to use convolutional neural networks (CNNs) to allow our computer to see - something that is only possible thanks to deep learning. Introduction to this week's task: 'Dogs vs Cats' We're going to try to create a model to enter the [Dogs vs Cats](https://www.kaggle.com/c/dogs-vs-cats) competition at Kaggle. There are 25,000 labelled dog and cat photos available for training, and 12,500 in the test set that we have to try to label for this competition. According to the Kaggle web-site, when this competition was launched (end of 2013): *"**State of the art**: The current literature suggests machine classifiers can score above 80% accuracy on this task"*. So if we can beat 80%, then we will be at the cutting edge as of 2013! Basic setup There isn't too much to do to get started - just a few simple configuration steps. This imports all dependencies and shows plots in the web page itself - we always want to use this when working in a Jupyter notebook:
###Code
import torch
import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torchvision.utils import make_grid
from PIL import Image
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
import torch.utils.trainer as trainer
import torch.utils.trainer.plugins
from torch.autograd import Variable
import numpy as np
import os
from torchsample.modules import ModuleTrainer
from torchsample.metrics import CategoricalAccuracy
%matplotlib inline
def show(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest')
###Output
_____no_output_____
###Markdown
Define path to data: it's a good idea to put it in a subdirectory of your notebooks folder, and then exclude that directory from git control by adding it to .gitignore. Additionally, set use_cuda = True to use a GPU for training and prediction.
###Code
data_path = "data/dogscats/"
# data_path = "data/dogscats/sample/"
use_cuda = True
batch_size = 64
print('Using CUDA:', use_cuda)
###Output
Using CUDA: False
###Markdown
Use a pretrained VGG model with torchvision's **vgg16** Our first step is simply to use a model that has been fully created for us, which can recognise a wide variety (1,000 categories) of images. We will use 'VGG', which won the 2014 Imagenet competition, and is a very simple model to create and understand. The VGG Imagenet team created both a larger, slower, slightly more accurate model (*VGG 19*) and a smaller, faster model (*VGG 16*). We will be using VGG 16 since the much slower performance of VGG 19 is generally not worth the very minor improvement in accuracy. Torchvision provides a pretrained *vgg16* model, which makes using the VGG 16 architecture very straightforward. The punchline: state of the art custom model in 7 lines of code. Here's everything you need to do to get >97% accuracy on the Dogs vs Cats dataset - we won't analyze how it works behind the scenes yet, since at this stage we're just going to focus on the minimum necessary to actually do useful work.
###Code
# TODO refactor the code below and put it in utils.py to simplify allow creating a custom model in 7 lines of code
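# A rough sketch of what those ~7 lines could look like once refactored, using the
# helpers defined later in this notebook (get_data_loader, getTrainer). Left commented
# out because those helpers are not defined yet at this point in the notebook:
# train_loader, _ = get_data_loader(traindir, batch_size=batch_size)
# val_loader, _ = get_data_loader(valdir, shuffle=False, batch_size=batch_size)
# model = models.vgg16(pretrained=True)
# ...freeze the convolutional layers and swap in a 2-class classifier head...
# trainer = getTrainer(lr=1e-3)
# trainer.fit_loader(train_loader, val_loader=val_loader, nb_epoch=3)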
###Output
_____no_output_____
###Markdown
Use the pretrained VGG 16 model for basic image recognition. Let's start off by using it to recognise the main Imagenet category for each image. We won't be able to enter the Cats vs Dogs competition with an Imagenet model alone, since 'cat' and 'dog' are not categories in Imagenet - instead each individual breed is a separate category. However, we can use it to see how well it can recognise the images, which is a good first step. First create a DataLoader which will read the images from disk, resize them, convert them into tensors and normalize them the same way the VGG 16 network was trained (using ImageNet's RGB mean and std).
###Code
# Data loading code
traindir = os.path.join(data_path, 'train')
valdir = os.path.join(data_path, 'valid')
# cd data/dogscats && mkdir -p test && mv test1 test/
testdir = os.path.join(data_path, 'test')
# pytorch way of implementing fastai's get_batches, (utils.py)
def get_data_loader(dirname, shuffle=True, batch_size = 64):
# pytorch's VGG requires images to be 224x224 and normalized using https://github.com/pytorch/vision#models
normalize = transforms.Compose([
transforms.Lambda(lambda img: img.resize((224, 224), Image.BILINEAR)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image_folder = datasets.ImageFolder(dirname, normalize)
return torch.utils.data.DataLoader(image_folder, batch_size=batch_size,
shuffle=shuffle, pin_memory=use_cuda), image_folder
train_loader, folder = get_data_loader(traindir, batch_size=batch_size)
val_loader, folder = get_data_loader(valdir, shuffle=False, batch_size=batch_size)
test_loader, testfolder = get_data_loader(testdir, shuffle=False, batch_size=batch_size)
print('Images in test folder:', len(testfolder.imgs))
###Output
Images in test folder: 6
###Markdown
Then, create a Vgg16 object:
###Code
# Load the model
model = models.vgg16(pretrained=True)
###Output
_____no_output_____
###Markdown
Then *finetune* the model such that it will be trained based on the data in the batches provided - in this case, to predict either 'dog' or 'cat'.
###Code
# Finetune by replacing the last fully connected layer and freezing all network parameters
for param in model.parameters():
param.requires_grad = False
# Replace the last fully-connected layer matching the new class count
classes = train_loader.dataset.classes
num_classes = len(classes)
print('Using {:d} classes: {}'.format(num_classes, classes))
model.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes),
)
# Monkey patch the parameters() to return trainable weights only
import types
def parameters(self):
p = filter(lambda p: p.requires_grad, nn.Module.parameters(self))
return p
model.parameters = types.MethodType(parameters, model)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss()
# enable cuda if available
if(use_cuda):
model.cuda()
criterion.cuda()
def getTrainer(lr):
    trainer = ModuleTrainer(model)
    trainer.set_optimizer(optim.Adam, lr=lr)
    trainer.set_loss(criterion)
    trainer.set_metrics([CategoricalAccuracy()])
    return trainer
###Output
_____no_output_____
###Markdown
Finally, we fit() the parameters of the model using the training data, reporting the accuracy on the validation set after every epoch. (An epoch is one full pass through the training data.)
###Code
trainer = getTrainer(lr=1e-3)
trainer.fit_loader(train_loader, val_loader=val_loader, nb_epoch=3)
# This gets a validation accuracy of 98.9 when using the whole dataset
###Output
Epoch 1/2: 4 batches [08:03, 114.11s/ batches, val_acc=50.31, val_loss=15.9782, loss=15.0138, acc=50.62]
Epoch 2/2: 4 batches [10:48, 161.16s/ batches, val_acc=61.56, val_loss=1.3137, loss=8.2031, acc=64.38]
###Markdown
That shows all of the steps involved in using a pretrained VGG 16 model to create an image recognition model using whatever labels you are interested in. For instance, this process could classify paintings by style, or leaves by type of disease, or satellite photos by type of crop, and so forth. Next up, we'll dig one level deeper to see what's going on inside the model. Visually validate the classifier
###Code
# Define some helper functions
def denorm(tensor):
# Undo the image normalization + clamp between 0 and 1 to avoid image artifacts
for t, m, s in zip(tensor, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]):
t.mul_(s).add_(m).clamp_(0, 1)
return tensor
def get_images_to_plot(images_tensor):
denormalize = transforms.Compose([
transforms.Lambda(denorm)
])
return denormalize(images_tensor)
def get_classes_strings(classes, labels_ids):
# returns the classes in string format
return [classes[label_id] for label_id in labels_ids]
def get_prediction_classes_ids(predictions):
# returns the predictions in id format
predictions_ids = predictions.cpu().data.numpy().argmax(1)
return predictions_ids
def get_prediction_classes_strings(classes, predictions):
# returns the predictions in string format
return get_classes_strings(classes, get_prediction_classes_ids(predictions))
# display a sample set of images and their labels
loader, folder = get_data_loader(valdir, batch_size = 4)
images, labels = next(iter(loader))
show(make_grid(get_images_to_plot(images), padding=100))
labels_string = get_classes_strings(classes, labels.numpy())
print(labels_string)
# display the predictons for the images above
if use_cuda:
images = images.cuda()
predictions = model(Variable(images))
predictions_string = get_prediction_classes_strings(classes, predictions)
print(predictions_string)
###Output
['cats', 'dogs', 'cats', 'dogs']
|
notebooks/game_ai/raw/ex2.ipynb | ###Markdown
IntroductionIn the tutorial, you learned how to define a simple heuristic that the agent used to select moves. In this exercise, you'll check your understanding and make the heuristic more complex.To get started, run the code cell below to set up our feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.game_ai.ex2 import *
###Output
_____no_output_____
###Markdown
1) A more complex heuristicThe heuristic from the tutorial looks at all groups of four adjacent grid locations on the same row, column, or diagonal and assigns points for each occurrence of the following patterns:In the image above, we assume that the agent is the red player, and the opponent plays yellow discs.For reference, here is the `get_heuristic()` function from the tutorial:```pythondef get_heuristic(grid, mark, config): num_threes = count_windows(grid, 3, mark, config) num_fours = count_windows(grid, 4, mark, config) num_threes_opp = count_windows(grid, 3, mark%2+1, config) score = num_threes - 1e2*num_threes_opp + 1e6*num_fours return score```In the `get_heuristic()` function, `num_fours`, `num_threes`, and `num_threes_opp` are the number of windows in the game grid that are assigned 1000000, 1, and -100 point(s), respectively. In this tutorial, you'll change the heuristic to the following (where you decide the number of points to apply in each of `A`, `B`, `C`, `D`, and `E`). You will define these values in the code cell below. To check your answer, we use your values to create a heuristic function as follows:```pythondef get_heuristic_q1(grid, col, mark, config): num_twos = count_windows(grid, 2, mark, config) num_threes = count_windows(grid, 3, mark, config) num_fours = count_windows(grid, 4, mark, config) num_twos_opp = count_windows(grid, 2, mark%2+1, config) num_threes_opp = count_windows(grid, 3, mark%2+1, config) score = A*num_fours + B*num_threes + C*num_twos + D*num_twos_opp + E*num_threes_opp return score```This heuristic is then used to create an agent, that competes against the agent from the tutorial in 50 different game rounds. In order to be marked correct, - your agent must win at least half of the games, and- `C` and `D` must both be nonzero.
###Code
# TODO: Assign your values here
A = ____
B = ____
C = ____
D = ____
E = ____
# Check your answer (this will take a few seconds to run!)
q_1.check()
#%%RM_IF(PROD)%%
A = 1
B = 1
C = 1
D = -1
E = -1
q_1.assert_check_failed()
#%%RM_IF(PROD)%%
A = 1e10
B = 1e4
C = 1e2
D = -1
E = -1e6
q_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
q_1.hint()
#_COMMENT_IF(PROD)_
q_1.solution()
###Output
_____no_output_____
###Markdown
2) Does the agent win? Consider the game board below. Say the agent uses red discs, and it's the agent's turn. - If the agent uses the heuristic **_from the tutorial_**, does it win or lose the game? - If the agent uses the heuristic **_that you just implemented_**, does it win or lose the game?
###Code
#_COMMENT_IF(PROD)_
q_2.hint()
# Check your answer (Run this code cell to receive credit!)
q_2.solution()
###Output
_____no_output_____
###Markdown
3) Submit to the competition. Now, it's time to submit an agent to the competition! Use the next code cell to define an agent. (You can see an example of how to write a valid agent in **[this notebook](https://www.kaggle.com/alexisbcook/create-a-connectx-agent)**.) You're encouraged to use what you learned in the first question of this exercise to write an agent. Use the code from the tutorial as a starting point.
###Code
def my_agent(obs, config):
    # Your code here: Amend the agent!
    import random  # imported inside the function so the submitted agent is self-contained
    valid_moves = [col for col in range(config.columns) if obs.board[col] == 0]
    return random.choice(valid_moves)
# Run this code cell to get credit for creating an agent
q_3.check()
###Output
_____no_output_____
###Markdown
Run the next code cell to convert your agent to a submission file.
###Code
import inspect
import os
def write_agent_to_file(function, file):
with open(file, "a" if os.path.exists(file) else "w") as f:
f.write(inspect.getsource(function))
print(function, "written to", file)
write_agent_to_file(my_agent, "submission.py")
###Output
###Output
_____no_output_____ |
DS_Sprint_Challenge_7_Classification_1.ipynb | ###Markdown
_Lambda School Data Science, Unit 2_ Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
###Output
_____no_output_____
###Markdown
This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model.
###Code
!pip install category_encoders
#imports
import numpy as np
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils.multiclass import unique_labels
from sklearn.metrics import (roc_auc_score, roc_curve,
classification_report,
confusion_matrix,
accuracy_score)
#2009-10 season through 2016-17 season to train,
#2017-18 season to validate,
#2018-19 season to test
import datetime as dt
#Train split
train_season_start_date = dt.datetime(2009,10,1)
train_season_end_date = dt.datetime(2017, 7, 1)
X_train = df[(df.index >= train_season_start_date) &
(df.index <= train_season_end_date)]
#validation split
val_season_start_date = dt.datetime(2017,10,1)
val_season_end_date = dt.datetime(2018,7,1)
X_val = df[(df.index >= val_season_start_date) &
(df.index <= val_season_end_date)]
#test split
test_season_start_date = dt.datetime(2018,10,1)
test_season_end_date = dt.datetime(2019,7,1)
X_test = df[(df.index >= test_season_start_date) &
(df.index <= test_season_end_date)]
#Train had 11081 observations
assert X_train.shape[0] == 11081
#validation set has 1168 observations
assert X_val.shape[0] == 1168
#test set has 1709 observations
assert X_test.shape[0] == 1709
#Get baseline accuracy for the validation set
target = 'shot_made_flag'
#assemble baseline using target data
y_train = X_train[target]
y_val = X_val[target]
majority_class = y_train.mode()[0]
y_pred_baseline = [majority_class] * len(y_val)
print(f'The baseline validation accuracy is {accuracy_score(y_val, y_pred_baseline)}')
def wrangle(df):
X = df.copy()
#drop columns that add no real value
no_val = ['game_id', 'game_event_id', 'player_name']
X = X.drop(columns= no_val)
#drop outliers
half_court_distance = 47
X.shot_distance = np.where(X.shot_distance >= half_court_distance,
np.nan, X.shot_distance)
X.shot_zone_range = np.where(X.shot_zone_range == 'Back Court Shot',
np.nan, X.shot_zone_range)
X = X.dropna()
return X
#Wrangle data
X_train = wrangle(X_train)
X_val = wrangle(X_val)
X_test = wrangle(X_test)
#Chosen features for encoding
cat_features = ['action_type', 'shot_type', 'shot_zone_area',
'shot_zone_basic', 'shot_zone_range', 'season_type']
#pass features on initialization
encoder = ce.OrdinalEncoder(cols = cat_features)
#encode data
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
#drop unused cols
unused = ['htm','vtm']
X_train_encoded = X_train_encoded.drop(columns = unused)
X_val_encoded = X_val_encoded.drop(columns = unused)
#pop target variables
y_train = X_train_encoded.pop('shot_made_flag')
y_val = X_val_encoded.pop('shot_made_flag')
#initialize and fit model
rfc= RandomForestClassifier(n_estimators = 500,
max_depth = 15,
min_samples_leaf = 2,
min_samples_split = 2,
bootstrap = True)
model = rfc.fit(X_train_encoded, y_train)
#get accuracy
y_test = model.predict(X_val_encoded)
print(f'The accuracy of this model is {accuracy_score(y_val, y_test)}')
%matplotlib inline
import matplotlib.pyplot as plt
#Clean up column names and create a series of feature importances
columns = X_train_encoded.columns.str.replace('_',' ').str.capitalize()
importances = pd.Series(model.feature_importances_, columns)
#Plot that figure
fig = plt.figure(figsize = (7,7))
plt.style.use('ggplot')
ax = importances.sort_values().plot.barh()
ax.set_title('Feature Importances')
plt.show();
###Output
_____no_output_____
###Markdown
Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
#Calculate accuracy, precision, and recall for this confusion matrix:
true_negative = 85
false_negative = 8
true_positive = 36
false_positive = 58
total = 85 + 8 + 58 +36
#accuracy is the sum of correct predictions divided by total predictions
accuracy = (true_positive + true_negative) / (total)
#precision: of the shots predicted as a class, how many were correct
pos_precision = true_positive / (true_positive + false_positive)
neg_precision = true_negative / (true_negative + false_negative)
#recall: of the shots actually in a class, how many were found
pos_recall = true_positive / (true_positive + false_negative)
neg_recall = true_negative / (true_negative + false_positive)
print(f'The accuracy of this model the table represents is {accuracy}')
print(f'The precision of the positive class is {pos_precision}')
print(f'The precision of the negative class is {neg_precision}')
print(f'The recall of the positive class is {pos_recall}')
print(f'The recall of the negative class is {neg_recall}')
#f1 score
f1_pos = 2 / ((1/pos_precision) + (1/pos_recall))
f1_neg = 2 / ((1/neg_precision) + (1/neg_recall))
print(f'The f1 score of the positive class is {f1_pos}')
print(f'The f1 score of the negative class is {f1_neg}')
#feature engineering
def fe(df):
X = df.copy()
#Homecourt Advantage
X['hca'] = np.where(X.htm == 'GSW', 1, 0)
#Determine Opponent
X['opp'] = np.where(X.htm != 'GSW', X.htm,
np.where(X.vtm != 'GSW', X.vtm, np.nan))
#Seconds remaining in period
X['srp'] = (X.minutes_remaining * 60) + X.seconds_remaining
    #Seconds remaining in game: remaining full periods (of 4, 12 minutes each) plus seconds left in this period
    X['srg'] = ((4 - X.period) * 12 * 60) + X['srp']
return X
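#Hedged sketch: the fifth stretch-goal feature, "made previous shot", is not built in fe() above.
#One possible approach (illustration only; it assumes a frame that still contains game_id and
#shot_made_flag and is sorted chronologically) is a grouped shift, so the flag never leaks
#across games:
def add_made_previous_shot(df):
    X = df.copy()
    X['made_prev'] = (X.groupby('game_id')['shot_made_flag']
                       .shift(1)
                       .fillna(0)
                       .astype(int))
    return X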
#add new features to split data
X_train = fe(X_train)
X_val = fe(X_val)
X_test = fe(X_test)
#Recycle previous code to rerun model
#Chosen features for encoding
cat_features = ['action_type', 'shot_type', 'shot_zone_area',
'shot_zone_basic', 'shot_zone_range', 'season_type',
'hca', 'opp']
#pass features on initialization
encoder = ce.OrdinalEncoder(cols = cat_features)
#encode data
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
#drop unused cols
unused = ['htm','vtm']
X_train_encoded = X_train_encoded.drop(columns = unused)
X_val_encoded = X_val_encoded.drop(columns = unused)
#pop target variables
y_train = X_train_encoded.pop('shot_made_flag')
y_val = X_val_encoded.pop('shot_made_flag')
#initialize and fit model
rfc= RandomForestClassifier(n_estimators = 500,
max_depth = 15,
min_samples_leaf = 2,
min_samples_split = 2,
bootstrap = True)
model = rfc.fit(X_train_encoded, y_train)
#get accuracy
y_pred = model.predict(X_val_encoded)
print(f'The accuracy of this model is {accuracy_score(y_val, y_pred)}')
#using test data now
X_test_encoded = encoder.transform(X_test)
X_test_encoded =X_test_encoded.drop(columns = unused)
y_test = X_test_encoded.pop('shot_made_flag')
y_pred = model.predict(X_test_encoded)
print(f'The accuracy using the test data is {accuracy_score(y_test, y_pred)}')
#Confusion matrix
import seaborn as sns
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
cm = confusion_matrix(y_true, y_pred)
cm = cm/cm.sum(axis=1).reshape(len(labels), 1)
table = pd.DataFrame(cm, columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='.2f', cmap='viridis')
plot_confusion_matrix(y_test, y_pred);
#classification report
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.67 0.59 0.63 896
1 0.59 0.68 0.63 796
accuracy 0.63 1692
macro avg 0.63 0.63 0.63 1692
weighted avg 0.64 0.63 0.63 1692
###Markdown
_Lambda School Data Science, Unit 2_ Classification 1 Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
###Output
_____no_output_____
###Markdown
This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model. Imports
###Code
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
###Output
_____no_output_____
###Markdown
Explore Data
###Code
df.head()
cols = df.columns.tolist()
cols
plt.scatter(x='loc_x', y='loc_y',data=df)
df.describe(include='number')
df.describe(include='object')
df.vtm.unique()
#del df['player_name']
###Output
_____no_output_____
###Markdown
Part 1 Train / Validate / Test Split
###Code
train_dates = (df.index > '2009-10-01') & (df.index <= '2017-07-01')
val_dates = (df.index > '2017-10-01') & (df.index <= '2018-07-01')
test_dates = (df.index > '2018-10-01') & (df.index <= '2019-07-01')
target = 'shot_made_flag'
X_train = df.loc[train_dates].drop(columns=[target])
X_val = df.loc[val_dates].drop(columns=[target])
X_test = df.loc[test_dates].drop(columns=[target])
y_train = df.loc[train_dates]['shot_made_flag']
y_val = df.loc[val_dates]['shot_made_flag']
y_test = df.loc[test_dates]['shot_made_flag']
X_train.shape, X_val.shape, X_test.shape, y_train.shape, y_val.shape, y_test.shape
###Output
_____no_output_____
###Markdown
Baselines
###Code
y_val.value_counts(normalize=True)
majority_class = y_val.mode()[0]
y_pred = [majority_class] * len(y_val)
from sklearn.metrics import accuracy_score
accuracy_score(y_val, y_pred)
###Output
_____no_output_____
###Markdown
One-Hot encoding, fit random forest on Train using make_pipeline
###Code
%%time
import category_encoders as ce
from sklearn.pipeline import make_pipeline
from sklearn.ensemble import RandomForestClassifier
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
pipeline.fit(X_train, y_train)
###Output
CPU times: user 4.38 s, sys: 325 ms, total: 4.7 s
Wall time: 1.94 s
###Markdown
Part 1 stretch goals
###Code
# Homecourt Advantage: Is the home team (htm) the Golden State Warriors (GSW) ?
# Opponent: Who is the other team playing the Golden State Warriors?
# Seconds remaining in the period: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.
# Seconds remaining in the game: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.
# Made previous shot: Was Steph Curry's previous shot successful?
#Homecourt
df['homecourt_adv'] = [1 if x == 'GSW' else 0 for x in df['htm']]
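#Hedged sketch of the remaining stretch-goal features (not wired into the pipeline above).
#Column names follow the dataset (htm, vtm, minutes_remaining, seconds_remaining, period),
#and the seconds-in-game formula assumes a regulation game of 4 twelve-minute periods.
df['opponent'] = np.where(df['htm'] == 'GSW', df['vtm'], df['htm'])
df['seconds_remaining_in_period'] = df['minutes_remaining'] * 60 + df['seconds_remaining']
df['seconds_remaining_in_game'] = (4 - df['period']) * 12 * 60 + df['seconds_remaining_in_period']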
###Output
_____no_output_____
###Markdown
Part 2 Validation accuracy
###Code
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
###Output
Validation Accuracy 0.5787671232876712
###Markdown
Test Accuracy
###Code
y_pred = pipeline.predict(X_test)
print('Test Accuracy', accuracy_score(y_test, y_pred))
###Output
Test Accuracy 0.6155646576945583
###Markdown
Random Forest Feature Importances
###Code
# Get feature importances
encoder = pipeline.named_steps['onehotencoder']
rf = pipeline.named_steps['randomforestclassifier']
feature_names = encoder.transform(X_train).columns
importances = pd.Series(rf.feature_importances_, feature_names)
# Plot feature importances
n = 15
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
###Output
_____no_output_____
###Markdown
Calculate accuracy, precision, and recall for the given confusion matrix
###Code
# acc = (true pos + true neg) / total
# precision = true pos / (true pos + false pos)
# recall = true pos / (true pos + false neg)
# f1 = 2 * (precision * recall) / (precision + recall)
accuracy = (36 + 85) / 187
precision = 36 / (36 + 58)
recall = 36 / (36 + 8)
f1 = 2 * ((precision*recall)/(precision+recall))
print(f'Accuracy: {accuracy}\nPrecision: {precision}\nRecall: {recall}')
###Output
Accuracy: 0.6470588235294118
Precision: 0.3829787234042553
Recall: 0.8181818181818182
###Markdown
Part 2 Stretch Goals F1
###Code
print(f'F1: {f1}')
###Output
F1: 0.5217391304347826
###Markdown
Confusion Matrix (on test, make sure you run that test accuracy cell prior to this cell)
###Code
from sklearn.metrics import confusion_matrix
labels = [0,1]
cm = confusion_matrix(y_test, y_pred, labels)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title('Confusion Matrix')
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
###Output
_____no_output_____
###Markdown
Classification Report
###Code
from sklearn.metrics import classification_report
target_names = ['Missed Shot','Made Shot']
y_val_pred = pipeline.predict(X_val)
print(classification_report(y_val, y_val_pred, target_names=target_names))
###Output
precision recall f1-score support
Missed Shot 0.58 0.65 0.61 603
Made Shot 0.57 0.51 0.54 565
micro avg 0.58 0.58 0.58 1168
macro avg 0.58 0.58 0.58 1168
weighted avg 0.58 0.58 0.58 1168
###Markdown
_Lambda School Data Science, Unit 2_ Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
###Output
_____no_output_____
###Markdown
This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model. Part 1 Required
###Code
df.head()
df.tail(5)
train = df.loc['2009-10':'2017-06'];
val = df.loc['2017-10':'2018-06'];
test = df.loc['2018-10':'2019-06'];
train.shape, val.shape, test.shape
assert (train.shape[0] + val.shape[0] + test.shape[0]) == df.shape[0]
feature1 = ['shot_zone_range', 'action_type', 'shot_zone_basic']
target = 'shot_made_flag';
x_train = train[feature1];
y_train = train[target];
x_val = val[feature1];
y_val = val[target];
x_test = test[feature1];
y_test = test[target];
import category_encoders as ce;
from sklearn.pipeline import make_pipeline;
from sklearn.metrics import accuracy_score;
from sklearn.impute import SimpleImputer;
from sklearn.linear_model import LogisticRegression;
from sklearn.linear_model import LinearRegression;
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(),
LogisticRegression()
);
encoder = pipeline.named_steps['onehotencoder']
x_train_encoded = encoder.fit_transform(x_train)
x_val_encoded = encoder.transform(x_val)
x_test_encoded = encoder.transform(x_test)
x_train_encoded.columns
pipeline.fit(x_train, y_train);
pipeline.score(x_val, y_val)
y_pred = pipeline.predict(x_test)
y_pred
y_pred.shape
model = pipeline.named_steps['logisticregression']
coefs = model.coef_[0]
intercept = model.intercept_
coefs, intercept
%matplotlib inline
import matplotlib.pyplot as plt;
coefs = pd.Series(model.coef_[0], x_test_encoded.columns)
coefs
plt.figure(figsize=(10,63/2))
plt.title(f'Top {63} features')
coefs.sort_values()[-63:].plot.barh(color='grey')
###Output
_____no_output_____
###Markdown
Stretch Goals
###Code
df['homecourt_advantage'] = df['htm'] == 'GSW'
df['homecourt_advantage'].head()
df['seconds_remaining_in_period'] = df['seconds_remaining'] + df['minutes_remaining'] * 60;
df['seconds_remaining_in_period'].head()
df['seconds_remaining_in_game'] = df['seconds_remaining_in_period'] + (4 - df['period']) * 12 * 60
df['seconds_remaining_in_game'].head()
import numpy as np;
shot_made = df['shot_made_flag'].tolist();
print(len(shot_made))
shot_made.pop(len(shot_made)-1)
shot_made.insert(0, 0)
print(len(shot_made))
df['made_previous_shot'] = shot_made;
df['made_previous_shot'].head()
###Output
_____no_output_____
###Markdown
Part 2 Required
###Code
print(f'Validation Accuracy: {pipeline.score(x_val, y_val)}');
print(f'Test Accuracy: {pipeline.score(x_test, y_test)}');
import numpy as np;
matrix = np.array([[85, 58], [8, 36]]);
matrix
accuracy = (85+36)/(85+58+8+36)
accuracy
correct_predictions = matrix[1][1];
correct_predictions
total_predictions = matrix[0][1] + matrix[1][1];
total_predictions
precision = correct_predictions/total_predictions;
precision
actual = matrix[1][0] + matrix[1][1];
actual
recall = correct_predictions/actual;
recall
###Output
_____no_output_____
###Markdown
Stretch Goals
###Code
from sklearn.metrics import f1_score, confusion_matrix, classification_report;
y_pred = pipeline.predict(x_test);
f1 = f1_score(y_test, y_pred)
print(f'F1 Score: {f1}')
import seaborn as sns;
confusion = confusion_matrix(y_test, y_pred);
sns.heatmap(confusion, cmap='viridis', annot=True, fmt='d')
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.66 0.61 0.63 912
1 0.59 0.64 0.61 797
micro avg 0.62 0.62 0.62 1709
macro avg 0.62 0.62 0.62 1709
weighted avg 0.62 0.62 0.62 1709
###Markdown
_Lambda School Data Science, Unit 2_ Classification 1 Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
!pip install category_encoders
%matplotlib inline
import category_encoders as ce
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import RobustScaler
pd.set_option('display.float_format', '{:.2f}'.format)
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
df.head() #first look at table content
df.describe()
df.dtypes
###Output
_____no_output_____
###Markdown
Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Train Test Split
###Code
df.columns.values #getting the namesof the columns
df.index = pd.to_datetime(df.index) #needed to be able to split the data frame
train = df['2009-10-8':'2017-06-30']
val = df['2017-07-01':'2018-06-30']
test = df['2018-07-01':'2019-06-05']
print('train', train.shape)
print('val', val.shape)
print('test', test.shape)
###Output
train (11081, 19)
val (1168, 19)
test (1709, 19)
###Markdown
Baseline for classification
###Code
y_train = train['shot_made_flag']
y_train.value_counts(normalize=True) #0 is the majority class
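#Hedged sketch: baseline accuracy on the validation set, guessing the majority class (0,
#per the value counts above) for every validation-set shot.
from sklearn.metrics import accuracy_score
y_val_baseline = val['shot_made_flag']
baseline_preds = [y_train.mode()[0]] * len(y_val_baseline)
print('Baseline validation accuracy:', accuracy_score(y_val_baseline, baseline_preds))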
###Output
_____no_output_____
###Markdown
One-hot encoding
###Code
train.describe(exclude='number').T.sort_values(by='unique') #checking which non numeric columns would be interesting
train['shot_zone_range'].value_counts(dropna=False) #shotzonerange chosen
encoder = ce.OrdinalEncoder() #shotzonerange info
encoded = encoder.fit_transform(train['shot_zone_range'])
encoded.head(10)
encoded['shot_zone_range'].dtype
train_encoded = encoder.fit_transform(train) #creating new dataframe for predictions
train_encoded.head()
###Output
_____no_output_____
###Markdown
Random Forest Model
###Code
train_encoded.columns.values
X_train = train_encoded[['game_id', 'game_event_id', 'player_name', 'period',
'minutes_remaining', 'seconds_remaining', 'action_type',
'shot_type', 'shot_zone_basic', 'shot_zone_area',
'shot_zone_range', 'shot_distance', 'loc_x', 'loc_y',
'htm', 'vtm', 'season_type',
'scoremargin_before_shot']]
y_train = train_encoded['shot_made_flag']
val.columns.values
X_val =val[['game_id', 'game_event_id', 'player_name', 'period',
'minutes_remaining', 'seconds_remaining', 'action_type',
'shot_type', 'shot_zone_basic', 'shot_zone_area',
'shot_zone_range', 'shot_distance', 'loc_x', 'loc_y',
'htm', 'vtm', 'season_type',
'scoremargin_before_shot']]
y_val = val['shot_made_flag']
%%time
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
SimpleImputer(strategy='median'),
RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=-1)
)
# Fit on train, score on val
a = pipeline.fit(X_train, y_train)
# print('Validation Accuracy', pipeline.score(X_val, y_val))
a
###Output
_____no_output_____
###Markdown
Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model. Confusion Matrix
###Code
Accuracy = (85 + 36) / (85 + 36 + 58 + 8)
Accuracy
precision = 85/(85+8)   #precision of the negative class: TN / (TN + FN)
precision
recall = 85/(85+58)     #recall of the negative class: TN / (TN + FP)
recall
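#Hedged sketch (stretch goal): F1 score combining the precision and recall computed above
#(so this is the negative-class F1, matching those values).
f1 = 2 * (precision * recall) / (precision + recall)
f1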
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
#I honestly felt this was easier than using train_test_split from sklearn given the criteria
train = df[df.index <= '2017-07']
val = df[(df.index>='2017-07') & (df.index<='2018-07')]
test = df[df.index >'2018-07']
#establish target and features
target = 'shot_made_flag'
features = ['shot_distance','minutes_remaining', 'action_type']
#count values in train to get majority class
train['shot_made_flag'].value_counts()
#Create x features and y target
X_train = train[features]
X_val = val[features]
X_test = test[features]
y_train = train[target]
y_val = val[target]
y_test = test[target]
len(y_val)
#establish baseline majority class prediction accuracy (0 is the majority class)
from sklearn.metrics import accuracy_score
y_pred = [0] * len(y_val)
print(f'Validation accuracy for majority classifier is {accuracy_score(y_val,y_pred)}')
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler
import category_encoders as ce
rfc = RandomForestClassifier(n_estimators = 100)
encoder = ce.OrdinalEncoder()
scaler = StandardScaler()
#Encode and scale
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
X_test_encoded = encoder.transform(X_test)
X_train_scaled = scaler.fit_transform(X_train_encoded)
X_val_scaled = scaler.transform(X_val_encoded)
X_test_scaled = scaler.transform(X_test_encoded)
#fit the model
rfc.fit(X_train_scaled, y_train)
print(f'Validation Accuracy Score: {rfc.score(X_val_scaled, y_val)}')
print(f'Test Accuracy Score: {rfc.score(X_test_scaled, y_test)}')
train.head()
train['hmtwn_adv'] = train['htm']=='GSW'
val['hmtwn_adv'] = val['htm']=='GSW'
test['hmtwn_adv'] = test['htm']=='GSW'
import matplotlib.pyplot as plt
#use the encoder's output columns as the index for the importance series
feature_names = encoder.transform(X_val).columns
importances = pd.Series(rfc.feature_importances_, feature_names)
n = rfc.n_features_
plt.figure(figsize = (10,n))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
###Output
_____no_output_____
###Markdown
This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model.
###Code
correct_negative = 85
predicted_negative = 93
actual_negative = 85+58
total_predictions = 85+8+58+36
correct_positive = 36
total_correct = correct_negative+correct_positive
negative_precision = correct_negative/predicted_negative
negative_recall = correct_negative/actual_negative
print(negative_precision)
print(negative_recall)
correct_positive = 36
predicted_positive = 58+36
actual_positive = 44
positive_precision = correct_positive/predicted_positive
positive_recall = correct_positive/actual_positive
accuracy = total_correct/total_predictions
print(positive_precision)
print(positive_recall)
print(accuracy)
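#Hedged sketch (stretch goal): F1 scores for both classes of the imaginary confusion matrix,
#combining the precision and recall values computed above.
positive_f1 = 2 * positive_precision * positive_recall / (positive_precision + positive_recall)
negative_f1 = 2 * negative_precision * negative_recall / (negative_precision + negative_recall)
print(positive_f1)
print(negative_f1)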
y_pred = rfc.predict(X_val_scaled)
#Getting the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_val, y_pred)
cm
columns_names = [f'Predicted {c}' for c in y_val.unique()]
index_names = [f'Actual {c}' for c in y_val.unique()]
cm_df = pd.DataFrame(cm, index = index_names, columns = columns_names)
import seaborn as sns
sns.heatmap(cm_df, cmap='viridis', annot=True, fmt='d')
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library. This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model.
###Code
!pip install category_encoders
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date'])
assert df.shape == (13958, 20)
# check for null values
df.isnull().sum()
df.head()
df['game_date'] = pd.to_datetime(df['game_date'], infer_datetime_format=True)
df['game_month'] = df['game_date'].dt.month
df['game_day'] = df['game_date'].dt.day
df['game_year'] = df['game_date'].dt.year
# add home advantage
df['home_adv'] = df['htm'] == 'GSW'
# more feature engineering
df['opponent'] = (df['vtm'].replace('GSW', '') + df['htm'].replace('GSW', ''))
df['sec_remain_period'] = (df['minutes_remaining'] * 60) + df['seconds_remaining']
df['sec_remain_game'] = (df['minutes_remaining'] * 60) + df['seconds_remaining'] + ((4-df['period']) * 12 * 60)
#train, val, test split (2017-18 season is validation, 2018-19 season is test)
train = df[(df['game_date'] >= '2009-10-1') & (df['game_date'] <= '2017-6-30')]
val = df[(df['game_date'] >= '2017-10-1') & (df['game_date'] <= '2018-6-30')]
test = df[(df['game_date'] >= '2018-10-1') & (df['game_date'] <= '2019-6-30')]
train.shape, val.shape, test.shape
target = 'shot_made_flag'
y_train = train[target]
y_train.value_counts(normalize=True)
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
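# Hedged sketch: the prompt asks for the baseline on the validation set, so the same
# majority-class guess can be scored against the validation target as well.
val_baseline = [majority_class] * len(val)
accuracy_score(val[target], val_baseline)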
features = df.columns.tolist()
features.remove('player_name')
features.remove('game_date')
features.remove('shot_made_flag')
from sklearn.pipeline import make_pipeline
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
pipeline = make_pipeline(ce.OrdinalEncoder()
, RandomForestClassifier(n_estimators=100, random_state=42, n_jobs=1))
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
###Output
Validation Accuracy 0.6266822703335284
###Markdown
Part 2
###Code
print('Validation Accuracy', pipeline.score(X_val, y_val))
y_test = test[target]
print('Test Accuracy', pipeline.score(X_test, y_test))
%matplotlib inline
import matplotlib.pyplot as plt
# get feature importances
rf = pipeline.named_steps['randomforestclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
n = 10
plt.figure(figsize = (10, n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey')
n = 10
plt.figure(figsize = (10, n/2))
plt.title(f'Bottom {n} features')
importances.sort_values()[:n].plot.barh(color='grey');
###Output
_____no_output_____
###Markdown
Confusion Matrix Predicted Negative Positive Actual Negative 85 58 Positive 8 36
###Code
total = 85 + 58 + 8 + 36
print('Accuracy:',(36+85)/total)
precision = 36/(36+58)
print('Precision:', precision)
recall = 36/(36+8)
print('Recall:', recall)
print('F1:', 2*(recall * precision) / (recall + precision))
from sklearn.metrics import confusion_matrix
from sklearn.utils.multiclass import unique_labels
import seaborn as sns
y_pred = pipeline.predict(X_test)
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_test, y_pred);
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
from sklearn.linear_model import LogisticRegression
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
X_test = test[features]
pipeline = make_pipeline(ce.OneHotEncoder()
, LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=500))
pipeline.fit(X_train, y_train)
print('Validation Accuracy', pipeline.score(X_val, y_val))
###Output
_____no_output_____
###Markdown
_Lambda School Data Science, Unit 2_ Classification 1 Sprint Challenge: Predict Steph Curry's shots 🏀For your Sprint Challenge, you'll use a dataset with all Steph Curry's NBA field goal attempts. (Regular season and playoff games, from October 28, 2009, through June 5, 2019.) You'll use information about the shot and the game to predict whether the shot was made. This is hard to predict! Try for an accuracy score in the high 50's or low 60's. The dataset was collected with the [nba_api](https://github.com/swar/nba_api) Python library.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
%matplotlib inline
!pip install category_encoders
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import QuantileTransformer
from sklearn.preprocessing import RobustScaler
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
from sklearn.pipeline import make_pipeline
import pandas as pd
url = 'https://drive.google.com/uc?export=download&id=1fL7KPyxgGYfQDsuJoBWHIWwCAf-HTFpX'
df = pd.read_csv(url, parse_dates=['game_date']).set_index('game_date')
assert df.shape == (13958, 19)
df.dtypes
df.index
df.tail()
df.isna().sum()
conts = df.select_dtypes('number')
conts.describe()
from yellowbrick.features import Rank2D
X = conts
y = df.shot_made_flag
visualizer = Rank2D(algorithm="pearson")
visualizer.fit_transform(X,y)
visualizer.poof()
cats = df.select_dtypes('object')
cats.describe()
df.describe(exclude='number').T.sort_values(by='unique')
df = df.drop(['game_id', 'game_event_id','player_name',], axis=1)
###Output
_____no_output_____
###Markdown
Baseline Predictions
###Code
y_train = df['shot_made_flag']
y_train.value_counts(normalize=True)
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
#score the all-majority-class predictions with accuracy_score
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
#about 47% of shots were made, so the majority-class (miss) baseline is roughly 53%
df.shot_made_flag.mean()
###Output
_____no_output_____
###Markdown
Test Train Validate Split
###Code
df_train = df['2009-10-28':'2017-9-28']
y_train = df_train['shot_made_flag']
df_train = df_train.drop('shot_made_flag', axis=1).copy()
print(df_train.shape)
df_train.head()
df_train.info()
df_val = df['2017-9-29':'2018-9-28'].copy()
y_val = df_val['shot_made_flag']
df_val = df_val.drop('shot_made_flag', axis=1)
print(df_val.shape)
df_val.info()
df_test = df['2018-9-1':]
y_test = df_test['shot_made_flag']
df_test = df_test.drop('shot_made_flag', axis=1).copy()
print(df_test.shape)
df_test.info()
catcode = [
'action_type','shot_zone_basic',
'shot_zone_area','shot_zone_range',
'htm','vtm',
]
numeric_features = df_train.select_dtypes('number').columns.tolist()
features = catcode + numeric_features
X_train_subset = df_train[features]
X_val_subset = df_val[features]
X_test = df_test[features]
###Output
_____no_output_____
###Markdown
Random Forest
###Code
Rf = RandomForestClassifier(n_estimators=800, n_jobs=-1)
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
QuantileTransformer(),
IterativeImputer(),
Rf
)
# Fit on train, score on val, predict on test
pipeline.fit(X_train_subset, y_train)
print('Train Accuracy', pipeline.score(X_train_subset, y_train))
print('Validation Accuracy', pipeline.score(X_val_subset, y_val))
y_pred = pipeline.predict(X_test)
# Get feature importances
encoder = pipeline.named_steps['onehotencoder']
rf = pipeline.named_steps['randomforestclassifier']
feature_names = encoder.transform(X_train_subset).columns
importances = pd.Series(Rf.feature_importances_, feature_names)
#feature importances
n = 20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='red');
from sklearn.metrics import confusion_matrix
#y_pred above was made on the test features, so score the validation set with its own predictions
y_val_pred = pipeline.predict(X_val_subset)
confusion_matrix(y_val, y_val_pred)
pipeline.named_steps['randomforestclassifier'].classes_
from sklearn.utils.multiclass import unique_labels
unique_labels(y_val)
def plot_confusion_matrix(y_true, y_pred):
labels = unique_labels(y_true)
columns = [f'Predicted {label}' for label in labels]
index = [f'Actual {label}' for label in labels]
table = pd.DataFrame(confusion_matrix(y_true, y_pred),
columns=columns, index=index)
return sns.heatmap(table, annot=True, fmt='d', cmap='viridis')
plot_confusion_matrix(y_val, y_val_pred);
from sklearn.metrics import classification_report
print(classification_report(y_val, y_val_pred))
###Output
precision recall f1-score support
0 0.48 0.46 0.47 603
1 0.45 0.47 0.46 565
accuracy 0.46 1168
macro avg 0.46 0.46 0.46 1168
weighted avg 0.46 0.46 0.46 1168
###Markdown
Logistic Regression
###Code
Lr = LogisticRegression(solver='lbfgs', multi_class='auto', max_iter=1000)
pipeline = make_pipeline(
ce.OneHotEncoder(use_cat_names=True),
QuantileTransformer(),
IterativeImputer(),
Lr
)
# Fit on train, score on val, predict on test
pipeline.fit(X_train_subset, y_train)
print('Train Accuracy', pipeline.score(X_train_subset, y_train))
print('Validation Accuracy', pipeline.score(X_val_subset, y_val))
y_pred2 = pipeline.predict(X_test)
###Output
_____no_output_____
###Markdown
This Sprint Challenge has two parts. To demonstrate mastery on each part, do all the required, numbered instructions. To earn a score of "3" for the part, also do the stretch goals. Part 1. Prepare to model Required1. **Do train/validate/test split.** Use the 2009-10 season through 2016-17 season to train, the 2017-18 season to validate, and the 2018-19 season to test. NBA seasons begin in October and end in June. You'll know you've split the data correctly when your train set has 11081 observations, your validation set has 1168 observations, and your test set has 1709 observations.2. **Begin with baselines for classification.** Your target to predict is `shot_made_flag`. What is the baseline accuracy for the validation set, if you guessed the majority class for every prediction?3. **Use Ordinal Encoding _or_ One-Hot Encoding,** for the categorical features you select.4. **Train a Random Forest _or_ Logistic Regression** with the features you select. Stretch goalsEngineer at least 4 of these 5 features:- **Homecourt Advantage**: Is the home team (`htm`) the Golden State Warriors (`GSW`) ?- **Opponent**: Who is the other team playing the Golden State Warriors?- **Seconds remaining in the period**: Combine minutes remaining with seconds remaining, to get the total number of seconds remaining in the period.- **Seconds remaining in the game**: Combine period, and seconds remaining in the period, to get the total number of seconds remaining in the game. A basketball game has 4 periods, each 12 minutes long.- **Made previous shot**: Was Steph Curry's previous shot successful? Part 2. Evaluate models Required1. Get your model's **validation accuracy.** (Multiple times if you try multiple iterations.)2. Get your model's **test accuracy.** (One time, at the end.)3. Get and plot your Random Forest's **feature importances** _or_ your Logistic Regression's **coefficients.**4. Imagine this is the confusion matrix for a binary classification model. **Calculate accuracy, precision, and recall for this confusion matrix:** Predicted Negative Positive Actual Negative 85 58 Positive 8 36 Stretch goals- Calculate F1 score for the provided, imaginary confusion matrix.- Plot a real confusion matrix for your basketball model, with row and column labels.- Print the classification report for your model.
###Code
#Accuracy on matrix
correct_predictions = 85 +36
total_predictions = 85 + 36 + 8 + 58
correct_predictions / total_predictions
#Recall of the positive class: true positives / actual positives
true_positive = 36
actual_positive = 36 + 8
true_positive/actual_positive
#Precision of the positive class: true positives / predicted positives
predicted_positive = 36 + 58
true_positive/predicted_positive
###Output
_____no_output_____ |
notebooks/00_quick_start/fastai_movielens.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43)
[GCC 7.3.0]
Pandas version: 0.23.4
Fast AI version: 1.0.46
Torch version: 1.0.0
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select Movielens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
train_valid_df, test_df = python_random_split(ratings_df, ratio=[0.75, 0.25])
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.htmlEmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating RecommendationsLoad the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
# drop the first entry of each array, fastai's placeholder class for unknown ids
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were know in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
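# For reference, a hedged sketch of the same cross join in plain pandas (cartesian_product
# above is the reco_utils helper; this is only an equivalent illustration, not used below):
# users_items = (pd.DataFrame({USER: test_users}).assign(key=1)
#                  .merge(pd.DataFrame({ITEM: total_items}).assign(key=1), on='key')
#                  .drop(columns='key'))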
###Output
_____no_output_____
###Markdown
Lastly, remove the user/items combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.concat([users_items, train_valid_df[[USER,ITEM]]]).drop_duplicates(keep=False)
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
start_time = time.time()
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION, top_k=TOP_K)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.967883825302124 seconds for 1439504 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.021576
NDCG: 0.136680
Precision@K: 0.127147
Recall@K: 0.050106
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next look at how well the model predicts how the user would rate the movie. Need to score `test_df`, but this time don't ask for top_k.
###Code
scores = score(learner, test_df=test_df,
user_col=USER, item_col=ITEM, prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.921269
MAE: 0.729055
Explained variance: 0.348939
R squared: 0.348134
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("rmse", eval_rmse)
pm.record("mae", eval_mae)
pm.record("exp_var", eval_exp_var)
pm.record("rsquared", eval_r2)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Pandas version: 0.24.1
Fast AI version: 1.0.46
Torch version: 1.0.1.post2
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select MovieLens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING, valid_pct=0)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
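###Markdown
For intuition, the forward pass of `EmbeddingDotBias` is essentially a dot product of the user and item embedding vectors plus two bias terms, squashed into `y_range` with a scaled sigmoid. The cell below is only a minimal sketch of that idea (my own illustration, not fastai's actual implementation), using random tensors in place of learned embeddings.
###Code
import torch
def dot_bias_sketch(u_emb, i_emb, u_bias, i_bias, y_range=(0, 5.5)):
    # u_emb, i_emb: (batch, n_factors); u_bias, i_bias: (batch,)
    raw = (u_emb * i_emb).sum(dim=1) + u_bias + i_bias
    lo, hi = y_range
    # sigmoid rescaled so that predictions always land inside y_range
    return torch.sigmoid(raw) * (hi - lo) + lo
u_toy, i_toy = torch.randn(3, N_FACTORS), torch.randn(3, N_FACTORS)
dot_bias_sketch(u_toy, i_toy, torch.zeros(3), torch.zeros(3))
###Output
_____no_output_____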
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
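###Markdown
The sketch below only illustrates the shape of a cosine-annealed schedule decaying from `max_lr`; the real one-cycle policy used by `fit_one_cycle` also includes a warm-up phase, and the learning rates actually applied are recorded on `learn.recorder`.
###Code
import matplotlib.pyplot as plt
n_iter = 100
steps = np.arange(n_iter)
lrs = 5e-3 * 0.5 * (1 + np.cos(np.pi * steps / (n_iter - 1)))  # cosine decay from max_lr towards 0
plt.plot(steps, lrs)
plt.xlabel("iteration"); plt.ylabel("learning rate"); plt.title("Cosine annealing (illustration only)");
###Output
_____no_output_____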
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating RecommendationsLoad the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
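# drop the first class in each list: fastai reserves index 0 for the '#na#' placeholder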
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not known in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
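# Left-join every candidate pair against the training ratings; rows whose rating is NaN had no match,
# i.e. they are user/item pairs that never occur in the training set.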
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
start_time = time.time()
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.928511142730713 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026112
NDCG: 0.155062
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next look at how well the model predicts how the user would rate the movie. Need to score `test_df` user-items only.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902386
MAE: 0.712164
Explained variance: 0.346513
R squared: 0.345662
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("rmse", eval_rmse)
pm.record("mae", eval_mae)
pm.record("exp_var", eval_exp_var)
pm.record("rsquared", eval_r2)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
###Output
/data/anaconda/envs/reco_gpu/lib/python3.6/site-packages/ipykernel_launcher.py:2: DeprecationWarning: Function record is deprecated and will be removed in verison 1.0.0 (current version 0.19.0). Please see `scrapbook.glue` (nteract-scrapbook) as a replacement for this functionality.
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43)
[GCC 7.3.0]
Pandas version: 0.23.4
Fast AI version: 1.0.46
Torch version: 1.0.0
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select Movielens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
train_valid_df, test_df = python_random_split(ratings_df, ratio=[0.75, 0.25])
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating RecommendationsDefine two helper functions
###Code
def cartesian_product(*arrays):
la = len(arrays)
dtype = np.result_type(*arrays)
arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype)
for i, a in enumerate(np.ix_(*arrays)):
arr[...,i] = a
return arr.reshape(-1, la)
def score(learner, test_df, user_col, item_col, prediction_col, top_k=0):
"""score all users+movies provided and reduce to top_k items per user if top_k>0"""
# replace values not known to the model with #na#
total_users, total_items = learner.data.train_ds.x.classes.values()
test_df.loc[~test_df[user_col].isin(total_users),user_col] = total_users[0]
test_df.loc[~test_df[item_col].isin(total_items),item_col] = total_items[0]
# map ids to embedding ids
u = learner.get_idx(test_df[user_col], is_item=False)
m = learner.get_idx(test_df[item_col], is_item=True)
# score the pytorch model
pred = learner.model.forward(u, m)
scores = pd.DataFrame({user_col: test_df[user_col], item_col:test_df[item_col], prediction_col:pred})
scores = scores.sort_values([user_col,prediction_col],ascending=[True,False])
if top_k > 0:
top_scores = scores.groupby(user_col).head(top_k).reset_index(drop=True)
else:
top_scores = scores
return top_scores
###Output
_____no_output_____
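###Markdown
As a quick sanity check (my own aside), on toy inputs `cartesian_product` simply enumerates every possible (user, item) pair:
###Code
# 2 users x 3 items -> 6 rows, one per combination
cartesian_product(np.array(["u1", "u2"]), np.array(["m1", "m2", "m3"]))
###Output
_____no_output_____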
###Markdown
Load the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not known in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
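# Anti-join trick: append the training (user, item) pairs twice, then drop every duplicated row (keep=False).
# Candidate pairs that occur in training become duplicates and disappear; only never-seen pairs survive.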
training_removed = pd.concat([users_items, train_valid_df[[USER,ITEM]], train_valid_df[[USER,ITEM]]]).drop_duplicates(keep=False)
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
start_time = time.time()
top_k_scores = score(learner, test_df=training_removed,
user_col=USER, item_col=ITEM, prediction_col=PREDICTION, top_k=TOP_K)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.967883825302124 seconds for 1439504 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.021576
NDCG: 0.136680
Precision@K: 0.127147
Recall@K: 0.050106
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next look at how well the model predicts how the user would rate the movie. Need to score `test_df`, but this time don't ask for top_k.
###Code
scores = score(learner, test_df=test_df,
user_col=USER, item_col=ITEM, prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.921269
MAE: 0.729055
Explained variance: 0.348939
R squared: 0.348134
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("rmse", eval_rmse)
pm.record("mae", eval_mae)
pm.record("exp_var", eval_exp_var)
pm.record("rsquared", eval_r2)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import papermill as pm
import torch, fastai
from fastai.collab import *
from fastai.tabular import *
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_random_split
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.7 | packaged by conda-forge | (default, Nov 21 2018, 03:09:43)
[GCC 7.3.0]
Pandas version: 0.24.0
Fast AI version: 1.0.42
Torch version: 1.0.0
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select Movielens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
ratings_df.head()
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
train_valid_df, test_df = python_random_split(ratings_df, ratio=[0.75, 0.25])
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating RecommendationsDefine two helper functions
###Code
def cartesian_product(*arrays):
la = len(arrays)
dtype = np.result_type(*arrays)
arr = np.empty([len(a) for a in arrays] + [la], dtype=dtype)
for i, a in enumerate(np.ix_(*arrays)):
arr[...,i] = a
return arr.reshape(-1, la)
def score(learner, test_df, user_col, item_col, prediction_col, top_k=0):
"""score all users+movies provided and reduce to top_k items per user if top_k>0"""
# replace values not known to the model with #na#
total_users, total_items = learner.data.classes.values()
test_df.loc[~test_df[user_col].isin(total_users),user_col] = total_users[0]
test_df.loc[~test_df[item_col].isin(total_items),item_col] = total_items[0]
# map ids to embedding ids
u = learner.get_idx(test_df[user_col], is_item=False)
m = learner.get_idx(test_df[item_col], is_item=True)
# score the pytorch model
pred = learner.model.forward(u, m)
scores = pd.DataFrame({user_col: test_df[user_col], item_col:test_df[item_col], prediction_col:pred})
scores = scores.sort_values([user_col,prediction_col],ascending=[True,False])
if top_k > 0:
top_scores = scores.groupby(user_col).head(top_k).reset_index(drop=True)
else:
top_scores = scores
return top_scores
###Output
_____no_output_____
###Markdown
Load the learner from disk.
###Code
learner = load_learner(path=Path('.'),
fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.classes.values()
total_items = np.array(total_items[1:])
total_users = np.array(total_users[1:])
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not known in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.concat([users_items, train_valid_df[[USER,ITEM]], train_valid_df[[USER,ITEM]]]).drop_duplicates(keep=False)
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
start_time = time.time()
top_k_scores = score(learner, test_df=training_removed,
user_col=USER, item_col=ITEM, prediction_col=PREDICTION, top_k=TOP_K)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.993603229522705 seconds for 1433851 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.021485
NDCG: 0.137494
Precision@K: 0.124284
Recall@K: 0.045587
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next look at how well the model predicts how the user would rate the movie. Need to score `test_df`, but this time don't ask for top_k.
###Code
scores = score(learner, test_df=test_df,
user_col=USER, item_col=ITEM, prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.912115
MAE: 0.723051
Explained variance: 0.357081
R squared: 0.356302
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("rmse", eval_rmse)
pm.record("mae", eval_mae)
pm.record("exp_var", eval_exp_var)
pm.record("rsquared", eval_r2)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License. FastAI RecommenderThis notebook shows how to use the [FastAI](https://fast.ai) recommender which is using [Pytorch](https://pytorch.org/) under the hood.
###Code
# set the environment path to find Recommenders
import sys
sys.path.append("../../")
import time
import os
import itertools
import pandas as pd
import numpy as np
import papermill as pm
import torch, fastai
from fastai.collab import EmbeddingDotBias, collab_learner, CollabDataBunch, load_learner
from reco_utils.dataset import movielens
from reco_utils.dataset.python_splitters import python_stratified_split
from reco_utils.recommender.fastai.fastai_utils import cartesian_product, score
from reco_utils.evaluation.python_evaluation import map_at_k, ndcg_at_k, precision_at_k, recall_at_k
from reco_utils.evaluation.python_evaluation import rmse, mae, rsquared, exp_var
print("System version: {}".format(sys.version))
print("Pandas version: {}".format(pd.__version__))
print("Fast AI version: {}".format(fastai.__version__))
print("Torch version: {}".format(torch.__version__))
print("Cuda Available: {}".format(torch.cuda.is_available()))
print("CuDNN Enabled: {}".format(torch.backends.cudnn.enabled))
###Output
System version: 3.6.8 |Anaconda, Inc.| (default, Dec 30 2018, 01:22:34)
[GCC 7.3.0]
Pandas version: 0.24.1
Fast AI version: 1.0.46
Torch version: 1.0.1.post2
Cuda Available: True
CuDNN Enabled: True
###Markdown
Defining some constants to refer to the different columns of our dataset.
###Code
USER, ITEM, RATING, TIMESTAMP, PREDICTION, TITLE = 'UserId', 'MovieId', 'Rating', 'Timestamp', 'Prediction', 'Title'
# top k items to recommend
TOP_K = 10
# Select Movielens data size: 100k, 1m, 10m, or 20m
MOVIELENS_DATA_SIZE = '100k'
# Model parameters
N_FACTORS = 40
EPOCHS = 5
ratings_df = movielens.load_pandas_df(
size=MOVIELENS_DATA_SIZE,
header=[USER,ITEM,RATING,TIMESTAMP]
)
# make sure the IDs are loaded as strings to better prevent confusion with embedding ids
ratings_df[USER] = ratings_df[USER].astype('str')
ratings_df[ITEM] = ratings_df[ITEM].astype('str')
ratings_df.head()
# Split the dataset
train_valid_df, test_df = python_stratified_split(
ratings_df,
ratio=0.75,
min_rating=1,
filter_by="item",
col_user=USER,
col_item=ITEM
)
###Output
_____no_output_____
###Markdown
Training
###Code
# fix random seeds to make sure our runs are reproducible
np.random.seed(101)
torch.manual_seed(101)
torch.cuda.manual_seed_all(101)
start_time = time.time()
data = CollabDataBunch.from_df(train_valid_df, user_name=USER, item_name=ITEM, rating_name=RATING, valid_pct=0)
preprocess_time = time.time() - start_time
data.show_batch()
###Output
_____no_output_____
###Markdown
Now we will create a `collab_learner` for the data, which by default uses the [EmbeddingDotBias](https://docs.fast.ai/collab.html#EmbeddingDotBias) model. We will be using 40 latent factors. This will create an embedding for the users and the items that will map each of these to 40 floats as can be seen below. Note that the embedding parameters are not predefined, but are learned by the model.Although ratings can only range from 1-5, we are setting the range of possible ratings to a range from 0 to 5.5 -- that will allow the model to predict values around 1 and 5, which improves accuracy. Lastly, we set a value for weight-decay for regularization.
###Code
learn = collab_learner(data, n_factors=N_FACTORS, y_range=[0,5.5], wd=1e-1)
learn.model
###Output
_____no_output_____
###Markdown
Now train the model for 5 epochs setting the maximal learning rate. The learner will reduce the learning rate with each epoch using cosine annealing.
###Code
start_time = time.time()
learn.fit_one_cycle(EPOCHS, max_lr=5e-3)
train_time = time.time() - start_time + preprocess_time
print("Took {} seconds for training.".format(train_time))
###Output
_____no_output_____
###Markdown
Save the learner so it can be loaded back later for inferencing / generating recommendations
###Code
learn.export('movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Generating RecommendationsLoad the learner from disk.
###Code
learner = load_learner(path=".", fname='movielens_model.pkl')
###Output
_____no_output_____
###Markdown
Get all users and items that the model knows
###Code
total_users, total_items = learner.data.train_ds.x.classes.values()
total_items = total_items[1:]
total_users = total_users[1:]
###Output
_____no_output_____
###Markdown
Get all users from the test set and remove any users that were not known in the training set
###Code
test_users = test_df[USER].unique()
test_users = np.intersect1d(test_users, total_users)
###Output
_____no_output_____
###Markdown
Build the cartesian product of test set users and all items known to the model
###Code
users_items = cartesian_product(np.array(test_users),np.array(total_items))
users_items = pd.DataFrame(users_items, columns=[USER,ITEM])
###Output
_____no_output_____
###Markdown
Lastly, remove the user/item combinations that are in the training set -- we don't want to propose a movie that the user has already watched.
###Code
training_removed = pd.merge(users_items, train_valid_df.astype(str), on=[USER, ITEM], how='left')
training_removed = training_removed[training_removed[RATING].isna()][[USER, ITEM]]
###Output
_____no_output_____
###Markdown
Score the model to find the top K recommendation
###Code
start_time = time.time()
top_k_scores = score(learner,
test_df=training_removed,
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
test_time = time.time() - start_time
print("Took {} seconds for {} predictions.".format(test_time, len(training_removed)))
###Output
Took 1.928511142730713 seconds for 1511060 predictions.
###Markdown
Calculate some metrics for our model
###Code
eval_map = map_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_ndcg = ndcg_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_precision = precision_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
eval_recall = recall_at_k(test_df, top_k_scores, col_user=USER, col_item=ITEM,
col_rating=RATING, col_prediction=PREDICTION,
relevancy_method="top_k", k=TOP_K)
print("Model:\t" + learn.__class__.__name__,
"Top K:\t%d" % TOP_K,
"MAP:\t%f" % eval_map,
"NDCG:\t%f" % eval_ndcg,
"Precision@K:\t%f" % eval_precision,
"Recall@K:\t%f" % eval_recall, sep='\n')
###Output
Model: CollabLearner
Top K: 10
MAP: 0.026112
NDCG: 0.155062
Precision@K: 0.136691
Recall@K: 0.054940
###Markdown
The above numbers are lower than [SAR](../sar_single_node_movielens.ipynb), but expected, since the model is explicitly trying to generalize the users and items to the latent factors. Next look at how well the model predicts how the user would rate the movie. Need to score `test_df` user-items only.
###Code
scores = score(learner,
test_df=test_df.copy(),
user_col=USER,
item_col=ITEM,
prediction_col=PREDICTION)
###Output
_____no_output_____
###Markdown
Now calculate some regression metrics
###Code
eval_r2 = rsquared(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_rmse = rmse(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_mae = mae(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
eval_exp_var = exp_var(test_df, scores, col_user=USER, col_item=ITEM, col_rating=RATING, col_prediction=PREDICTION)
print("Model:\t" + learn.__class__.__name__,
"RMSE:\t%f" % eval_rmse,
"MAE:\t%f" % eval_mae,
"Explained variance:\t%f" % eval_exp_var,
"R squared:\t%f" % eval_r2, sep='\n')
###Output
Model: CollabLearner
RMSE: 0.902386
MAE: 0.712164
Explained variance: 0.346513
R squared: 0.345662
###Markdown
That RMSE is actually quite good when compared to these benchmarks: https://www.librec.net/release/v1.3/example.html
###Code
# Record results with papermill for tests
pm.record("map", eval_map)
pm.record("ndcg", eval_ndcg)
pm.record("precision", eval_precision)
pm.record("recall", eval_recall)
pm.record("rmse", eval_rmse)
pm.record("mae", eval_mae)
pm.record("exp_var", eval_exp_var)
pm.record("rsquared", eval_r2)
pm.record("train_time", train_time)
pm.record("test_time", test_time)
###Output
/data/anaconda/envs/reco_gpu/lib/python3.6/site-packages/ipykernel_launcher.py:2: DeprecationWarning: Function record is deprecated and will be removed in verison 1.0.0 (current version 0.19.0). Please see `scrapbook.glue` (nteract-scrapbook) as a replacement for this functionality.
|
notebooks/01-training.ipynb | ###Markdown
Training Models with the Data-Driven LibraryThe datadriven library provides an extensible command-line interface for training, evaluating, and predicting with data-driven simulators. However, you may prefer to train and sweep models inside a notebook. This notebook provides an example of doing so. Set Working Directory and Import Necessary Libraries
###Code
cd ..
from hydra.experimental import initialize, compose
from omegaconf import DictConfig, ListConfig, OmegaConf
from model_loader import available_models
from base import plot_parallel_coords
import logging
import matplotlib.pyplot as plt
import numpy as np
from rich import print
from rich.logging import RichHandler
import copy
import pandas as pd
logging.basicConfig(
level=logging.INFO,
format="%(message)s",
datefmt="[%X]",
handlers=[RichHandler()]
)
logger = logging.getLogger("ddm_training")
logger.setLevel(logging.INFO)
###Output
_____no_output_____
###Markdown
Initialize ConfigurationWhile you can provide every argument manually, it is worth using the `hydra` config machinery directly to load an existing configuration file. This way your parameters are saved to a file for later use, and you automatically get all the logging and model artifacts provided by our `hydra` and `mlflow` workflow.If you want to override any of the configuration's settings, provide them in a list of `overrides` as shown below.
###Code
initialize(config_path="../conf", job_name="ddm_training")
cfg = compose(config_name="config", overrides=["data=cartpole_st1_at", "model=torch"])
print(OmegaConf.to_yaml(cfg))
# Extract features from yaml file
input_cols = cfg['data']['inputs']
output_cols = cfg['data']['outputs']
augmented_cols = cfg['data']['augmented_cols']
dataset_path = cfg['data']['path']
iteration_order = cfg['data']['iteration_order']
episode_col = cfg['data']['episode_col']
iteration_col = cfg['data']['iteration_col']
max_rows = cfg['data']['max_rows']
###Output
_____no_output_____
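###Markdown
Overrides use Hydra's `group=option` and `dotted.path=value` syntax. As a purely hypothetical example (the value 1000 is a placeholder; `data.max_rows` is just one of the keys visible in the config above), the same compose call could cap how many rows get loaded:
###Code
# compose a second config with an extra dotted-path override (illustrative values only)
cfg_small = compose(config_name="config", overrides=["data=cartpole_st1_at", "model=torch", "data.max_rows=1000"])
print(cfg_small["data"]["max_rows"])
###Output
_____no_output_____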
###Markdown
Model TrainerTo make it easy to sweep over models later, we create a simple `train_models` function here:
###Code
def train_models(config=cfg):
logger.info(f'Model type: {available_models[config["model"]["name"]]}')
Model = available_models[config["model"]["name"]]
global model
model = Model()
logger.info(f"Loading data from {dataset_path}")
global X, y
X, y = model.load_csv(
input_cols=input_cols,
output_cols=output_cols,
augm_cols=list(augmented_cols),
dataset_path=dataset_path,
iteration_order=iteration_order,
episode_col=episode_col,
iteration_col=iteration_col,
max_rows=max_rows,
)
logger.info(f"Building model with parameters: {config}")
model.build_model(
**config["model"]["build_params"]
)
logger.info(f"Fitting model...")
model.fit(X, y)
logger.info(f"Model trained!")
return model
model = train_models(cfg)
###Output
_____no_output_____
###Markdown
Hyperparameter SweepingThe `datadrivenmodel` has an automatic solution for hyperparameter sweeping and tuning. These settings are provided in the config's `model.sweep` parameters. Provide the limits of the variables you want to sweep over, and the `sweep` method will automatically parallelize the sweep over the available number of cores and find the optimal solution according to your `scoring_func`. Configuration ParametersYou can select the search algorithm you'd like to use: `bayesian` runs Bayesian optimization (using scikit-optimize), `hyperopt` runs [Tree-Parzen Estimators](https://papers.nips.cc/paper/2011/file/86e8f7ab32cfd12577bc2619bc635690-Paper.pdf) with the `hyperopt` package, `bohb` uses Bayesian Optimization/HyperBand, and `optuna` also runs Tree-Parzen estimators but using the [`optuna`](https://optuna.readthedocs.io/en/stable/) package.
###Code
print(OmegaConf.to_yaml(cfg["model"]["sweep"]))
params = OmegaConf.to_container(cfg["model"]["sweep"]["params"])
logger.info(f"Sweeping with parameters: {params}")
sweep_df = model.sweep(
params=params,
X=X,
y=y,
search_algorithm=cfg["model"]["sweep"]["search_algorithm"],
num_trials=cfg["model"]["sweep"]["num_trials"],
scoring_func=cfg["model"]["sweep"]["scoring_func"],
results_csv_path=cfg["model"]["sweep"]["results_csv_path"],
)
sweep_df.head()
###Output
_____no_output_____
###Markdown
Results and OutputsAll outputs are saved to a timestamped directory in `outputs`. This includes the model artifacts, hyperparameter tuning results, and a verbose log of the entire run.
###Code
ls -lh outputs/
!tree -L 3 outputs/2021-04-21/
###Output
_____no_output_____
###Markdown
Visualizing Hyperparameter Results
###Code
plot_parallel_coords(sweep_df)
###Output
_____no_output_____
###Markdown
Reading Saved Runs from CSVRuns are automatically saved to a CSV in the outputs directory:
###Code
sweep_df2 = pd.read_csv("outputs/2021-04-21/10-29-25/xgboost_gridsearch/search_results.csv")
plot_parallel_coords(sweep_df2)
###Output
_____no_output_____ |
XGBoost_Kumar.ipynb | ###Markdown
Author: Kumar R. XGBoost Problem Statement: In this assignment we are going to predict whether a person makes over 50K per year or not from the classic adult dataset using XGBoost. The description of the dataset is as follows: extraction was done by Barry Becker from the 1994 Census database. A set of reasonably clean records was extracted using the following conditions: ((AAGE>16) && (AGI>100) && (AFNLWGT>1) && (HRSWK>0)). Attribute information (label: >50K, <=50K):
* age: continuous.
* workclass: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked.
* fnlwgt: continuous.
* education: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool.
* education-num: continuous.
* marital-status: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse.
* occupation: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces.
* relationship: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried.
* race: White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other, Black.
* sex: Female, Male.
* capital-gain: continuous.
* capital-loss: continuous.
* hours-per-week: continuous.
* native-country: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands.
###Code
#Import the required libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
#Load the training and testing dataset
train_set = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data', header = None)
test_set = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test' , skiprows = 1, header = None)
#column names of the dataset
col_labels = ['age', 'workclass', 'fnlwgt', 'education', 'education_num','marital_status', 'occupation','relationship', 'race', 'sex', 'capital_gain','capital_loss', 'hours_per_week', 'native_country', 'wage_class']
#Assign the column names
train_set.columns = col_labels
test_set.columns = col_labels
#Concatenation of the training and testing datasets
data = pd.concat([train_set,test_set])
###Output
_____no_output_____
###Markdown
EDA (Exploratory Data Analysis)
###Code
data.info()
data.head()
#Check if there are any missing values in all the columns
data.isnull().sum()
data.replace(' ?', np.nan, inplace=True)
data.isnull().sum()
###Output
_____no_output_____
###Markdown
AGE
###Code
sns.distplot(data['age'],kde_kws={"color": "g", "lw": 2, "label": "KDE"})
#mean of the age column
print("Mean of the age: ",round(data['age'].mean()))
###Output
Mean of the age: 39
###Markdown
Work_Class
###Code
data['workclass'].value_counts()
data['workclass'].unique()
#Replace ' Without-pay' with ' Never-worked'
data = data.replace(' Without-pay',' Never-worked')
data['workclass'].value_counts()
#Fill NaN with '0' for the time being (could also fill with the column's mode after analysing the other columns)
data['workclass'].fillna('0', inplace=True)
#Count plot
plt.figure(figsize=(10,6))
sns.countplot('workclass', data=data)
plt.xticks(rotation=45)
###Output
_____no_output_____
###Markdown
fnlwgt
###Code
data['fnlwgt'].plot(kind='kde')
#Check if there are any negative values, as that would contradict domain knowledge.
def check(col):
if col<0:
return True
data['fnlwgt'].apply(check).any()
data['fnlwgt'].apply(check).sum()
#Reducing the magnitude
data['fnlwgt'] = data['fnlwgt'].apply(lambda x: np.log1p(x))
sns.distplot(data['fnlwgt'])
###Output
_____no_output_____
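###Markdown
As an illustrative aside (not part of the original notebook): `np.log1p` computes `log(1 + x)`, which compresses the very large `fnlwgt` values while keeping zero at zero, and `np.expm1` inverts it exactly.
###Code
# round-trip check: expm1 undoes log1p (illustration only)
x = np.array([0.0, 10.0, 1e5])
np.expm1(np.log1p(x))  # should return the original values (up to float precision)
###Output
_____no_output_____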
###Markdown
education
###Code
#Values and their counts
data['education'].value_counts()
data['education'].unique()
#Function to replace classes of school and pre_university
def edu(column):
if column in [' 1st-4th',' 5th-6th',' 7th-8th',' 9th',' 10th',]:
return ' School'
elif column in [' 11th',' 12th']:
return ' Pre_uni'
else:
return column
#Apply the function on education column
data['education'] = data['education'].apply(edu)
data['education'].unique()
plt.figure(figsize=(10,6))
sns.countplot(data['education'])
plt.xticks(rotation=45)
###Output
_____no_output_____
###Markdown
wage_class
###Code
data['wage_class'].value_counts()
#Get the unique values
data['wage_class'].unique()
#Replace it with 0 and 1
data = data.replace({' <=50K':0,' >50K':1,' <=50K.':0,' >50K.':1})
count = data['wage_class'].value_counts()
count
#Visualization of the values
plt.figure(figsize=(6,6))
count.plot(kind='pie', autopct='%.2f' )
plt.legend()
###Output
_____no_output_____
###Markdown
76% of the people belong to the wage class earning less than 50K per year
###Code
sns.catplot(x='education',y='wage_class',data=data,height=10,palette='muted',kind='bar')
plt.xticks(rotation=60)
###Output
_____no_output_____
###Markdown
education_num
###Code
data['education_num'].plot(kind='kde')
###Output
_____no_output_____
###Markdown
marital_status
###Code
data['marital_status'].value_counts()
data['marital_status'].unique()
#Replacement
data['marital_status'].replace(' Married-civ-spouse',' Married-AF-spouse', inplace=True)
count_mar = data['marital_status'].value_counts()
count_mar
plt.figure(figsize=(6,6))
count_mar.plot(kind='pie', autopct='%.2f')
plt.title('Percentage of people by marital status category')
###Output
_____no_output_____
###Markdown
occupation
###Code
data['occupation'].value_counts()
data['occupation'].isnull().sum()
#Filling the missing values with '0'.
data['occupation'].fillna('0', inplace=True)
data['occupation'].isnull().any()
data['occupation'].replace(' Armed-Forces','0',inplace=True)
data['occupation'].value_counts()
sns.catplot(x='occupation',y='wage_class',data=data, kind='bar',height=8, hue='sex')
plt.xticks(rotation=90)
###Output
_____no_output_____
###Markdown
relationship
###Code
value_rel = data['relationship'].value_counts()
value_rel
plt.figure(figsize=(6,6))
value_rel.plot(kind='pie',autopct='%.2f')
plt.title("% of people and relationship")
plt.show()
data['native_country'].unique()
#Function to map countries to their continents/regions
def native(country):
if country in [' United-States',' Canada']:
return 'North_America'
elif country in [' Puerto-Rico',' El-Salvador',' Cuba',' Jamaica',' Dominican-Republic',' Guatemala',' Haiti',' Nicaragua',' Trinadad&Tobago',' Honduras']:
return 'Central_America'
elif country in [' Mexico',' Columbia',' Vietnam',' Peru',' Ecuador',' South',' Outlying-US(Guam-USVI-etc)']:
return 'South_America'
elif country in [' Germany',' England',' Italy',' Poland',' Portugal',' Greece',' Yugoslavia',' France',' Ireland',' Scotland',' Hungary',' Holand-Netherlands']:
return 'EU'
elif country in [' India',' Iran',' China',' Japan',' Thailand',' Hong',' Cambodia',' Laos',' Philippines',' Taiwan']:
return 'Asian'
else:
return country
#Apply the native function on native_country column
data['native_country'] = data['native_country'].apply(native)
data['native_country'].fillna('0', inplace=True)
native = data['native_country'].value_counts()
native
#Visualization
plt.figure(figsize=(8,8))
explod = (0,0.1,0.2,0.3,0.4,0.5)
label = ['North_America','South_America','Central_America','Asian','None','EU']
plt.pie(native,
explode=explod,
labels=label,
autopct='%.3f')
plt.title("Native Countries")
plt.show()
###Output
_____no_output_____
###Markdown
capital_gain
###Code
#Check if there are any negative values in capital_gain
data['capital_gain'].apply(check).any(), data['capital_gain'].apply(check).sum()
###Output
_____no_output_____
###Markdown
Correlation test
###Code
#Correlation test
cor = data.corr()
plt.figure(figsize=(8,8))
sns.heatmap(cor,annot=True)
###Output
_____no_output_____
###Markdown
Since the column 'fnlwgt' is only very weakly correlated with the label, I'm dropping it.
###Code
data = data.drop('fnlwgt',axis=1)
###Output
_____no_output_____
###Markdown
Model Creation
###Code
#Separate the features and the label from the dataset
feature = data.iloc[:,:-1].values
label = data.iloc[:,13].values
#Convert the categorical columns into numeric
#Label Encoder
from sklearn.preprocessing import LabelEncoder
le_workclass = LabelEncoder()
le_education = LabelEncoder()
le_marital = LabelEncoder()
le_occupation = LabelEncoder()
le_relationship = LabelEncoder()
le_race = LabelEncoder()
le_sex = LabelEncoder()
le_native = LabelEncoder()
feature[:,1] = le_workclass.fit_transform(feature[:,1])
feature[:,2] = le_education.fit_transform(feature[:,2])
feature[:,4] = le_marital.fit_transform(feature[:,4])
feature[:,5] = le_occupation.fit_transform(feature[:,5])
feature[:,6] = le_relationship.fit_transform(feature[:,6])
feature[:,7] = le_race.fit_transform(feature[:,7])
feature[:,8] = le_sex.fit_transform(feature[:,8])
feature[:,12] = le_native.fit_transform(feature[:,12])
feature[:10,1]
#OneHotEncoder
from sklearn.preprocessing import OneHotEncoder
ohe = OneHotEncoder(sparse=False)
ohe_feature = ohe.fit_transform(feature[:,(1,2,4,5,6,7,8,12)])
print(ohe_feature.shape)
ohe_feature
#Concatenate the one-hot encoded features with the remaining numeric features
final_feature = np.concatenate((ohe_feature,feature[:,[0,3,9,10,11]]), axis=1)
final_feature.shape
final_feature
# Alternative kept for reference (note: `try_feature` is not defined above, so this stays inside a string literal)
'''from sklearn.preprocessing import OneHotEncoder
ohe_2 = OneHotEncoder(sparse=False)
new_feature = ohe_2.fit_transform(try_feature[:,(1,2,4,5,6,7,8,12)])
new_feature'''
###Output
_____no_output_____
###Markdown
The above (string-literal) OneHotEncoder code alone could produce the output we need, without the separate LabelEncoder step.
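A minimal sketch of the same idea with `ColumnTransformer` (my own illustration, not part of the original assignment): a single transformer can one-hot encode the categorical column positions and pass the numeric ones through, replacing the manual LabelEncoder + OneHotEncoder + concatenate steps. `OneHotEncoder` accepts the raw string columns directly, so the label-encoding step is not required.
###Code
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

cat_idx = [1, 2, 4, 5, 6, 7, 8, 12]  # categorical column positions in `feature`
ct = ColumnTransformer(
    [("ohe", OneHotEncoder(sparse=False, handle_unknown="ignore"), cat_idx)],
    remainder="passthrough"  # keep the numeric columns as-is
)
# `feature` is already label-encoded at this point, but the same call works on the raw strings too
alt_feature = ct.fit_transform(feature)
print(alt_feature.shape)  # same number of columns as final_feature
###Output
_____no_output_____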
###Code
#Train Test Split
from sklearn.model_selection import train_test_split
for i in range(50):
x_train,x_test,y_train,y_test = train_test_split(final_feature,label, test_size=0.2,random_state=i)
#Model training
from xgboost import XGBRFClassifier
model_1 = XGBRFClassifier()
model_1.fit(x_train,y_train)
train_score = model_1.score(x_train,y_train)
test_score = model_1.score(x_test,y_test)
if test_score>train_score:
print(f"test score: {test_score}, train score: {train_score}, RS: {i}")
###Output
test score: 0.8622172177295526, train score: 0.8575230977913137, RS: 0
test score: 0.8599651960282526, train score: 0.856755304174238, RS: 5
test score: 0.8605793837649708, train score: 0.856089883039439, RS: 6
test score: 0.8596581021598936, train score: 0.8571903872239142, RS: 9
test score: 0.8582249974408844, train score: 0.8582141120466819, RS: 17
test score: 0.8616030299928344, train score: 0.8569600491387915, RS: 19
test score: 0.8585320913092436, train score: 0.8575230977913137, RS: 24
test score: 0.8586344559320299, train score: 0.8574719115501753, RS: 25
test score: 0.858941549800389, train score: 0.858239705167251, RS: 30
test score: 0.8592486436687481, train score: 0.857830215238144, RS: 32
test score: 0.8607841130105436, train score: 0.856755304174238, RS: 37
test score: 0.8599651960282526, train score: 0.8561154761600082, RS: 41
###Markdown
Hyper Parameter Tuning
###Code
parameters = [{'learning_rate':[0.1,0.5,1],
'n_estimators':[5,10,15],
'max_depth':[3,5,10]}]
from sklearn.model_selection import GridSearchCV
grid_sc = GridSearchCV(model_1,
param_grid=parameters,
scoring='accuracy',
n_jobs=3,
cv=10,
verbose=3)
grid_sc.fit(x_train,y_train)
grid_sc.best_score_
grid_sc.best_params_
from sklearn.model_selection import train_test_split
for i in range(50):
x_train,x_test,y_train,y_test = train_test_split(final_feature,label, test_size=0.2,random_state=i)
from xgboost import XGBRFClassifier
model_1 = XGBRFClassifier(learning_rate=0.1,
max_depth=10,
n_estimators=5)
model_1.fit(x_train,y_train)
train_score = model_1.score(x_train,y_train)
test_score = model_1.score(x_test,y_test)
if test_score>train_score:
print(f"test score: {test_score}, train score: {train_score}, RS: {i}")
###Output
test score: 0.8679496366055891, train score: 0.867888311621836, RS: 37
###Markdown
Cross validation test
###Code
from sklearn.model_selection import cross_val_score
cv_score = cross_val_score(estimator=model_1,X=final_feature,y=label,cv=5)
print("Minimum score: ",np.min(cv_score))
print("Average score: ",np.average(cv_score))
print("Maximum score: ",np.max(cv_score))
#Final model training
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test = train_test_split(final_feature,label, test_size=0.2,random_state=37)
from xgboost import XGBRFClassifier
model_1 = XGBRFClassifier(learning_rate=0.1,
max_depth=10,
n_estimators=5)
model_1.fit(x_train,y_train)
train_score = model_1.score(x_train,y_train)
test_score = model_1.score(x_test,y_test)
print(f"test score: {test_score}, train score: {train_score}")
###Output
test score: 0.8679496366055891, train score: 0.867888311621836
###Markdown
Prediction and Testing
###Code
#Prediction
predictor = model_1.predict(final_feature)
from sklearn.metrics import confusion_matrix, classification_report
cm = confusion_matrix(label, predictor)
ax = plt.subplot()
sns.heatmap(cm,cbar=False,annot=True,cmap='Greens',fmt='g',ax=ax)
plt.xlabel('Prediction', fontsize=12)
plt.ylabel('Actual',fontsize=12)
plt.title('Confusion matrix')
plt.show()
report = classification_report(label, predictor)
print(report)
###Output
precision recall f1-score support
0 0.88 0.95 0.92 37155
1 0.80 0.60 0.68 11687
accuracy 0.87 48842
macro avg 0.84 0.78 0.80 48842
weighted avg 0.86 0.87 0.86 48842
###Markdown
AUC and ROC Curve
###Code
from sklearn.metrics import roc_auc_score
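# Note: the AUC below is computed from hard 0/1 class predictions; using predicted probabilities,
# e.g. model_1.predict_proba(final_feature)[:, 1], would typically give a smoother, more informative ROC curve.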
auc = roc_auc_score(label,predictor)
print(f"Area Under the Curve: {auc}")
#Visualisation
from sklearn.metrics import roc_curve
fpr,tpr,threshold = roc_curve(label, predictor)
plt.plot(fpr,tpr,'g',linewidth=2,label='ROC Curve (auc= %0.2f)' %auc)
plt.plot([0,1],[0,1],'bo--',linewidth=2,label='Skill line')
plt.xlabel('False Postive Rate', fontsize=12)
plt.ylabel('True Positive Rate', fontsize=12)
plt.title('ROC Curve', fontsize=14)
plt.legend()
plt.show()
import pickle
pickle.dump(model_1, open('XGBoost.model','wb'))
pickle.dump(le_workclass, open("le_workclass",'wb'))
pickle.dump(le_education, open("le_education",'wb'))
pickle.dump(le_marital, open("le_marital",'wb'))
pickle.dump(le_occupation, open("le_occupation",'wb'))
pickle.dump(le_relationship, open("le_relationship",'wb'))
pickle.dump(le_race, open("le_race",'wb'))
pickle.dump(le_sex, open("le_sex",'wb'))
pickle.dump(le_native, open("le_native",'wb'))
pickle.dump(ohe, open("OneHotEncoder",'wb'))
###Output
_____no_output_____ |
jupyter_russian/projects_individual/project_us_railway_accident_analysis.ipynb | ###Markdown
Analysis of US railway accidents in 2013 and the related insurance claims 1. Description of the dataset and its features
In this project we explore data on US freight rail incidents in 2013 and the corresponding insurance claims filed by the carriers. The data was taken from Cisco Data Explore. The dataset contains the following features:
| Feature | Description |
| --- | --- |
| DEPARTURE CITY | City the car (load) departed from |
| DEPARTURE STATE | State the car (load) departed from |
| DEPARTURE CARRIER | Departure carrier (the shipper) |
| ARRIVAL CITY | City the car (load) arrived in |
| ARRIVAL STATE | State the car (load) arrived in |
| ARRIVAL CARRIER | Arrival carrier (the receiving party) |
| RAIL SPEED SPEED | Rail speed type |
| RAIL CAR TYPE TYPE | Rail car type |
| RAIL OWNERSHIP OWNERSHIP | Rail ownership type |
| RAIL CARLOAD LOAD | Load type |
| DEPEARTURE DATE | Departure date |
| ARRIVAL DATE | Arrival date |
| CAR VALUE | Car value, USD |
| DAMAGED | Damage amount, USD |
| WEIGHT | Load weight |
| FUEL USED | Amount of fuel used |
| PROPER DESTINATION | Flag for whether the destination was correct |
| MILES | Distance traveled |
| # OF STOPS | Number of stops along the way |
The goal of this project is to try to predict the amount of damage resulting from an incident, as well as to provide other useful information. The value of the results: for insurance companies, information that allows them to price insurance more flexibly for particular companies, loads, routes, etc.; for the carriers themselves, a way to forecast transportation costs and to choose safer routes, times, and other parameters for moving their loads. 2. Initial data analysis Import all the required libraries:
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer
from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression, LinearRegression
import xgboost as xgb
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's take a look at our data:
###Code
data_file = '../../data/Rail_Insurance_Claims.csv'
data = pd.read_csv(data_file, sep=',', parse_dates=['DEPEARTURE DATE','ARRIVAL DATE'])
data.head()
data.info()
data.describe()
data.describe(include=['object'])
data.describe(include='datetime64')
###Output
_____no_output_____
###Markdown
We can see that most features in the dataset are categorical; the target variable and the fuel amount are real-valued, there are two datetime features and several numeric ones. There are no missing values. The data also contains many paired DEPARTURE/ARRIVAL features. This leads to the following interim conclusions:
* The categorical features will need to be encoded (OHE, mean target encoding).
* A lot of extra information can be extracted from the paired features to create new features related to the route, time en route, etc.
* At first glance there are no missing values, but the categorical features should be checked for implicit missing values such as N/A, unknown, etc.
* New relative features can be derived from the numeric ones.
* Additional information can also be extracted from the datetime features.
* The data contains both CAR VALUE and DAMAGED, so it makes sense to predict not the raw damage but its percentage of the car value.
###Code
data['damaged percent'] = data['DAMAGED'] / data['CAR VALUE']
data['damaged percent'].describe()
###Output
_____no_output_____
###Markdown
The values lie in the interval from 0 to 1, so the feature is valid. Note also that the 75th percentile is an order of magnitude smaller than the maximum, which suggests that the target variable is heavily imbalanced. Let's look at the distributions of the categorical features and the target variable by building a few pivot tables over the categorical features.
###Code
(data.pivot_table( ['damaged percent'], ['DEPARTURE STATE'], ['ARRIVAL STATE'], aggfunc='mean')) * 100
###Output
_____no_output_____
###Markdown
Let's look at the distribution by departure state:
###Code
print(data['DEPARTURE STATE'].value_counts())
###Output
_____no_output_____
###Markdown
Let's create additional datetime features and check whether the damage for departure state TN is skewed by day or month:
###Code
data['Dep Month'] = data['DEPEARTURE DATE'].dt.month
data['Dep Day'] = data['DEPEARTURE DATE'].dt.day
(data[data['DEPARTURE STATE'] == 'TN'].pivot_table( ['damaged percent'], ['Dep Month'], ['Dep Day'], aggfunc='mean')) * 100
###Output
_____no_output_____
###Markdown
Let's continue with the other categories:
###Code
(data.pivot_table( ['damaged percent'], ['DEPARTURE CARRIER'], ['ARRIVAL CARRIER'], aggfunc='mean')) * 100
(data.pivot_table( ['damaged percent'], ['RAIL CARLOAD LOAD'], ['RAIL OWNERSHIP OWNERSHIP'], aggfunc='mean')) * 100
(data.pivot_table(['damaged percent'], ['RAIL CAR TYPE TYPE'], ['RAIL SPEED SPEED'], aggfunc='mean')) * 100
(data.pivot_table(['damaged percent'], ['PROPER DESTINATION'], aggfunc='mean')) * 100
###Output
_____no_output_____
###Markdown
Let's build the correlation matrix:
###Code
corr_m = data.corr()
round((corr_m), 2)
###Output
_____no_output_____
###Markdown
Let's check the target variable for normality and skewness:
###Code
from scipy.stats import shapiro, skewtest, skew
print('Normality: {}'.format(shapiro(data['damaged percent'])))
print('Skewness: {}'.format(skew(data['damaged percent'])))
###Output
_____no_output_____
###Markdown
Let's check whether the categorical features contain values that effectively stand for missing data:
###Code
for c in data.select_dtypes(include=['object']):
print('{}: {}'.format(c, data[c].unique()))
###Output
_____no_output_____
###Markdown
Conclusions. In addition to the earlier observations:
* Cars departing from state TN suffer much larger damage than those from all other states, but judging by the distribution of the number of trips from this state and the average losses by date, it does not look like an outlier. It may turn out to be a good predictor.
* Departure carriers CN and NS suffer somewhat more than the others, although within one standard deviation.
* Nothing unusual is seen in how the target is distributed across the remaining categories.
* Car value, cargo weight, fuel used, and the number of stops correlate slightly with the target variable.
* The target variable is not normally distributed and is strongly skewed with a heavy tail, so it will need to be transformed.
* No outliers or missing values were found.
* The PROPER DESTINATION variable needs to be converted to [0, 1].

3. Initial visual data analysis

First, let's add the missing datetime features and convert the PROPER DESTINATION variable:
###Code
data['Dep DayOfWeek'] = data['DEPEARTURE DATE'].dt.weekday
data['Dep weekend'] = data['Dep DayOfWeek'].isin([5,6]).astype('int')
data['Arr Month'] = data['ARRIVAL DATE'].dt.month
data['Arr Day'] = data['ARRIVAL DATE'].dt.day
data['Arr DayOfWeek'] = data['ARRIVAL DATE'].dt.weekday
data['Arr weekend'] = data['Arr DayOfWeek'].isin([5,6]).astype('int')
data['proper_dest'] = data['PROPER DESTINATION'].map({'Yes': 1, 'No':0})
data['duration'] = (data['ARRIVAL DATE'] - data['DEPEARTURE DATE']).dt.days
###Output
_____no_output_____
###Markdown
Let's display the correlation matrix:
###Code
c_m = data.corr()
plt.figure(figsize=(15, 15))
sns.heatmap(np.abs(c_m), annot=True, fmt=".2f", linewidths=.5)
###Output
_____no_output_____
###Markdown
There is a correlation between the departure day and the arrival day and between the departure month and the arrival month, which points to the existence of a schedule; the correlation between the day of week and the weekend flag is expected, as is the one between fuel used and cargo weight. The correlation between the proper-destination flag and weight is interesting and deserves further study. The added features correlate slightly with the target variable. The DAMAGED variable should be dropped, since it can be recovered from CAR VALUE and damaged percent.
###Code
data.drop(['DAMAGED'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
Let's plot the density distributions of the numeric variables and the value counts of the categorical ones:
###Code
clmns = data.select_dtypes(exclude=['object','datetime64', 'bool']).columns
f, axarr = plt.subplots(ncols=1, nrows=len(clmns), figsize=(15, 40))
for c in clmns:
sns.distplot(data[c], ax=axarr[list(clmns).index(c)])
plt.tight_layout()
categories = data.select_dtypes('object').columns
f, axarr = plt.subplots(ncols=1, nrows=len(categories), figsize=(15, 40))
i = 0
for cat in categories:
g = sns.countplot(x=cat, data=data, ax=axarr[i], palette="Blues_d")
r = 30
if (cat.endswith('CITY')):
r = 90
g.set_xticklabels(g.get_xticklabels(), rotation=r)
i += 1
plt.tight_layout(h_pad=0.5)
categories = list(data.select_dtypes('object').columns) + ['# OF STOPS', 'Dep Month', 'Dep Day', 'Dep DayOfWeek', 'Arr Month', 'Arr Day', 'Arr DayOfWeek', 'Dep weekend', 'Arr weekend']
f, axarr = plt.subplots(ncols=1, nrows=len(categories), figsize=(15, 80))
i = 0
for cat in categories:
g = sns.barplot(x=cat, y='damaged percent', data=data, ax=axarr[i])
r = 30
if (cat.endswith('CITY')):
r = 90
g.set_xticklabels(g.get_xticklabels(), rotation=r)
i += 1
plt.tight_layout(h_pad=0.5)
state_tbl = (data.pivot_table(['damaged percent'], ['DEPARTURE STATE'], ['ARRIVAL STATE'], aggfunc='mean')) * 100
plt.figure(figsize=(15, 15))
sns.heatmap(state_tbl)
###Output
_____no_output_____
###Markdown
Conclusions. The visual analysis confirms the patterns identified in the previous part. The distributions of the variables do not point to any outliers. Almost every feature value has a different mean damaged percent, i.e. they should all be taken into account in the prediction.

4. Insights and patterns

Some of the patterns were described above. We should also create route-related features, encode some of them with OHE, and apply mean target encoding to the routes themselves.

5. Choice of metric and model

Since the data contain no outliers and the task is regression, MSE or $R^2$ can be used; $R^2$ is essentially one minus a normalized MSE, so the two convey the same information, and we will report $R^2$ (together with RMSE) for readability. As models we will compare linear regression and gradient boosting: both are suitable for regression, and gradient boosting has a good track record on problems with mixed feature types. For linear regression we will scale the features.

6. Data preprocessing and feature engineering

Part of the preprocessing was already done above for the visualizations. Let's create the route-related features:
###Code
data['interstate'] = (data['DEPARTURE STATE'] != data['ARRIVAL STATE']).astype('int')
data['intercity'] = (data['DEPARTURE CITY'] != data['ARRIVAL CITY']).astype('int')
data['intercarrier'] = (data['DEPARTURE CARRIER'] != data['ARRIVAL CARRIER']).astype('int')
data['city route'] = data['DEPARTURE CITY'] + data['ARRIVAL CITY']
data['state route'] = data['DEPARTURE STATE'] + data['ARRIVAL STATE']
data['carrier route'] = data['DEPARTURE CARRIER'] + data['ARRIVAL CARRIER']
###Output
_____no_output_____
###Markdown
Let's create a relative feature, fuel consumption per mile:
###Code
data['FUEL PER MILE'] = (data['FUEL USED']/data['MILES'])
###Output
_____no_output_____
###Markdown
Let's write a function for mean target encoding:
###Code
def mean_target_enc(train_df, y_train, valid_df, cat_features, skf):
import warnings
warnings.filterwarnings('ignore')
target_name = y_train.name
glob_mean = y_train.mean()
train_df = pd.concat([train_df, pd.Series(y_train, name='y')], axis=1)
new_train_df = train_df.copy()
for col in cat_features:
new_train_df[col + '_mean_' + target_name] = [glob_mean for _ in range(new_train_df.shape[0])]
for train_idx, valid_idx in skf.split(train_df, y_train):
train_df_cv, valid_df_cv = train_df.iloc[train_idx, :], train_df.iloc[valid_idx, :]
for col in cat_features:
means = valid_df_cv[col].map(train_df_cv.groupby(col)['y'].mean())
valid_df_cv[col + '_mean_' + target_name] = means.fillna(glob_mean)
new_train_df.iloc[valid_idx] = valid_df_cv
new_train_df.drop(['y'], axis=1, inplace=True)
for col in cat_features:
means = valid_df[col].map(train_df.groupby(col)['y'].mean())
valid_df[col + '_mean_' + target_name] = means.fillna(glob_mean)
# valid_df.drop(cat_features, axis=1, inplace=True)
return new_train_df, valid_df
###Output
_____no_output_____
###Markdown
Let's apply one-hot (OHE) encoding to the categorical features, except for the route features:
###Code
data_ohe = pd.get_dummies(data, columns=['DEPARTURE CITY', 'DEPARTURE STATE', 'DEPARTURE CARRIER',
'ARRIVAL CITY', 'ARRIVAL STATE', 'ARRIVAL CARRIER', 'RAIL SPEED SPEED',
'RAIL CAR TYPE TYPE', 'RAIL OWNERSHIP OWNERSHIP', 'RAIL CARLOAD LOAD'])
###Output
_____no_output_____
###Markdown
Let's apply mean target encoding to the routes:
###Code
data_ohe, _ = mean_target_enc(data_ohe, (data['damaged percent']*10000).astype('int'), data_ohe[-1:], ['city route', 'state route', 'carrier route'], StratifiedKFold(5, shuffle=True, random_state=17))
###Output
_____no_output_____
###Markdown
Let's drop the features we no longer need and extract the target variable:
###Code
y = data['damaged percent']
data_ohe.drop(['DEPEARTURE DATE', 'ARRIVAL DATE', 'FUEL USED', 'city route', 'state route', 'carrier route', 'PROPER DESTINATION'], axis=1, inplace=True)
data_ohe.drop(['damaged percent'], axis=1, inplace=True)
import scipy.stats as stats
stats.probplot(y, dist="norm", plot=plt)
stats.probplot(np.log(y), dist="norm", plot=plt)
stats.probplot(StandardScaler().fit_transform(y.values.reshape(-1,1).astype(np.float64)).flatten(), dist="norm", plot=plt)
y = np.log(y)
y.hist()
###Output
_____no_output_____
###Markdown
Let's split the data into training, validation, and test sets. Since the data are balanced, a random split is sufficient. We also scale the features.
###Code
st = StandardScaler()
X_train, X_val, y_train, y_val = train_test_split(data_ohe, y, test_size=0.3, random_state=1)
X_train_st = st.fit_transform(X_train)
X_val_st = st.transform(X_val)
X_val_st, X_test_st, y_val, y_test = train_test_split(X_val_st, y_val, test_size=0.3333, random_state=1)
lr = LinearRegression(n_jobs=-1)
cv_sc = cross_val_score(lr, X_train, y_train, cv=5, n_jobs=-1)
cv_sc
lr.fit(X_train_st, y_train)  # fit on the scaled features, consistent with the scaled X_val_st / X_test_st used for prediction below
lr_pred_val = lr.predict(X_val_st)
r2_score(y_val, lr_pred_val)
lr_test_pred = lr.predict((X_test_st))
print(r2_score(y_test, lr_test_pred))
np.sqrt(mean_squared_error(y_test, lr_test_pred))
plt.figure(figsize=(15, 10))
plt.plot(np.exp(y_test).values[-200:], 'b')
plt.plot(np.exp(lr_test_pred)[-200:], 'g')
###Output
_____no_output_____
###Markdown
We can see that linear regression did not work well. Let's try XGBoost.
###Code
dtrain = xgb.DMatrix(X_train_st, label=y_train)  # y_train is already log-transformed (and mostly negative), so taking its square root would produce NaNs
dtest = xgb.DMatrix(X_val_st)
params = {
'objective':'reg:linear',
'max_depth':5,
'silent':1,
'nthread': 8,
# 'booster': 'dart',
# 'eta':0.5,
# 'gamma': 0.1,
# 'lambda': 20,
# 'alpha': 0.5
}
num_rounds = 100
xgb_ = xgb.train(params, dtrain, num_rounds)
xgb__pred = xgb_.predict(dtest)
r2_score(y_val, (xgb__pred))
np.sqrt(mean_squared_error(y_val, xgb__pred))  # RMSE on the log scale
np.sqrt(mean_squared_error(np.exp(y_val), np.exp(xgb__pred)))  # RMSE back on the original (damaged percent) scale
xgb_test_pred = xgb_.predict(xgb.DMatrix(X_test_st))
print(r2_score(y_test, xgb_test_pred))
np.sqrt(mean_squared_error(y_test, xgb_test_pred))
plt.figure(figsize=(15, 10))
plt.plot(np.exp(y_test).values[-200:], 'b')
plt.plot(np.exp(xgb_test_pred)[-200:], 'g')
###Output
_____no_output_____ |
notebooks/demosaic_ppp_bm3d_admm.ipynb | ###Markdown
Image Demosaicing (ADMM Plug-and-Play Priors w/ BM3D)=====================================================This example demonstrates the use of the ADMM Plug-and-Play Priors (PPP) algorithm for solving a raw image demosaicing problem.
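Conceptually, the PPP reconstruction corresponds to solving $\hat{x} = \arg\min_{x} \; \tfrac{1}{2} \| A x - y \|_2^2 + \lambda g(x)$ via ADMM, where $A$ is the color filter array sampling operator defined below and the proximal step of the (implicit) regularizer $g$ is replaced by the BM3D denoiser.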
###Code
import numpy as np
import jax
from bm3d import bm3d_rgb
from colour_demosaicing import demosaicing_CFA_Bayer_Menon2007
import scico
import scico.numpy as snp
import scico.random
from scico import functional, linop, loss, metric, plot
from scico.data import kodim23
from scico.optimize.admm import ADMM, LinearSubproblemSolver
from scico.util import device_info
plot.config_notebook_plotting()
###Output
_____no_output_____
###Markdown
Read a ground truth image.
###Code
img = kodim23(asfloat=True)[160:416, 60:316]
img = jax.device_put(img) # convert to jax type, push to GPU
###Output
_____no_output_____
###Markdown
Define demosaicing forward operator and its transpose.
###Code
def Afn(x):
"""Map an RGB image to a single channel image with each pixel
representing a single colour according to the colour filter array.
"""
y = snp.zeros(x.shape[0:2])
y = y.at[1::2, 1::2].set(x[1::2, 1::2, 0])
y = y.at[0::2, 1::2].set(x[0::2, 1::2, 1])
y = y.at[1::2, 0::2].set(x[1::2, 0::2, 1])
y = y.at[0::2, 0::2].set(x[0::2, 0::2, 2])
return y
def ATfn(x):
"""Back project a single channel raw image to an RGB image with zeros
at the locations of undefined samples.
"""
y = snp.zeros(x.shape + (3,))
y = y.at[1::2, 1::2, 0].set(x[1::2, 1::2])
y = y.at[0::2, 1::2, 1].set(x[0::2, 1::2])
y = y.at[1::2, 0::2, 1].set(x[1::2, 0::2])
y = y.at[0::2, 0::2, 2].set(x[0::2, 0::2])
return y
###Output
_____no_output_____
###Markdown
Define a baseline demosaicing function based on the demosaicing algorithm from the [colour_demosaicing](https://github.com/colour-science/colour-demosaicing) package.
###Code
def demosaic(cfaimg):
"""Apply baseline demosaicing."""
return demosaicing_CFA_Bayer_Menon2007(cfaimg, pattern="BGGR").astype(np.float32)
###Output
_____no_output_____
###Markdown
Create a test image by color filter array sampling and adding Gaussianwhite noise.
###Code
s = Afn(img)
rgbshp = s.shape + (3,) # shape of reconstructed RGB image
σ = 2e-2 # noise standard deviation
noise, key = scico.random.randn(s.shape, seed=0)
sn = s + σ * noise
###Output
_____no_output_____
###Markdown
Compute a baseline demosaicing solution.
###Code
imgb = jax.device_put(bm3d_rgb(demosaic(sn), 3 * σ).astype(np.float32))
###Output
_____no_output_____
###Markdown
Set up an ADMM solver object. Note the use of the baseline solution as an initializer. We use BM3D as the denoiser, using the [code](https://pypi.org/project/bm3d) released with it.
###Code
A = linop.LinearOperator(input_shape=rgbshp, output_shape=s.shape, eval_fn=Afn, adj_fn=ATfn)
f = loss.SquaredL2Loss(y=sn, A=A)
C = linop.Identity(input_shape=rgbshp)
g = 1.8e-1 * 6.1e-2 * functional.BM3D(is_rgb=True)
ρ = 1.8e-1 # ADMM penalty parameter
maxiter = 12 # number of ADMM iterations
solver = ADMM(
f=f,
g_list=[g],
C_list=[C],
rho_list=[ρ],
x0=imgb,
maxiter=maxiter,
subproblem_solver=LinearSubproblemSolver(cg_kwargs={"tol": 1e-3, "maxiter": 100}),
itstat_options={"display": True},
)
###Output
_____no_output_____
###Markdown
Run the solver.
###Code
print(f"Solving on {device_info()}\n")
x = solver.solve()
hist = solver.itstat_object.history(transpose=True)
###Output
Solving on GPU (NVIDIA GeForce RTX 2080 Ti)
###Markdown
Show reference and demosaiced images.
###Code
fig, ax = plot.subplots(nrows=1, ncols=3, sharex=True, sharey=True, figsize=(21, 7))
plot.imview(img, title="Reference", fig=fig, ax=ax[0])
plot.imview(imgb, title="Baseline demoisac: %.2f (dB)" % metric.psnr(img, imgb), fig=fig, ax=ax[1])
plot.imview(x, title="PPP demoisac: %.2f (dB)" % metric.psnr(img, x), fig=fig, ax=ax[2])
fig.show()
###Output
_____no_output_____
###Markdown
Plot convergence statistics.
###Code
plot.plot(
snp.vstack((hist.Prml_Rsdl, hist.Dual_Rsdl)).T,
ptyp="semilogy",
title="Residuals",
xlbl="Iteration",
lgnd=("Primal", "Dual"),
)
###Output
_____no_output_____ |
Data Analytics/Topic 3 - One parameter models/Single parameters models/Flight data/Airlines.ipynb | ###Markdown
Airline fatalities 1976-1985We consider the number of fatal accidents and deaths on scheduled airline flights per year over a ten-year period (*Source: Gelman et al. 2014, reproduced from the Statistical Abstract of the United States*). Our goal is to create a model predicting these numbers for 1986.
###Code
import sys
sys.path.append('../')
import pystan
import stan_utility
import arviz as az
import numpy as np
import scipy.stats as stats
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
light="#FFFCDC"
light_highlight="#FEF590"
mid="#FDED2A"
mid_highlight="#f0dc05"
dark="#EECA02"
dark_highlight="#BB9700"
green="#00FF00"
light_grey="#DDDDDD"
plt.style.context('seaborn-white')
mpl.rcParams['figure.dpi']= 200
dts=[24,734,25,516,31,754,31,877,22,814,21,362,26,764,20,809,16,223,22,1066]
c1=dts[::2]
c2=dts[1::2]
Airline_data=pd.DataFrame({'Year':[1976,1977,1978,1979,1980,1981,1982,1983,1984,1985],
'Fatal accidents':c1,
'Passenger deaths':c2,
'Death rate':[0.19,0.12,0.15,0.16,0.14,0.06,0.13,0.13,0.03,0.15]}).set_index('Year')
Airline_data['Miles flown [100 mln miles]']=np.round(Airline_data['Passenger deaths']/Airline_data['Death rate'])
## generation of vector for plotting samples under histograms
acc=[]
dta_cnt=[]
for k in Airline_data['Fatal accidents']:
dta_cnt.append(-(1.+acc.count(k)))
acc.append(k)
dta_cnt=np.array(dta_cnt)
Airline_data
###Output
_____no_output_____
###Markdown
Model for accidents

We start our modelling with a very simple model, assuming that the number of fatal accidents $y_i$ follows a Poisson distribution $$y_i\sim\mathrm{Poisson}(\lambda)$$ with a rate $\lambda$ that does not depend on the year or on the miles flown.

Prior for the fatal-accident rate

We assume that having a fatal accident every day would be very improbable. For a Poisson distribution the mean is $\lambda$ and the standard deviation is $\sqrt{\lambda}$, so keeping the rate roughly three standard deviations below one accident per day (about a 1% exceedance probability) requires $$\lambda+3\sqrt{\lambda}\approx365$$ We then need to assign a prior under which the probability of $\lambda$ being below this bound is 99%.
###Code
root_of_lam=np.polynomial.polynomial.polyroots([-365.,3.,1.])
lam_ub=np.round(root_of_lam[root_of_lam>0]**2)
print(lam_ub)
###Output
[312.]
###Markdown
Prior tuning in Stan

Using Stan's algebra solver we can solve nonlinear equations. In particular, it can be used to find distribution parameters that fulfill the conditions we have specified. For example, we can find $\sigma$ for a HalfNormal distribution.
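As a quick cross-check, the same scale can be computed in closed form: for a HalfNormal$(0,\sigma)$ prior we need $P(\lambda < 312) = 0.99$, i.e. $\sigma = 312/\Phi^{-1}(0.995)$, which the Stan solver below should reproduce (assuming its program encodes the same condition).
###Code
# Closed-form cross-check of the half-normal scale (added for illustration)
sigma_check = lam_ub[0] / stats.norm.ppf((1 + 0.99) / 2)
print('sigma (closed form): {:5.1f}'.format(sigma_check))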
###Code
with open('prior_tune3.stan', 'r') as file:
print(file.read())
tuning2=stan_utility.compile_model('prior_tune3.stan')
data=dict(y_guess=np.array([np.log(100)]),theta=np.array(lam_ub))
tuned2 = tuning2.sampling(data=data,
seed=1052020,
algorithm="Fixed_param",
iter=1,
warmup=0,
chains=1)
sigma = np.round(tuned2.extract()['sigma'][0])
print(sigma)
fig, ax2 = plt.subplots(1, 1,figsize=(7, 4))
x2=np.linspace(0,3*sigma,1000)
x4=np.linspace(0,lam_ub[0],1000)
ax2.plot(x2,2*stats.norm.pdf(x2,scale=sigma),color=dark,linewidth=2)
ax2.fill_between(x4,2*stats.norm.pdf(x4,scale=sigma),0,color=dark)
ax2.set_yticks([])
ax2.set_xticks([0,lam_ub[0]])
ax2.set_title(r'$\lambda$')
plt.show()
###Output
_____no_output_____
###Markdown
Prior predictive distributionWe can use Stan to simulate possible outputs and parameters based only on prior information.
###Code
with open('airline_FA_hnorm_ppc.stan', 'r') as file:
print(file.read())
model_prior=stan_utility.compile_model('airline_FA_hnorm_ppc.stan')
R=1000
sim_uf=model_prior.sampling(data={'M':1},
algorithm="Fixed_param",
iter=R,
warmup=0,
chains=1,
refresh=R,
seed=29042020)
params=sim_uf.extract()
theta=params['lambda']
y_sim=params['y_sim']
fig, axes = plt.subplots(2, 1,figsize=(7, 8))
ax1=axes[0]
ax1.hist(theta,bins=20,color=dark,edgecolor=dark_highlight,density=True)
x=np.linspace(0,350,2000)
ax1.set_xticks([0,lam_ub[0]])
ax1.set_yticks([])
ax1.set_title(r'$\lambda$')
ax1.plot(x,2*stats.norm.pdf(x,0,sigma),color='black',linestyle='--')
arr_y_loc = 2*stats.norm.pdf(150,0,sigma)
ax1.annotate('HalfNormal(0,'+str(int(sigma))+')',xy=(150,arr_y_loc),xytext=(200,1.5*arr_y_loc),arrowprops={'arrowstyle':'->'})
ax2=axes[1]
ax2.hist(y_sim.flatten(),color=dark,edgecolor=dark_highlight,density=True,bins=20,zorder=1)
ax2.scatter(acc,0.0002*dta_cnt,color='black',marker='.',zorder=2)
ax2.set_xticks([0,365])
ax2.set_yticks([])
ax2.set_title('No. of accidents')
plt.show()
###Output
_____no_output_____
###Markdown
Posterior inference and posterior predictive checks
###Code
with open('airline_FA_hnorm_fit.stan', 'r') as file:
print(file.read())
model=stan_utility.compile_model('airline_FA_hnorm_fit.stan')
data = dict(M = len(Airline_data),
y = Airline_data['Fatal accidents'])
fit = model.sampling(data=data, seed=8052020)
params=fit.extract()
lam=params['lambda']
y_sim=params['y_sim']
mean_lam = np.mean(lam)
cinf_lam = az.hpd(lam,0.89)
hpd_width=cinf_lam[1]-cinf_lam[0]
print('Mean lambda : {:4.2f}'.format(mean_lam))
print('89% confidence interval: [',*['{:4.2f}'.format(k) for k in cinf_lam],']')
fig, axes = plt.subplots(2, 1,figsize=(7, 8))
ax1=axes[0]
ax1.hist(lam,bins=20,color=dark,edgecolor=dark_highlight,density=True)
x=np.linspace(0,350,1000)
#ax1.plot(x,2*stats.t.pdf(x,5,0,10),color='black',linestyle='--')
ax1.plot(x,2*stats.norm.pdf(x,0,sigma),color='black',linestyle='--')
arr_y_loc = 2*stats.norm.pdf(50,0,sigma)
ax1.annotate('Prior',xy=(50,arr_y_loc),xytext=(100,10*arr_y_loc),arrowprops={'arrowstyle':'->'})
ax1.set_xticks([0,lam_ub[0]])
ax1.set_yticks([])
ax1.set_title(r'$\lambda$')
ax_sm=plt.axes([0.5,0.6,0.35,0.2])
x_sm=np.linspace(cinf_lam[0]-hpd_width,cinf_lam[1]+hpd_width,200)
ax_sm.hist(lam,bins=20,color=dark,edgecolor=dark_highlight,density=True)
ax_sm.plot(x_sm,2*stats.norm.pdf(x_sm,0,sigma),color='black',linestyle='--')
ax_sm.annotate(s='', xy=(cinf_lam[0]-.2,0.2), xytext=(cinf_lam[1]+.2,0.2), arrowprops=dict(arrowstyle='<->'))
ax_sm.plot([cinf_lam[0],cinf_lam[0]],[0,0.3],color='black',linestyle='-',linewidth=0.5)
ax_sm.plot([cinf_lam[1],cinf_lam[1]],[0,0.3],color='black',linestyle='-',linewidth=0.5)
ax_sm.set_xticks(np.round([cinf_lam[0],cinf_lam[1]],2))
ax_sm.set_yticks([])
ax_sm.set_title(r'$\lambda$ HPD')
ax2=axes[1]
ax2.hist(y_sim.flatten(),color=dark,edgecolor=dark_highlight,density=True,bins=20,zorder=1)
ax2.scatter(acc,0.002*dta_cnt,color='black',marker='.',zorder=2)
ax2.set_xticks([0,np.max(y_sim)])
ax2.set_yticks([])
ax2.set_title('No. of accidents')
plt.show()
###Output
_____no_output_____
###Markdown
Using the model for predictionIn 1986, there were **22** fatal accidents, **546** passenger deaths, and a death rate of **0.06** per 100 million miles flown. Let's check how well our model performs on this prediction. To predict the 1986 value we just need to use the posterior predictive distribution of `y_sim`.
###Code
median_y_sim = np.median(y_sim.flatten())
cinf_y_sim = az.hpd(y_sim.flatten(),0.89)
print('Median of predicted accidents =',median_y_sim)
print('Confidence interval = [',*cinf_y_sim,']')
###Output
Median of predicted accidents = 24.0
Confidence interval = [ 15.0 31.0 ]
###Markdown
Modelling accidents, accounting for miles flown

It is reasonable to expect that the number of accidents is related to the number of miles flown. We can still use the Poisson model, but decompose the rate $\lambda$ into an intensity $\theta$ and an exposure $n$, i.e. $$y_i\sim\mathrm{Poisson}(\theta n)$$ with $n$ being the number of miles flown (in units of 100 million miles).

Prior for the fatal-accident intensity

We still assume that having a fatal accident every day would be very improbable. Our previous argument remains valid, but to compute the bound we use $\lambda=\theta\cdot\bar{n}$, with $\bar{n}$ being the mean number of miles flown. This gives the condition $$\theta\cdot\bar{n}+3\sqrt{\theta\cdot\bar{n}}\approx365$$ We need to assign a prior for $\theta$ under which the probability of a smaller $\lambda$ equals 99%.
###Code
mean_miles=np.mean(Airline_data['Miles flown [100 mln miles]'])
root_of_theta=np.polynomial.polynomial.polyroots([-365/mean_miles,3./np.sqrt(mean_miles),1.])
theta_ub=(root_of_theta[root_of_theta>0]**2)  # mask with root_of_theta itself, not root_of_lam
print('theta upper bound','{:4.3f}'.format(theta_ub[0]))
data=dict(y_guess=np.array([np.log(0.01)]),theta=np.array(theta_ub))
tuned2 = tuning2.sampling(data=data,
seed=1052020,
algorithm="Fixed_param",
iter=1,
warmup=0,
chains=1)
sigma = (tuned2.extract()['sigma'][0])
print('sigma =','{:4.3f}'.format(sigma))
fig, ax2 = plt.subplots(1, 1,figsize=(7, 4))
x2=np.linspace(0,3*sigma,1000)
x4=np.linspace(0,theta_ub[0],1000)
ax2.plot(x2,2*stats.norm.pdf(x2,scale=sigma),color=dark,linewidth=2)
ax2.fill_between(x4,2*stats.norm.pdf(x4,scale=sigma),0,color=dark)
ax2.set_yticks([])
ax2.set_xticks([0,theta_ub[0]])
ax2.set_xticklabels([0,0.055])
ax2.set_title(r'$\theta$')
plt.show()
###Output
_____no_output_____
###Markdown
Prior predictive distributionWe can use Stan to simulate possible outputs and parameters based only on prior information.
###Code
with open('airline_FA_miles_hnorm_ppc.stan', 'r') as file:
print(file.read())
model_prior=stan_utility.compile_model('airline_FA_miles_hnorm_ppc.stan')
R=1000
data_prior=dict(M=len(Airline_data),miles=Airline_data['Miles flown [100 mln miles]'].to_numpy())
sim_uf=model_prior.sampling(data=data_prior,algorithm="Fixed_param", iter=R, warmup=0, chains=1, refresh=R,
seed=29042020)
params=sim_uf.extract()
theta=params['theta']
#y_sim=params['y_sim']
fig, axes = plt.subplots(1, 1,figsize=(7, 4))
ax1=axes
ax1.hist(theta,bins=20,color=dark,edgecolor=dark_highlight,density=True)
x=np.linspace(0,1.2*theta_ub[0],2000)
ax1.set_xticks([0,theta_ub[0]])
ax1.set_xticklabels([0,np.round(theta_ub[0],3)])
ax1.set_yticks([])
ax1.set_title(r'$\theta$')
ax1.plot(x,2*stats.norm.pdf(x,0,sigma),color='black',linestyle='--')
arr_y_loc = 2*stats.norm.pdf(0.025,0,sigma)
ax1.annotate('HalfNormal(0,'+'{:4.3f}'.format(sigma)+')',xy=(0.025,arr_y_loc),xytext=(0.04,1.5*arr_y_loc),arrowprops={'arrowstyle':'->'})
plt.show()
y_sim=params['y_sim']
fig, axes = plt.subplots(5, 2, figsize=(7, 8), sharey=True,squeeze=False)
axes_flat=axes.flatten()
for k in range(len(axes_flat)):
ax = axes_flat[k]
ax.hist(y_sim[:,k],bins=20,color=dark,edgecolor=dark_highlight,density=True)
ax.set_title(Airline_data.index[k])
tv=Airline_data['Fatal accidents'].iloc[k]
ax.plot([tv,tv],[0,0.02],linestyle='--',color='black')
ax.set_yticks([])
ax.set_xticks([0,tv,365])
ax.set_xticklabels(['',tv,365])
ax.set_ylim([0,0.012])
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Posterior inference and posterior predictive checks
###Code
with open('airline_FA_miles_hnorm_fit.stan', 'r') as file:
print(file.read())
model_miles=stan_utility.compile_model('airline_FA_miles_hnorm_fit.stan')
data = dict(M = len(Airline_data),
miles = Airline_data['Miles flown [100 mln miles]'],
y = Airline_data['Fatal accidents'])
fit = model_miles.sampling(data=data, seed=8052020)
params_miles=fit.extract()
theta=params_miles['theta']
y_sim=params_miles['y_sim']
mean_theta = np.mean(theta)
cinf_theta = az.hpd(theta,0.89)
hpd_width=cinf_theta[1]-cinf_theta[0]
print('Mean theta : {:5.4f}'.format(mean_theta))
print('89% confidence interval: [',*['{:5.4f}'.format(k) for k in cinf_theta],']')
#fig, axes = plt.subplots(2, 1,figsize=(7, 8))
fig, axes = plt.subplots(1, 1,figsize=(7, 4))
ax1=axes
ax1.hist(theta,bins=20,color=dark,edgecolor=dark_highlight,density=True)
x=np.linspace(0,1.2*theta_ub[0],2000)
ax1.set_xticks([0,theta_ub[0]])
ax1.set_xticklabels([0,np.round(theta_ub[0],3)])
ax1.set_yticks([])
ax1.set_title(r'$\theta$')
ax1.plot(x,2*stats.norm.pdf(x,0,sigma),color='black',linestyle='--')
arr_y_loc = 2*stats.norm.pdf(0.01,0,sigma)
ax1.annotate('Prior',xy=(0.01,arr_y_loc),xytext=(0.015,10*arr_y_loc),arrowprops={'arrowstyle':'->'})
ax_sm=plt.axes([0.5,0.3,0.35,0.4])
x_sm=np.linspace(cinf_theta[0]-hpd_width,cinf_theta[1]+hpd_width,200)
ax_sm.hist(theta,bins=20,color=dark,edgecolor=dark_highlight,density=True)
ax_sm.plot(x_sm,2*stats.norm.pdf(x_sm,0,sigma),color='black',linestyle='--')
ax_sm.annotate(s='', xy=(0.99*cinf_theta[0],1000), xytext=(1.01*cinf_theta[1],1000), arrowprops=dict(arrowstyle='<->'))
ax_sm.plot([cinf_theta[0],cinf_theta[0]],[0,1600],color='black',linestyle='-',linewidth=0.5)
ax_sm.plot([cinf_theta[1],cinf_theta[1]],[0,1600],color='black',linestyle='-',linewidth=0.5)
ax_sm.set_xticks(([cinf_theta[0],cinf_theta[1]]))
ax_sm.set_xticklabels(np.round([cinf_theta[0],cinf_theta[1]],4))
ax_sm.set_yticks([])
ax_sm.set_title(r'$\theta$ HPD')
plt.show()
y_sim=params_miles['y_sim']
fig, axes = plt.subplots(5, 2, figsize=(7, 8), sharey=True,squeeze=False)
axes_flat=axes.flatten()
for k in range(len(axes_flat)):
ax = axes_flat[k]
ax.hist(y_sim[:,k],bins=20,color=dark,edgecolor=dark_highlight,density=True)
ax.set_title(Airline_data.index[k])
tv=Airline_data['Fatal accidents'].iloc[k]
ax.plot([tv,tv],[0,0.15],linestyle='--',color='black')
#ax.set_yticks([])
ax.set_xticks([0,tv,50])
ax.set_xticklabels([0,tv,50])
ax.set_ylim([0,0.15])
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Using the model for predictionIn this situation the prediction is slightly more complicated, as it requires modifying the generated quantities block. As stated before, in 1986 there were **22** fatal accidents, **546** passenger deaths, and a death rate of **0.06** per 100 million miles flown.
###Code
miles1986=546/0.06
print(np.round(miles1986))
with open('airline_FA_miles_1986.stan', 'r') as file:
print(file.read())
model1986=stan_utility.compile_model('airline_FA_miles_1986.stan')
data = dict(M = len(Airline_data),
miles = Airline_data['Miles flown [100 mln miles]'],
y = Airline_data['Fatal accidents'])
fit1986 = model1986.sampling(data=data, seed=8052020)
y_1986=fit1986.extract()['y_1986']
median_y_1986 = np.median(y_1986)
cinf_y_1986 = az.hpd(y_1986,0.89)
print('Median of predicted accidents =',median_y_1986)
print('Confidence interval = [',*cinf_y_1986,']')
y_sim=params['y_sim']
fig, ax = plt.subplots(1, 1, figsize=(7, 4))
ax.hist(y_1986,bins=20,color=dark,edgecolor=dark_highlight,density=True)
ax.set_title('1986')
tv = 22
ax.plot([tv,tv],[0,0.07],linestyle='--',color='black')
ax.set_yticks([])
ax.set_xticks([0,tv,50])
ax.set_xticklabels(['0',tv,50])
ax.set_ylim([0,0.07])
plt.show()
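# Added cross-check: the posterior predictive draws for 1986 can also be generated
# directly in Python from the posterior samples of theta, which should mirror what
# the generated quantities block of the Stan model produces.
y_1986_py = np.random.poisson(params_miles['theta'] * miles1986)
print('Median of predicted accidents (direct simulation) =', np.median(y_1986_py))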
###Output
_____no_output_____ |
SQl_Alchemy_Juan.ipynb | ###Markdown
How to connect to a Database using Python First, the imports!
###Code
!pip install SQLAlchemy
!pip install psycopg2
!pip install psycopg2-binary
from sqlalchemy import create_engine
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
Create a db connection
###Code
USERNAME = 'postgres'
PASSWORD = 'postgres'
HOST = 'localhost'
PORT = '5432'
DBNAME = 'movies'
# SQLAlchemy 1.4+ expects the dialect name 'postgresql' (the bare 'postgres' alias was removed)
conn_string = f'postgresql://{USERNAME}:{PASSWORD}@{HOST}:{PORT}/{DBNAME}'
conn_string_short = f'postgresql://{HOST}:{PORT}/{DBNAME}'
db = create_engine(conn_string)
db
###Output
_____no_output_____
###Markdown
Write csvs to disk - (maybe already done?) Query disk
###Code
# SQL command - written in SQL, e.g.: SELECT * FROM movies LIMIT 5;
query = input()
#query to the db
results = db.execute(query)
results
list_of_results = results.fetchall()
#displaying the results of that query, plus doing stuff with the results
list_of_results
pd.DataFrame(list_of_results)
iter(list_of_results)
#list_of_results = iterable + iterator
def generator_function(values):
    # a generator function yields its items one at a time (lazily), much like the iterator above
    for x in values:
        yield x
###Output
_____no_output_____
###Markdown
Advanced SQLAlchemy - the ORM part! declarative base, sessionmaker, python Queries
###Code
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy import MetaData, Table, create_engine, and_, or_, not_
base = declarative_base(db)
Session = sessionmaker(db)
session = Session()
metadata = base.metadata
base.metadata.tables.keys()
Ratings = Table('ratings1', base.metadata, autoload = True)
Movies = Table('movies', base.metadata, autoload = True)
Links = Table('links', base.metadata, autoload=True)
Tags = Table('tags', base.metadata, autoload=True)
Ratings
base.metadata.tables.keys()
base.metadata.tables['movies'].columns.values()
# SQL equivalent of the following ORM query: select * from movies limit 5;
session.query(Movies).limit(5).all()
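# Added sketch: the imported and_/or_/not_ helpers combine filter conditions.
# The column names in the commented queries are assumptions (a typical MovieLens-style
# schema), so list the reflected columns first:
print(Movies.c.keys())
# session.query(Movies).filter(Movies.c.title.like('%Toy Story%')).limit(5).all()
# session.query(Ratings).filter(and_(Ratings.c.rating >= 4.0, Ratings.c.userId == 1)).limit(5).all()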
###Output
_____no_output_____ |
intro-tf/3-automatic-differentiation.ipynb | ###Markdown
Part of the training process requires calculating derivatives that involve tensors. So let's learn about TensorFlow's built-in [automatic differentiation](https://www.tensorflow.org/guide/autodiff) engine, using a very simple example. Let's consider the following two tensors:$$\begin{align} U = \begin{bmatrix} 1 & 2 \end{bmatrix} && V = \begin{bmatrix} 3 & 4 \\ 5 & 6 \end{bmatrix}\end{align}$$Now let's suppose that we want to multiply $U$ by $V$, and then sum all the values in the resulting tensor, such that the result is a scalar. In math notation, we might represent this as the following scalar function $f$:$$f(U, V) = \mathrm{sum} (U \, V) = \sum_j \sum_i u_i \, v_{ij}$$Our goal is to calculate the derivative of $f$ with respect to each of its inputs: $\frac{\partial f}{\partial U}$ and $\frac{\partial f}{\partial V}$. We start by creating the two tensors $U$ and $V$. We then create a [tf.GradientTape](https://www.tensorflow.org/guide/autodiffgradient_tapes), and tell TensorFlow to watch for mathematical operations involving $U$ and $V$, recording those operations onto our "tape." The tape then enables us to calculate the derivatives of the function $f$ with respect to $U$ and $V$.
###Code
import tensorflow as tf

# Decimal points in tensor values ensure they are floats, which automatic differentiation requires.
U = tf.constant([[1., 2.]])
V = tf.constant([[3., 4.], [5., 6.]])
with tf.GradientTape(persistent=True) as tape:
tape.watch(U)
tape.watch(V)
W = tf.matmul(U, V)
f = tf.math.reduce_sum(W)
print(tape.gradient(f, U)) # df/dU
print(tape.gradient(f, V)) # df/dV
###Output
tf.Tensor([[ 7. 11.]], shape=(1, 2), dtype=float32)
tf.Tensor(
[[1. 1.]
[2. 2.]], shape=(2, 2), dtype=float32)
###Markdown
TensorFlow automatically watches tensors that are defined as `Variable` instances. So let's turn `U` and `V` into variables, and remove the `watch` calls:
###Code
# Decimal points in tensor values ensure they are floats, which automatic differentiation requires.
U = tf.Variable(tf.constant([[1., 2.]]))
V = tf.Variable(tf.constant([[3., 4.], [5., 6.]]))
with tf.GradientTape(persistent=True) as tape:
W = tf.matmul(U, V)
f = tf.math.reduce_sum(W)
print(tape.gradient(f, U)) # df/dU
print(tape.gradient(f, V)) # df/dV
###Output
tf.Tensor([[ 7. 11.]], shape=(1, 2), dtype=float32)
tf.Tensor(
[[1. 1.]
[2. 2.]], shape=(2, 2), dtype=float32)
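###Markdown
As a brief illustration of how such gradients are used in training, a single gradient-descent step updates each variable in the direction that decreases $f$:
###Code
# One gradient-descent step on U and V using the gradients of f (learning rate chosen arbitrarily).
learning_rate = 0.1
with tf.GradientTape() as tape:
    f = tf.math.reduce_sum(tf.matmul(U, V))
dU, dV = tape.gradient(f, [U, V])
U.assign_sub(learning_rate * dU)  # U <- U - lr * df/dU
V.assign_sub(learning_rate * dV)  # V <- V - lr * df/dV
print(U)
print(V)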
|
Exam/Final/GEOG489 SP22 Final.ipynb | ###Markdown
GEOG489 SP22 Final InstructionYour final exam consists of three major parts. **First**, you will prepare supply, demand, and mobility data for measuring spatial accessibility to healthcare resources in Champaign County. **Second**, you will measure spatial accessibility considering distance decay. **Third**, you will calculate spatial autocorrelation based on the accessibility measures.**When you finish the tasks, please save/download your Jupyter notebook and submit it to learn.illinois.edu.**
###Code
import geopandas as gpd
import pandas as pd
import osmnx as ox
import networkx as nx
import matplotlib.pyplot as plt
import esda
import libpysal
###Output
_____no_output_____
###Markdown
1. Data preprocessing (3 points) 1.1. Supply (1 point)* Load `healthcare.shp` in the data folder and name it as `supply`. * Create a column named `weight` and assign weights based on `TYPE` of healthcare (10 for `Hospital` and 5 for `Urgent Care`). * Change the coordinate system of the dataframe to State Plane Coordinate System - Illinois East (NAD83) (epsg:26971).**Note**: The below is the expected result.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
1.2. Demand (1 point)* With `census_block_groups.shp` and `pop_census.csv` in the data folder, create a GeoDataFrame named `demand` by merging them based on a column that shares information between them.* Drop the `GEO_ID` column after the merge. * Change the coordinate system of the dataframe to State Plane Coordinate System - Illinois East (NAD83) (epsg:26971).**Note**: The below is the expected result.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
1.3. Mobility (1 point)* Utilize `OSMnx` package to obtain road network for `Champaign County` and assign the result to a variable `G`.* Project the road network to State Plane Coordinate System - Illinois East (NAD83) (epsg:26971).* Utilize the `remove_uncenessary_nodes` function below, and remove unnecessary nodes from the imported road network. ```pythondef remove_uncenessary_nodes(network): _nodes_removed = len([n for (n, deg) in network.out_degree() if deg == 0]) network.remove_nodes_from([n for (n, deg) in network.out_degree() if deg == 0]) for component in list(nx.strongly_connected_components(network)): if len(component) < 30: for node in component: _nodes_removed += 1 network.remove_node(node) print("Removed {} nodes ({:2.4f}%) from the OSMNX network".format(_nodes_removed, _nodes_removed / float(network.number_of_nodes()))) print("Number of nodes: {}".format(network.number_of_nodes())) print("Number of edges: {}".format(network.number_of_edges())) return network```
###Code
def remove_uncenessary_nodes(network):
_nodes_removed = len([n for (n, deg) in network.out_degree() if deg == 0])
network.remove_nodes_from([n for (n, deg) in network.out_degree() if deg == 0])
for component in list(nx.strongly_connected_components(network)):
if len(component) < 30:
for node in component:
_nodes_removed += 1
network.remove_node(node)
print("Removed {} nodes ({:2.4f}%) from the OSMNX network".format(_nodes_removed, _nodes_removed / float(network.number_of_nodes())))
print("Number of nodes: {}".format(network.number_of_nodes()))
print("Number of edges: {}".format(network.number_of_edges()))
return network
# Your code here
###Output
_____no_output_____
###Markdown
2. Measuring accessibility to healthcare resources (5 points) 2.1. Find the nearest OSM node from `supply` and `demand`. (1 point)* Use the following `find_nearest_osm` function to search the nearest OSM node from `supply` and `demand` GeoDataFrame, respectively.```pythondef find_nearest_osm(network, gdf): """ This function helps you to find the nearest OSM node from a given GeoDataFrame If geom type is point, it will take it without modification, but IF geom type is polygon or multipolygon, it will take its centroid to calculate the nearest element. Input: - network (NetworkX MultiDiGraph): Network Dataset obtained from OSMnx - gdf (GeoDataFrame): stores locations in its `geometry` column Output: - gdf (GeoDataFrame): will have `nearest_osm` column, which describes the nearest OSM node that was computed based on its geometry column """ for idx, row in gdf.iterrows(): if row.geometry.geom_type == 'Point': nearest_osm = ox.distance.nearest_nodes(network, X=row.geometry.x, Y=row.geometry.y ) elif row.geometry.geom_type == 'Polygon' or row.geometry.geom_type == 'MultiPolygon': nearest_osm = ox.distance.nearest_nodes(network, X=row.geometry.centroid.x, Y=row.geometry.centroid.y ) else: print(row.geometry.geom_type) continue gdf.at[idx, 'nearest_osm'] = nearest_osm return gdf```
###Code
# Your code here
def find_nearest_osm(network, gdf):
"""
# This function helps you to find the nearest OSM node from a given GeoDataFrame
# If geom type is point, it will take it without modification, but
# IF geom type is polygon or multipolygon, it will take its centroid to calculate the nearest element.
Input:
- network (NetworkX MultiDiGraph): Network Dataset obtained from OSMnx
- gdf (GeoDataFrame): stores locations in its `geometry` column
Output:
- gdf (GeoDataFrame): will have `nearest_osm` column, which describes the nearest OSM node
that was computed based on its geometry column
"""
for idx, row in gdf.iterrows():
if row.geometry.geom_type == 'Point':
nearest_osm = ox.distance.nearest_nodes(network,
X=row.geometry.x,
Y=row.geometry.y
)
elif row.geometry.geom_type == 'Polygon' or row.geometry.geom_type == 'MultiPolygon':
nearest_osm = ox.distance.nearest_nodes(network,
X=row.geometry.centroid.x,
Y=row.geometry.centroid.y
)
else:
print(row.geometry.geom_type)
continue
gdf.at[idx, 'nearest_osm'] = nearest_osm
return gdf
###Output
_____no_output_____
###Markdown
2.2. Calculate estimated travel time for edges in the road network (1 points)* Investigate the road network `G` and compute the `time` column in `G`. This will include the subtasks below. * If `maxspeed` exists in each row, maintain the current value. * If `maxspeed` is missing, assign `maxspeed` value of each row based on `max_speed_per_type` dictionary below.```pythonmax_speed_per_type = {'motorway': 60, 'motorway_link': 45, 'trunk': 60, 'trunk_link': 45, 'primary': 50, 'primary_link': 35, 'secondary': 40, 'secondary_link': 35, 'tertiary': 40, 'tertiary_link': 35, 'residential': 20, 'living_street': 20, 'unclassified': 20, 'road': 20, 'busway': 20 }```**Note**: Be aware that the `length` column of `G` is based on meters, but `maxspeed` is MPH. You need to multiply `maxspeed` column with 26.8223 to compute meters per minute from mile per hour.
###Code
# Your code here
max_speed_per_type = {'motorway': 60,
'motorway_link': 45,
'trunk': 60,
'trunk_link': 45,
'primary': 50,
'primary_link': 35,
'secondary': 40,
'secondary_link': 35,
'tertiary': 40,
'tertiary_link': 35,
'residential': 20,
'living_street': 20,
'unclassified': 20,
'road': 20,
'busway': 20
}
# Your code here
###Output
_____no_output_____
###Markdown
2.3. Measure accessibility (Enhanced two-step floating catchment area method) (2 points)Now, you will interpret the following two equations into code. First step:$$ R_j = \frac{S_j}{\sum_{k\in {\left\{{t_{kj}} \le {t_0} \right\}}}^{}{P_k}{W_k}}$$where$R_j$: the supply-to-demand ratio of location $j$. $S_j$: the degree of supply (e.g., number of doctors) at location $j$. $P_k$: the degree of demand (e.g., population) at location $k$. $t_{kj}$: the travel time between locations $k$ and $j$. $t_0$: the threshold travel time of the analysis. ${W_k}$: Weight based on a distance decay function Second step:$$ A_i = \sum_{j\in {\left\{{t_{ij}} \le {t_0} \right\}}} R_j{W_j}$$where$A_i$: the accessibility measures at location $i$. $R_j$: the supply-to-demand ratio of location $j$. ${W_j}$: Weight based on a distance decay function 2.3.1. Step1: Calculate the supply-to-demand ratio of each healthcare facility (1 point)In this stage, you will calculate supply-to-demand ratio ($R_j$) of each healthcare resource, and store the ratio into `ratio` column in the `supply` GeoDataFrame. The ratio should be depreciated based on the travel time and the weights provided below. In other words, each facility will have a catchment area that consists of three subzones. The inner subzone will be drawn from a 10-minute travel time and has a weight of 1. The middle subzone will be drawn from a 20-minute travel time and has a weight of 0.68. The outer subzone will be drawn from a 30-minute travel time and has a weight of 0.22. ```pythonminutes = [10, 20, 30]weights = {10: 1, 20: 0.68, 30: 0.22}```The function `calculate_catchment_area` will help you to calculate the three subzones for each facility. ```pythondef calculate_catchment_area(network, nearest_osm, minutes, distance_unit='time'): polygons = gpd.GeoDataFrame() Create convex hull for each travel time (minutes), respectively. for minute in minutes: access_nodes = nx.single_source_dijkstra_path_length(G=network, source=nearest_osm, cutoff=minute, weight=distance_unit ) convex_hull = nodes.loc[ nodes.index.isin(access_nodes.keys()), 'geometry' ].unary_union.convex_hull polygons.at[minute, 'geometry'] = convex_hull Calculate the differences between convex hulls which created in the previous section. polygons_ = polygons.copy(deep=True) for idx, minute in enumerate(minutes): if idx != 0: current_polygon = polygons.loc[[minute]] previous_polygons = polygons.loc[[minutes[idx-1]]] diff_polygon = gpd.overlay(current_polygon, previous_polygons, how="difference") if diff_polygon.shape[0] != 0: polygons_.at[minute, 'geometry'] = diff_polygon['geometry'].values[0] if polygons_.shape[0]: polygons_ = polygons_.set_crs(epsg=26971) return polygons_.copy(deep=True)```
###Code
# Extract the nodes and edges of the network dataset for the future analysis.
nodes, edges = ox.graph_to_gdfs(G, nodes=True, edges=True, node_geometry=True)
def calculate_catchment_area(network, nearest_osm, minutes, distance_unit='time'):
polygons = gpd.GeoDataFrame()
# Create convex hull for each travel time (minutes), respectively.
for minute in minutes:
access_nodes = nx.single_source_dijkstra_path_length(G=network,
source=nearest_osm,
cutoff=minute,
weight=distance_unit
)
convex_hull = nodes.loc[
nodes.index.isin(access_nodes.keys()), 'geometry'
].unary_union.convex_hull
polygons.at[minute, 'geometry'] = convex_hull
# Calculate the differences between convex hulls which created in the previous section.
polygons_ = polygons.copy(deep=True)
for idx, minute in enumerate(minutes):
if idx != 0:
current_polygon = polygons.loc[[minute]]
previous_polygons = polygons.loc[[minutes[idx-1]]]
diff_polygon = gpd.overlay(current_polygon, previous_polygons, how="difference")
if diff_polygon.shape[0] != 0:
polygons_.at[minute, 'geometry'] = diff_polygon['geometry'].values[0]
if polygons_.shape[0]:
polygons_ = polygons_.set_crs(epsg=26971)
return polygons_.copy(deep=True)
###Output
_____no_output_____
###Markdown
**Note**: The below is the expected result.
###Code
supply['ratio'] = 0
minutes = [10, 20, 30]
weights = {10: 1, 20: 0.68, 30: 0.22}
# Your code here
###Output
_____no_output_____
###Markdown
2.3.2. Step2: Aggregate the supply-to-demand ratio for each census block group (1 point)In this stage, you will aggregate the supply-to-demand ratio, which was calculated in the step above, for each census block group (`demand`). Assign the aggregated result into `access` column at `demand` GeoDataFrame. You can still utilize `calculate_catchment_area` function to facilitate your analysis. **Note**: The below is the expected result.
###Code
demand['access'] = 0
# Your code here
###Output
_____no_output_____
###Markdown
2.4. Plot the measures of accessibility (1 point)Try your best to mimic the map shown below, which demonstrates the measure of accessibility to healthcare resources in Champaign County. To achieve this, you need to 1) Plot the locations of healthcare resources (`supply`). 2) Plot a choropleth map with the `access` column in `demand`. 3) Use grey color to visualize locations without access. 4) Hide the x-axis and y-axis of the figure. **Note**: The below is the expected result.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
3. Calculate spatial autocorrelation based on the accessibility measure (2 points)Calculate **Moran's I** and **Local Moran's I** based on the accessibility measures. If you fail to finish the accessibility measurements, you can use `step2.shp` in the data folder for this task. * Compute weights (`w`) with `libpysal.weights.DistanceBand`, which will be utilized for calculating spatial autocorrelation. * Fixed distance will be 10000 and alpha value for distance decay is -1. If you are looking for places to search, visit `libpysal.weights.DistanceBand()`, `esda.Moran()`, `esda.Moran_Local()`. 3.1. Calculate Moran's I of accessibility measure (1 point)Utilize `esda.moran.Moran()` and print the `Moran's I`.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
3.2. Calculate Local Moran's I (1 point)Utilize `esda.moran.Moran_Local()` function and plot the Local Moran's I result as shown below. Use the following code to color your result if the classification is statistically significant (p-value < 0.05). ```pythonlm_dict = {1: 'HH', 2: 'LH', 3: 'LL', 4: 'HL'}lisa_color = {'HH': 'red', 'LL': 'blue', 'HL': 'orange', 'LH': 'skyblue', 'Not_Sig': 'lightgrey'}```**Note**: The map can be slightly different for every run, since the equation is based on a simulation.
###Code
# Your code here
###Output
_____no_output_____ |
materials/1_core.ipynb | ###Markdown
Core language A. VariablesVariables are used to store and modify values.
###Code
a = 5
b = a + 3.1415
c = a / b
print(a, b, c)
###Output
5 8.1415 0.6141374439599582
###Markdown
Note, we did not need to declare variable types (like in fortran), we could just assign anything to a variable and it works. This is the power of an interpreted (as opposed to compiled) language. Also, we can add different types (`a` is an integer, and we add the float 3.1415 to get `b`). The result is 'upcast' to whatever data type can handle the result. I.e., adding a float and an int results in a float.Variables can store lots of different kinds of data
###Code
s = 'Ice cream' # A string
f = [1, 2, 3, 4] # A list
d = 3.1415928 # A floating point number
i = 5 # An integer
b = True # A boolean value
###Output
_____no_output_____
###Markdown
*Side note*: Anything followed by a `#` is a comment, and is not considered part of the code. Comments are useful for explaining what a bit of code does. ___USE COMMENTS___ You can see what `type` a variable has by using the `type` function, like
###Code
type(s)
###Output
_____no_output_____
###Markdown
--- *Exercise*> Use `type` to see the types of the other variables--- --- *Exercise*> What happens when you add variables of the same type? What about adding variables of different types?--- You can test to see if a variable is a particular type by using the `isinstance(var, type)` function.
###Code
isinstance(s, str) # is s a string?
isinstance(f, int) # is f an integer?
###Output
_____no_output_____
###Markdown
C. Tests for equality and inequalityWe can test the values of variables using different operators. These tests return a `Boolean` value. Either `True` or `False`. `False` is the same as zero, `True` is nonzero. Note that assignment `=` is different than a test of equality `==`.
###Code
a < 99
a > 99
a == 5.
###Output
_____no_output_____
###Markdown
These statements have returned "booleans", which are `True` and `False` only. These are commonly used to check for conditions within a script or function to determine the next course of action.NOTE: booleans are NOT equivalent to a string that says "True" or "False". We can test this:
###Code
True == 'True'
###Output
_____no_output_____
###Markdown
There are other things that can be tested, not just mathematical equalities. For example, to test if an element is inside of a list or string (or any sequence, more on sequences below..), do
###Code
foo = [1, 2, 3, 4, 5 ,6]
5 in foo
'this' in 'What is this?'
'that' in 'What is this?'
###Output
_____no_output_____
###Markdown
D. Intro to functionsWe will discuss functions in more detail later in this notebook, but here is a quick view to help with the homework.Functions allow us to write code that we can use in the future. When we take a series of code statements and put them in a function, we can reuse that code to take in inputs, perform calculations or other manipulations, and return outputs, just like a function in math.Almost all of the code you submit in your homework will be within functions so that I can use and test the functionality of your code.Here we have a function called `display_and_capitalize_string` which takes in a string, prints that string, and then returns the same string but with it capitalized.
###Code
def display_and_capitalize_string(input_str):
'''Documentation for this function, which can span
multiple
lines since triple quotes are used for this.
Takes in a string, prints that string, and then returns the same string but with it capitalized.'''
print(input_str) # print out to the screen the string that was input, called `input_str`
new_string = input_str.capitalize() # use built-in method for a string to capitalize it
return new_string
display_and_capitalize_string('hi')
###Output
hi
###Markdown
This is analogous to the relationship between a variable and a function in math. The variable is $x$, and the function is $f(x)$, which changes the input $x$ in some way, then returns a new value. To access that returned value, you have to use the function -- not just define the function.
###Code
# input variable, x. Internal to the function itself, it is called
# input_str.
x = 'hi'
# function f(x) is `display_and_capitalize_string`
# the function returns the variable `output_string`
output_string = display_and_capitalize_string('hi')
###Output
hi
###Markdown
--- *Exercise*> Write your own functions that do the following: 1. Take in a number and return that number plus 10. 2. Take in a variable and return the `type` of the variable.--- Equality checks are commonly used to test the outcome of a function to make sure it is performing as expected and desired. We can test the function we wrote before to see if it works the way we expect and want it to. Here are three different ways to test the outcome of the same input/output pair.
###Code
out_string = display_and_capitalize_string('banana')
assert(out_string == 'Banana')
from nose.tools import assert_equal
assert_equal(out_string, "Banana")
assert(out_string[0].isupper())
###Output
_____no_output_____
###Markdown
We know that the assert statements passed because no error was thrown. On the other hand, the following test does not run successfully:
###Code
assert(out_string=='BANANA')
###Output
_____no_output_____
###Markdown
--- *Exercise*> Write tests using assertions to check how well your functions from the previous exercise are working.--- E. ConditionalsConditionals have a similar syntax to `for` statements. Generally, conditionals look like `if <test>: <do something>` or `if <test1>: <do something> elif <test2>: <do something else> else: <do yet another thing>`. In both cases the test statements are code segments that return a boolean value, often a test for equality or inequality. The `elif` and `else` statements are always optional; both, either, or none can be included.
###Code
x = 20
if x < 10:
print('x is less than 10')
else:
print('x is more than 10')
###Output
x is more than 10
###Markdown
--- *Exercise*> Rerun the code block above using different values for x. What happens if x=10?> Add an `elif` statement to the second block of code that will print something if x==10.--- F. StringsStrings are made using various kinds of (matching) quotes. Examples:
###Code
s1 = 'hello'
s2 = "world"
s3 = '''strings can
also go 'over'
multiple "lines".'''
s2
print(s3)
###Output
strings can
also go 'over'
multiple "lines".
###Markdown
You can also 'add' strings using 'operator overloading', meaning that the plus sign can take on different meanings depending on the data types of the variables you are using it on.
###Code
print( s1 + ' ' + s2) # note, we need the space otherwise we would get 'helloworld'
###Output
hello world
###Markdown
We can include special characters in strings. For example `\n` gives a newline, `\t` a tab, etc. Notice that the multiple line string above (`s3`) is converted to a single quote string with the newlines 'escaped' out with `\n`.
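For example:
###Code
# Demonstration of escape characters: \t inserts a tab, \n starts a new line.
print('temperature:\t20.1\nsalinity:\t35.2')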
###Code
s3.upper()
###Output
_____no_output_____
###Markdown
Strings are 'objects' in that they have 'methods'. Methods are functions that act on the particular instance of a string object. You can access the methods by putting a dot after the variable name and then the method name with parentheses (and any arguments to the method within the parentheses). Methods always have to have parentheses, even if they are empty.
###Code
s3.capitalize()
###Output
_____no_output_____
###Markdown
One of the most useful string methods is 'split' that returns a list of the words in a string, with all of the whitespace (actual spaces, newlines, and tabs) removed. More on lists next.
###Code
s3.split()
###Output
_____no_output_____
###Markdown
Another common thing that is done with strings is the `join` method. It can be used to join a sequence of strings using a common separator
###Code
words = s3.split()
'_'.join(words) # Here, we are using a method directly on the string '_' itself.
###Output
_____no_output_____
###Markdown
G. ContainersOften you need lists or sequences of different values (e.g., a timeseries of temperature – a list of values representing the temperature on sequential days). There are three containers in the core python language. There are a few more specialized containers (e.g., numpy arrays and pandas dataframes) for use in scientific computing that we will learn much more about later; they are very similar to the containers we will learn about here. ListsLists are perhaps the most common container type. They are used for sequential data. Create them with square brackets with comma separated values within:
###Code
foo = [1., 2., 3, 'four', 'five', [6., 7., 8], 'nine']
type(foo)
###Output
_____no_output_____
###Markdown
Note that lists (unlike arrays, as we will later learn) can be heterogeneous. That is, the elements in the list don't have to have the same kind of data type. Here we have a list with floats, ints, strings, and even another (nested) list!We can retrieve the individual elements of a list by 'indexing' the list. We do this with square brackets, using zero-based indexes – that is `0` is the first element – as such:
###Code
foo[0]
foo[5]
foo[5][1] # Python is sequential, we can access an element within an element using sequential indexing.
foo[-1] # This is the way to access the last element.
foo[-3] # ...and the third to last element
foo[-3][2] # we can also index strings.
###Output
_____no_output_____
###Markdown
We can get a sub-sequence from the list by giving a range of the data to extract. This is done by using the format `start:stop:stride`, where `start` is the first element, up to but not including the element indexed by `stop`, taking every `stride` elements. The defaults are to start at the beginning, include through the end, and include every element. The up-to-but-not-including part is confusing to first time Python users, but makes sense given the zero-based indexing. For example, `foo[:10]` gives the first ten elements of a sequence.
###Code
# create a sequence of 10 elements, starting with zero, up to but not including 10.
bar = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
bar[2:5]
bar[:4]
bar[:]
bar[::2]
###Output
_____no_output_____
###Markdown
--- *Exercise*> Use the list bar = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] > use indexing to get the following sequences: [3, 4, 5] [9] note this is different than just the last element. It is a sequence with only one element, but still a sequence [2, 5, 8]> What happens when you exceed the limits of the list? bar[99] bar[-99] bar[5:99]--- You can assign values to list elements by putting the indexed list on the right side of the assignment, as
###Code
bar[5] = -99
bar
###Output
_____no_output_____
###Markdown
This works for sequences as well,
###Code
bar[2:7] = [1, 1, 1, 1, 1, 1, 1, 1]
bar
###Output
_____no_output_____
###Markdown
Lists are also 'objects'; they also have 'methods'. Methods are functions that are designed to be applied to the data contained in the list. You can access them by putting a dot and the method name after the variable (called an 'object instance')
###Code
bar.insert(5, 'here')
bar
bar = [4, 5, 6, 7, 3, 6, 7, 3, 5, 7, 9]
bar.sort() # Note that we don't do 'bar = bar.sort()'. The sorting is done in place.
bar
###Output
_____no_output_____
###Markdown
--- *Exercise*> What other methods are there? Type `bar.` and then press the TAB key. This will show the possible completions, which in this case is a list of the methods and attributes. You can get help on a method by typing, for example, `bar.pop?`. The text in the help file is called a `docstring`; as we will see below, you can write these for your own functions.> See if you can use these four methods of the list instance `bar`: 1. append 2. pop 3. index 4. count--- Tuples Tuples (pronounced `too'-puls`) are sequences that can't be modified, and don't have methods. Thus, they are designed to be immutable sequences. They are created like lists, but with parentheses instead of square brackets.
###Code
foo = (3, 5, 7, 9)
# foo[2] = -999 # gives an assignment error. Commented so that all cells run.
###Output
_____no_output_____
###Markdown
Tuples are often used when a function has multiple outputs, or as a lightweight storage container. Because of this, you don't need to put the parentheses around them, and can assign multiple values at a time.
###Code
a, b, c = 1, 2, 3 # Equivalent to '(a, b, c) = (1, 2, 3)'
print(b)
###Output
2
###Markdown
DictionariesDictionaries are used for unordered sequences that are referenced by arbitrary 'keys' instead of by a (sequential) index. Dictionaries are created using curly braces with keys and values separated by a colon, and key:value pairs separated by commas, as
###Code
foobar = {'a':3, 'b':4, 'c':5}
###Output
_____no_output_____
###Markdown
Elements are referenced and assigned by keys:
###Code
foobar['b']
foobar['c'] = -99
foobar
###Output
_____no_output_____
###Markdown
The keys and values can be extracted as lists using methods of the dictionary class.
###Code
foobar.keys()
foobar.values()
###Output
_____no_output_____
###Markdown
New values can be assigned simply by assigning a value to a key that does not exist yet
###Code
foobar['spam'] = 'eggs'
foobar
###Output
_____no_output_____
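###Markdown
Two more operations that are commonly used with dictionaries, shown here as a small sketch: looping over the key/value pairs, and looking up a key with a default value.
###Code
# iterate over key/value pairs
for key, value in foobar.items():
    print(key, value)
# .get() returns a default instead of raising an error for a missing key
foobar.get('not a key', 'default value')
###Output
_____no_output_____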
###Markdown
--- *Exercise*> Create a dictionary variable with at least 3 entries. The entry keys should be the first name of people around you in the class, and the value should be their favorite food.> Explore the methods of the dictionary object, as was done with the list instance in the previous exercise.--- You can make an empty dictionary or list by using the `dict` and `list` functions respectively.
###Code
empty_dict = dict()
empty_list = list()
print(empty_dict, empty_list)
###Output
{} []
###Markdown
H. Logical OperatorsYou can compare statements that evaluate to a boolean value with the logical `and` and `or`. We can first think about this with boolean values directly:
###Code
True and True, True and False
True or True, True or False
###Output
_____no_output_____
###Markdown
Note that you can also use the word `not` to switch the meaning of a boolean:
###Code
not True, not False
###Output
_____no_output_____
###Markdown
Now let's look at this with actual test examples instead of direct boolean values:
###Code
word = 'the'
sentence1 = 'the big brown dog'
sentence2 = 'I stand at the fridge'
sentence3 = 'go outside'
(word in sentence1) and (word in sentence2)
(word in sentence1) and (word in sentence2) and (word in sentence3)
(word in sentence1) or (word in sentence2) or (word in sentence3)
x = 20
5 < x < 30, 5 < x and x < 30
###Output
_____no_output_____
###Markdown
I. Loops For loops Loops are one of the fundamental structures in programming. Loops allow you to iterate over each element in a sequence, one at a time, and do something with those elements.*Loop syntax*: Loops have a very particular syntax in Python; this syntax is one of the most notable features to Python newcomers. The format looks like `for element in sequence:` followed by an indented block of code that is run once for each element. NOTE the colon at the end; the block of code that is looped over for each element is indented four spaces (yes four! yes spaces!); the end of the loop is marked simply by unindented code. Thus, indentation is significant to the code. This was done because good coding practice (in almost all languages, C, FORTRAN, MATLAB) typically indents loops, functions, etc. Having indentation be significant saves the end of loop syntax for more compact code.*Some important notes on indentation* Indentation in python is typically *4 spaces*. Most programming text editors will be smart about indentation, and will also convert TABs to four spaces. Jupyter notebooks are smart about indentation, and will do the right thing, i.e., autoindent a line below a line with a trailing colon, and convert TABs to spaces. If you are in another editor remember: ___TABS AND SPACES DO NOT MIX___. See [PEP-8](https://www.python.org/dev/peps/pep-0008/) for more information on the correct formatting of Python code.A simple example is to find the sum of the squares of the sequence 0 through 99,
###Code
sum_of_squares = 0
for n in range(100): # range yields a sequence of numbers from 0 up to but not including 100
sum_of_squares += n**2 # the '+=' operator is equivalent to 'sum = sum + n**2',
# the '**' operator is a power, like '^' in other languages
print(sum_of_squares)
###Output
328350
###Markdown
You can iterate over any sequence, and in Python (like MATLAB) it is better to iterate over the sequence you want than to loop over the indices of that sequence. The following two examples give the same result, but the first is much more readable and easily understood than the second. Do the first whenever possible.
###Code
# THIS IS BETTER THAN THE NEXT CODE BLOCK. DO IT THIS WAY.
words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
sentence = '' # this initializes a string which we can then add onto
for word in words:
sentence += word + ' '
sentence
# DON'T DO IT THIS WAY IF POSSIBLE, DO IT THE WAY IN THE PREVIOUS CODE BLOCK.
words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
sentence = ''
for i in range(len(words)):
sentence += words[i] + ' '
sentence
###Output
_____no_output_____
###Markdown
Sometimes you want to iterate over a sequence but you *also* want the indices of those elements. One way to do that is the `enumerate` function, called as `enumerate(sequence)`. This returns a sequence of two element tuples, the first element in each tuple is the index, the second the element. It is commonly used in `for` loops, like
###Code
for idx, word in enumerate(words):
print('The index is', idx, '...')
print('...and the word is', word)
###Output
The index is 0 ...
...and the word is the
The index is 1 ...
...and the word is quick
The index is 2 ...
...and the word is brown
The index is 3 ...
...and the word is fox
The index is 4 ...
...and the word is jumped
The index is 5 ...
...and the word is over
The index is 6 ...
...and the word is the
The index is 7 ...
...and the word is lazy
The index is 8 ...
...and the word is dog
###Markdown
List comprehension There is a short way to make a list from a simple rule by using list comprehensions. The syntax is like `[element for item in sequence]`. For example, we can calculate the squares of the first 10 integers
###Code
[n**2 for n in range(10)]
###Output
_____no_output_____
###Markdown
The `element` can be any code snippet that depends on the `item`. This example gives a sequence of boolean values that determine if the element in a list is a string.
###Code
random_list = [1, 2, 'three', 4.0, ['five',]]
[isinstance(item, str) for item in random_list]
random_list = [1, 2, 'three', 4.0, ['five',]]
foo = []
for item in random_list:
foo.append(isinstance(item, str))
foo
###Output
_____no_output_____
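###Markdown
A condition can also be appended to a comprehension to filter which items are kept; a small example:
###Code
# squares of the even numbers only
[n**2 for n in range(10) if n % 2 == 0]
###Output
_____no_output_____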
###Markdown
--- *Exercise*> Modify the previous list comprehension to test if the elements are integers.--- While loops The majority of loops that you will write will be `for` loops. These are loops that have a defined number of iterations, over a specified sequence. However, there may be times when it is not clear when the loop should terminate. In this case, you use a `while` loop. This has the syntax `while condition:` followed by an indented block. The `condition` should be something that can be evaluated when the loop is started, and the variables that determine the condition should be modified in the loop.This kind of loop should be used carefully; it is relatively easy to accidentally create an infinite loop, where the condition is never triggered to stop, so the loop continues forever. This is especially important to avoid given that we are using shared resources in our class, and a `while` loop that never ends can cause the computer to crash.
###Code
n = 5 # starting value
while n > 0:
n -= 1 # subtract 1 each loop
print(n) # look at value of n
###Output
4
3
2
1
0
###Markdown
Flow control There are a few commands that allow you to control the flow of any iterative loop: `continue`, `break`, and `pass`.- `continue` stops the current iteration and continues to the next element, if there is one.- `break` stops the current iteration, and leaves the loop.- `pass` does nothing, and is just a placeholder for when the syntax requires some code to be present
###Code
# print all the numbers, except 5
for n in range(10):
if n == 5:
continue
print(n)
# print all the numbers up to (but not including) 5, then break out of the loop.
for n in range(10):
print('.')
if n == 5:
break
print(n)
print('done')
# pass can be used for empty functions or classes,
# or in loops (in which case it is usually a placeholder for future code)
def foo(x):
pass
class Foo(object):
pass
x = 2
if x == 1:
pass # could just leave this part of the code out entirely...
elif x == 2:
print(x)
###Output
2
###Markdown
J. Functions Functions are ways to create reusable blocks of code that can be run with different variable values, the input variables to the function. Functions are defined using the syntax `def function_name(var1, var2, ...):` followed by an indented block that ends with a `return` statement for the output. Functions can be defined at any point in the code, and called at any subsequent point.
###Code
def addfive(x):
return x+5
addfive(3.1415)
###Output
_____no_output_____
###Markdown
Function inputs and outputsFunctions can have multiple input and output values. The documentation for the function can (and should) be provided as a string at the beginning of the function.
###Code
def sasos(a, b, c):
'''return the sum of a, b, and c and the sum of the squares of a, b, and c'''
res1 = a + b + c
res2 = a**2 + b**2 + c**2
return res1, res2
s, ss = sasos(3, 4, 5)
print(s)
print(ss)
###Output
12
50
###Markdown
Functions can have variables with default values. You can also specify positional variables out of order if they are labeled explicitly.
###Code
def powsum(x, y, z, a=1, b=2, c=3):
return x**a + y**b + z**c
print( powsum(2., 3., 4.) )
print( powsum(2., 3., 4., b=5) )
print( powsum(z=2., c=2, x=3., y=4.) )
###Output
75.0
309.0
23.0
###Markdown
--- *Exercise*> Verify `powsum(z=2., x=3., y=4., c=2)` is the same as `powsum(3., 4., 2., c=2)`> What happens when you do `powsum(3., 4., 2., x=2)`? Why?--- --- *Exercise*> Write a function that takes in a list of numbers and returns two lists of numbers: the odd numbers in the list and the even numbers in the list. That is, if your function is called `odds_evens()`, it should work as follows: >>> odds, evens = odds_evens([1,5,2,8,3,4]) >>> odds, evens ([1, 5, 3], [2, 8, 4]) > Note that `x % y` gives the remainder of `x/y`.> How would you change the code to make a counter (the index) available each loop?--- DocstringsYou can add 'help' text to functions (and classes) by adding a 'docstring', which is just a regular string, right below the definition of the function. This should be considered a mandatory step in your code writing.
###Code
def addfive(x):
'''Return the argument plus five
Input : x
A number
Output: foo
The number x plus five
'''
return x+5
# now, try addfive?
addfive?
###Output
_____no_output_____
###Markdown
See [PEP-257](https://www.python.org/dev/peps/pep-0257/) for guidelines about writing good docstrings. ScopeVariables within the function are treated as 'local' variables, and do not affect variables outside of the 'scope' of the function. That is, all of the variables that are changed within the block of code inside a function are only changed within that block, and do not affect similarly named variables outside the function.
###Code
x = 5
def changex(x): # This x is local to the function
x += 10. # here the local variable x is changed
print('Inside changex, x=', x)
return x
res = changex(x) # supply the value of x in the 'global' scope.
print(res)
print(x) # The global x is unchanged
###Output
Inside changex, x= 15.0
15.0
5
###Markdown
Variables from the 'global' scope can be used within a function, as long as those variables are unchanged. This technique should generally only be used when it is very clear what value the global variable has, for example, in very short helper functions.
###Code
x = 5
def dostuffwithx(y):
res = y + x # Here, the global value of x is used, since it is not defined inside the function.
return res
print(dostuffwithx(3.0))
print(x)
###Output
8.0
5
###Markdown
[Packing and unpacking](https://docs.python.org/2/tutorial/controlflow.html#unpacking-argument-lists) function arguments. You can provide a sequence of arguments to a function by placing a `*` in front of the sequence, like `foo(*args)`. This unpacks the elements of the sequence into the arguments of the function, in order.
###Code
list(range(3, 6)) # normal call with separate arguments
args = [3, 6]
list(range(*args)) # call with arguments unpacked from a list
###Output
_____no_output_____
###Markdown
You can also unpack dictionaries as keyword arguments by placing `**` in front of the dictionary, like `bar(**kwargs)`. These can be mixed, to an extent. E.g., `foo(*args, **kwargs)` works.Using our function from earlier, here we call `powsum` first with keyword arguments written in and second by unpacking a dictionary.
###Code
x = 5; y = 6; z = 7
powdict = {'a': 1, 'b': 2, 'c': 3}
print(powsum(x, y, z, a=1, b=2, c=3))
print(powsum(x, y, z, **powdict))
###Output
384
384
###Markdown
One common usage is using the builtin `zip` function to take a 'transpose' of a set of points.
###Code
list(zip((1, 2, 3, 4, 5), ('a', 'b', 'c', 'd', 'e'), (6, 7, 8, 9, 10)))
pts = ((1, 2), (3, 4), (5, 6), (7, 8), (9, 10))
x, y = list(zip(*pts))
print(x)
print(y)
# and back again,
print(list(zip(*(x,y))))
###Output
(1, 3, 5, 7, 9)
(2, 4, 6, 8, 10)
[(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)]
###Markdown
K. Classes*We won't cover classes in this class, but these notes are here for your reference in case you are interested.*Classes are used to define generic objects. The 'instances' of the class are supplied with specific data. Classes define a data structure, 'methods' to work with this data, and 'attributes' that define the data. The computer science way to think of classesThink of the class as a sentence. The nouns would be the classes, the associated verbs class methods, and associated adjectives class attributes. For example take the sentence> The white car signals and makes a left turn.In this case the object is a `car`, a generic kind of vehicle. We see in the sentence that we have a particular instance of a `car`, a *white* `car`. Obviously, there can be many instances of the class `car`. White is a defining or distinguishing 'attribute' of the car. There are two 'methods' noted: signaling and turning. We might write the code for a `car` object like this: class Car(object): def __init__(self, color): self.color = color def signal(self, direction): # code to signal a turn in the given direction def turn(self, direction): # code to make the turn The scientific way to think about classesGenerally, in science we use objects to store and work with complicated data sets, so it is natural to think of the data structure first, and use that to define the class. The methods are functions that work on this data. The attributes hold the data, and other defining characteristics about the dataset (i.e., metadata). The primary advantage of this approach is that the data are in a specified structure, so that the methods can assume this structure and are thereby more efficient.For example, consider an (atmospheric, oceanic, geologic) profile of temperature in the vertical axis. We might create a class that would look like: class Profile(object): ''' Documentation describing the object, in particular how it is instantiated. ''' def __init__(self, z, temp, lat, lon, time): self.z = z # A sequence of values defining the vertical positions of the samples self.property = temp # A corresponding sequence of temperature values self.lat = lat # The latitude at which the profile was taken self.lon = lon # The longitude at which the profile was taken self.time = time # The time at which the profile was taken def mean(self): 'return the mean of the profile' Note, there could be a number of different choices for how the data are stored, more variables added to the profile, etc. Designing good classes is essential to the art of computer programming. Make classes as small and agile as possible, building up your code from small, flexible building blocks. Classes should be parsimonious and cogent. Avoid bloat.Classes are traditionally named with a capital letter, sometimes CamelCase, sometimes underlined_words_in_a_row, as opposed to functions which are traditionally lower case (there are many exceptions to these rules, though). When a class instance is created, the special `__init__` function is called to create the class instance. Within the class, the attributes are stored in `self` with a dot and the attribute name. Methods are defined like normal functions, but within the block, and the first argument is always `self`.There are many other special functions, that allow you to, for example, overload the addition operator (`__add__`) or have a representation of the class that resembles the command used to create it (`__repr__`).Consider the example of a class defining a point on a 2D plane:
###Code
from math import sqrt # more on importing external packages below
class Point(object):
def __init__(self, x, y):
self.x = x
self.y = y
def norm(self):
'The distance of the point from the origin'
return sqrt(self.x**2 + self.y**2)
def dist(self, other):
'The distance to another point'
dx = self.x - other.x
dy = self.y - other.y
return sqrt(dx**2 + dy**2)
def __add__(self, other):
return Point(self.x + other.x, self.y + other.y)
def __repr__(self):
return 'Point(%f, %f)' % (self.x, self.y)
p1 = Point(3.3, 4.) # a point at location (3.3, 4)
p2 = Point(6., 8.) # another point, we can have as many as we want..
res = p1.norm()
print('p1.norm() = ', res)
res = p2.norm()
print('p2.norm() = ', res)
res = p1.dist(p2)
res2 = p2.dist(p1)
print('The distance between p1 and p2 is', res)
print('The distance between p2 and p1 is', res2)
p3 = p1+p2
p1
###Output
p1.norm() = 5.185556864985669
p2.norm() = 10.0
The distance between p1 and p2 is 4.825971404805461
The distance between p2 and p1 is 4.825971404805461
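###Markdown
As noted below, `dist` only needs its argument to have `x` and `y` attributes; a quick sketch with a made-up `XY` class shows this:
###Code
class XY(object):
    def __init__(self, x, y):
        self.x = x
        self.y = y

# works even though XY is not a Point
p1.dist(XY(0., 0.))
###Output
_____no_output_____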
###Markdown
Notice that we don't require `other` to be a `Point` class instance; it could be any object with `x` and `y` attributes. This is known as 'duck typing' and is a useful approach for using multiple different kinds of objects with similar data in the same functions. L. Packages Functions and classes represent code that is intended to be reused over and over. Packages are a way to store and manage this code. Python has a number of 'built-in' classes and functions that we have discussed above. Lists, tuples and dictionaries; `for` and `while` loops; and standard data types are part of every python session.There is also a very wide range of packages that you can import that extend the abilities of core Python. There are packages that deal with file input and output, internet communication, numerical processing, etc. One of the nice features about Python is that you only import the packages you need, so that the memory footprint of your code remains lean. Also, there are ways to import code that keep your 'namespace' organized.> Namespaces are one honking great idea -- let's do more of those!In the same way directories keep your files organized on your computer, namespaces organize your Python environment. There are a number of ways to import packages, for example.
###Code
import math # This imports the math function. Here 'math' is like a subdirectory
# in your namespace that holds all of the math functions
math.e
e = 15.7
print(math.e, e)
###Output
2.718281828459045 15.7
###Markdown
--- *Exercise*> After importing the math package, type `math.` and hit the TAB key to see all the possible completions. These are the functions available in the math package. Use the math package to calculate the square root of 2.> There are a number of other ways to import things from the math package. Experiment with these commands: `from math import tanh` imports just the `tanh` function, called as `tanh(x)`; `import math as m` imports the math package but renames it to `m`, with functions called like `m.sin(x)`; `from math import *` imports all the functions to the top level namespace, with functions called like `sin(x)`.> This last example makes things easier to use, but is frowned on as it is less clear where different functions come from.> For the rest of the 'Zen of Python' type `import this`--- One particular package that is central to scientific Python is the `numpy` package (*Num*erical *Py*thon). We will talk about this package much more in the future, but will outline a few things about the package now. The standard way to import this package is
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
The `numpy` package has the same math functions as the `math` package, but these functions are designed to work with numpy arrays. Arrays are the backbone of the `numpy` package. For now, just think of them as homogeneous, multidimensional lists.
###Code
a = np.array([[1., 2., 3], [4., 5., 6.]])
a
np.sin(a)
###Output
_____no_output_____
###Markdown
Note that we can have two `sin` functions at the same time, one from the `math` package and one from the `numpy` package. This is one of the advantages of namespaces.
###Code
math.sin(2.0) == np.sin(2.0)
###Output
_____no_output_____ |
.ipynb_checkpoints/News-Classifier-checkpoint.ipynb | ###Markdown
Data Warehousing and Data Mining Assignment H.M.D.R.W.Herath KU-HDCBIS-171F-001 NLP (Natural Language Processing) with Python. This notebook focuses on the assignment given for the DW & DM module.**Summary*** Two class categorization problem* Training set : 200 training instances* Testing set : 100 test instances* Each document is one line of text* Fields are separated by the tab '\t' character> CLASS \t TITLE \t DATE \t BODY* CLASS is either +1 or -1**Objective**Predict the labels for the 100 test instances. Process
###Code
# Importing the NLTK package
import nltk
# Download the stopwords
#nltk.download_shell()
###Output
_____no_output_____
###Markdown
Importing the Data The data sets needed for the process are included inside the `dataset` directory in the root.As the summary indicates, we have **TSV (Tab Separated Values)** documents. Instead of parsing the TSV manually in Python, I will take advantage of pandas.
###Code
# Importing the Pandas package
import pandas as pd
# Parse using read_csv
news = pd.read_csv('dataset/trainset.txt', sep='\t', names=['CLASS', 'TITLE', 'DATE', 'BODY'])
news.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
news.describe()
###Output
_____no_output_____
###Markdown
Now we can use **groupby** to describe by *CLASS*, this way we can begin to think about the features that separate **+1** and **-1**
###Code
news.groupby('CLASS').describe()
###Output
_____no_output_____
###Markdown
In the training set we have 98 instances of the **-1** class. The remaining 102 instances bear the class **+1**.We have two instances of class -1 that do not have a body and another 10 instances of class +1 without a body.Also, class +1 contains 10 instances where there is no date specified.All instances contain a title.> Therefore we can assume TITLE plays a bigger role when it comes to classifying these news articles. Now we have to check if the length of the body plays a part in the classification.First let's create an additional column containing the body length.
###Code
news['BODY LENGTH'] = news['BODY'].apply(len)
news.head()
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
# Importing the Visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
news['BODY LENGTH'].plot.hist(bins=50)
news['BODY LENGTH'].plot.hist(bins=150)
###Output
_____no_output_____
###Markdown
According to the above histograms we can see that the body length usually falls in the 0-1000 range, with the exception of some news bodies exceeding 4000 characters.
###Code
# Overview of the Lengths
news['BODY LENGTH'].describe()
###Output
_____no_output_____
###Markdown
Now we need to identify whether the BODY LENGTH has an effect on the CLASS classification.
###Code
news.hist(column='BODY LENGTH', by='CLASS', bins=60, figsize=(12,4))
###Output
_____no_output_____
###Markdown
Using FacetGrid from the seaborn library to create a grid of 2 histograms of BODY LENGTH based off of the CLASS values.
###Code
g = sns.FacetGrid(news,col='CLASS')
g.map(plt.hist,'BODY LENGTH')
###Output
_____no_output_____
###Markdown
Creating a boxplot of BODY LENGTH for each CLASS.
###Code
sns.boxplot(x='CLASS', y='BODY LENGTH', data=news, palette='rainbow')
###Output
_____no_output_____
###Markdown
Creating a countplot of the number of occurrences for each type of CLASS.
###Code
sns.countplot(x='CLASS',data=news,palette='rainbow')
###Output
_____no_output_____
###Markdown
As the histograms indicate, we cannot say that BODY LENGTH has a clear effect on the classes -1 and +1. But we can observe that the CLASS -1 body lengths are spread closely around the 0-1000 mark, whereas the CLASS +1 body lengths are more spread out. Text Pre-processing The main issue with the dataset is that it consists of text data.Because of that we need to pre-process it in order to convert the **corpus** to a **vector** format.
###Code
# Importing String library for remove punctuations
import string
# Importing Regular Expressions
import re
# Importing stop words
from nltk.corpus import stopwords
# Importing Stemming Library
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
###Output
_____no_output_____
###Markdown
Text Processing Function
###Code
def text_process(mess):
"""
1. Remove punc
2. Remove numbers
3. Remove stop words + 'reuters' (News Network)
4. Stemming
5. Return list of clean text words
"""
text = [char for char in mess if char not in string.punctuation]
text = ''.join(text)
text = re.sub(r'\d+', ' ', text)
text = [word for word in text.split() if word.lower() not in stopwords.words('english')+['reuter']]
return [stemmer.stem(word) for word in text]
###Output
_____no_output_____
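###Markdown
Before wiring this into a pipeline, it can help to eyeball what the cleaning step returns for a made-up sentence (the sentence below is invented purely for illustration):
###Code
# made-up sentence, just to see the effect of punctuation/number removal, stop word removal and stemming
text_process('The 3 quick brown foxes are running past Reuters tonight!')
###Output
_____no_output_____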
###Markdown
Data Pipeline Now we need to vectorize, train and evaluate a model. We could do this step by step, but the best (and easiest) way is to create a data pipeline. We will use SciKit Learn's pipeline capabilities to store the workflow. This will allow us to set up all the transformations that we will do to the data for future use. We will use **TF-IDF** for the term weighting and normalization. What is TF-IDF TF-IDF stands for *term frequency-inverse document frequency*, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document, divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents where the specific term appears.**TF: Term Frequency**, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than in shorter ones. Thus, the term frequency is often divided by the document length (i.e. the total number of terms in the document) as a way of normalization: *TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).***IDF: Inverse Document Frequency**, which measures how important a term is. While computing TF, all terms are considered equally important. However it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following: *IDF(t) = log_e(Total number of documents / Number of documents with term t in it).*See below for a simple example.**Example:**Consider a document containing 100 words wherein the word cat appears 3 times. The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12. Pipeline Creation Process We will split the training data set into two parts, *training* and *test*, for model building and evaluation.
###Code
# Importing train_test_split package
from sklearn.model_selection import train_test_split
news_body_train, news_body_test, class_train, class_test = train_test_split(news['BODY'], news['CLASS'], test_size=0.3)
print(len(news_body_train), len(news_body_test), len(news_body_train) + len(news_body_test))
# Imporing CountVectorizer Package
from sklearn.feature_extraction.text import CountVectorizer
# Importing Tfidf Library
from sklearn.feature_extraction.text import TfidfTransformer
# Importing MultinomialNB
from sklearn.naive_bayes import MultinomialNB
# Importing Pipeline Package
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', MultinomialNB())
])
###Output
_____no_output_____
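###Markdown
As a quick, self-contained illustration of the TF-IDF weighting described above (the two tiny documents here are made up for this sketch and are not part of the dataset):
###Code
from sklearn.feature_extraction.text import TfidfVectorizer
# two made-up documents; words shared by both get lower weights than distinctive ones
toy_vec = TfidfVectorizer()
toy_tfidf = toy_vec.fit_transform(['the cat sat on the mat', 'the dog ate my homework'])
print(toy_vec.vocabulary_)
print(toy_tfidf.toarray())
###Output
_____no_output_____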
###Markdown
Now we can directly pass news body data and the pipeline will do our pre-processing for us. We can treat it as a model/estimator API:
###Code
pipeline.fit(news_body_train,class_train)
predictions_eval = pipeline.predict(news_body_test)
###Output
_____no_output_____
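###Markdown
The fitted pipeline can also be applied to new raw text directly. For example, a single invented snippet (the sentence is made up, so the predicted class is only illustrative):
###Code
# predict the class of one invented piece of text
pipeline.predict(['Oil prices rose sharply after the quarterly earnings report.'])
###Output
_____no_output_____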
###Markdown
Let's make a simple evaluation by comparing the predictions with the real test set values
###Code
import numpy as np
np.asarray(class_test.tolist())
predictions_eval
###Output
_____no_output_____
###Markdown
Now let's create a report
###Code
# Import classification report package
from sklearn.metrics import confusion_matrix,classification_report
from sklearn.metrics import accuracy_score
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[25 1]
[ 2 32]]
precision recall f1-score support
-1 0.93 0.96 0.94 26
1 0.97 0.94 0.96 34
avg / total 0.95 0.95 0.95 60
Accuracy : 0.95
###Markdown
Comparing Models Now let's change the MultinomialNB to RandomForest and generate reports
###Code
# Importing RandomForrestClassifier
from sklearn.ensemble import RandomForestClassifier
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', RandomForestClassifier())
])
pipeline.fit(news_body_train,class_train)
predictions_eval = pipeline.predict(news_body_test)
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[24 2]
[ 3 31]]
precision recall f1-score support
-1 0.89 0.92 0.91 26
1 0.94 0.91 0.93 34
avg / total 0.92 0.92 0.92 60
Accuracy : 0.9166666666666666
###Markdown
**Conclusion : *RandomForestClassifier* offers better precision than *MultinomialNB* when it comes to CLASS +1** Can TITLE be used for News Classification? Here we will try to determine whether TITLE plays a role in news classification.We will use the pipelines with TITLE based test and train sets. **Step 1 :** Train Test Split
###Code
news_title_train, news_title_test, class_train, class_test = train_test_split(news['TITLE'], news['CLASS'], test_size=0.3)
###Output
_____no_output_____
###Markdown
**Step 2 :** Determine the pipeline; here we reuse the pipeline defined above.**Step 3 :** Train the model.
###Code
pipeline.fit(news_title_train,class_train)
###Output
_____no_output_____
###Markdown
**Step 4 :** Predict
###Code
predictions_eval = pipeline.predict(news_title_test)
###Output
_____no_output_____
###Markdown
**Step 5 :** Generate Reports
###Code
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[27 1]
[ 8 24]]
precision recall f1-score support
-1 0.77 0.96 0.86 28
1 0.96 0.75 0.84 32
avg / total 0.87 0.85 0.85 60
Accuracy : 0.85
###Markdown
Model Evaluation After a couple of runs we get a table like the one below.
###Code
runs = [1, 2, 3, 4]
body_mdf_acc = [0.91, 0.85, 0.95, 0.95]
body_rnf_acc = [0.89, 0.82, 0.86, 0.92]
title_mdf_acc = [0.93, 0.9, 0.93, 0.85]
plt.plot(runs, body_mdf_acc, color='g')
plt.plot(runs, body_rnf_acc, color='orange')
plt.plot(runs, title_mdf_acc, color='blue')
plt.xticks(np.arange(min(runs), max(runs)+1, 1.0))
plt.xlabel('Runs')
plt.ylabel('Accuracy')
plt.title('Model Accuracy by Runs')
plt.show()
###Output
_____no_output_____
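###Markdown
As an alternative to collecting accuracies by hand over repeated runs, scikit-learn's cross-validation helper could be used. This is only a sketch and was not part of the original workflow:
###Code
from sklearn.model_selection import cross_val_score
# 4-fold cross-validated accuracy of the current pipeline on the BODY text
scores = cross_val_score(pipeline, news['BODY'], news['CLASS'], cv=4)
print(scores, scores.mean())
###Output
_____no_output_____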
###Markdown
Conclusion According to the graph, using the BODY content with MultinomialNB provides better predictions than the other approaches.Therefore we can predict the unlabeled test set as shown below. Predicting Test Labels
###Code
# Parse using read_csv
news_without_labels = pd.read_csv('dataset/testsetwithoutlabels.txt', sep='\t', names=['TITLE', 'DATE', 'BODY'])
news_without_labels.head()
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', MultinomialNB())
])
pipeline.fit(news_body_train,class_train)
predictions_final = pipeline.predict(news_without_labels['BODY'])
predictions_final
###Output
_____no_output_____
###Markdown
Writing Predictions to a CSV File
###Code
result = pd.DataFrame(data={'CLASS': predictions_final, 'TITLE': news_without_labels['TITLE'], 'DATE': news_without_labels['DATE'], 'BODY': news_without_labels['BODY']})
result.to_csv(path_or_buf='Final_Prediction.csv', index = False, header = True)
###Output
_____no_output_____
###Markdown
NLP (Natural Language Processing) with Python**Summary*** Two class categorization problem* Training set : 200 training instances* Testing set : 100 test instances* Each document is one line of text* Fields are separated by the tab '\t' character> CLASS \t TITLE \t DATE \t BODY* CLASS is either +1 or -1**Objective**Predict the labels for the 100 test instances. Process
###Code
# Importing the NLTK package
import nltk
# Download the stopwords
nltk.download_shell()
###Output
NLTK Downloader
---------------------------------------------------------------------------
d) Download l) List u) Update c) Config h) Help q) Quit
---------------------------------------------------------------------------
###Markdown
Importing the Data The data sets needed for the process are included inside the `dataset` directory in the root.As the summary indicates, we have **TSV (Tab Separated Values)** documents. Instead of parsing the TSV manually in Python, I will take advantage of pandas.
###Code
# Importing the Pandas package
import pandas as pd
# Parse using read_csv
news = pd.read_csv('dataset/trainset.txt', sep='\t', names=['CLASS', 'TITLE', 'DATE', 'BODY'])
news.head()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
news.describe()
###Output
_____no_output_____
###Markdown
Now we can use **groupby** to describe by *CLASS*, this way we can begin to think about the features that separate **+1** and **-1**
###Code
news.groupby('CLASS').describe()
###Output
_____no_output_____
###Markdown
In the training set we have 98 instances of the **-1** class. The remaining 102 instances bear the class **+1**.We have two instances of class -1 that do not have a body and another 10 instances of class +1 without a body.Also, class +1 contains 10 instances where there is no date specified.All instances contain a title.> Therefore we can assume TITLE plays a bigger role when it comes to classifying these news articles. Now we have to check if the length of the body plays a part in the classification.First let's create an additional column containing the body length.
###Code
news['BODY LENGTH'] = news['BODY'].apply(len)
news.head()
###Output
_____no_output_____
###Markdown
Data Visualization
###Code
# Importing the Visualization libraries
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
news['BODY LENGTH'].plot.hist(bins=50)
news['BODY LENGTH'].plot.hist(bins=150)
###Output
_____no_output_____
###Markdown
According to the above histograms we can see that the body length usually falls in the 0-1000 range, with the exception of some news bodies exceeding 4000 characters.
###Code
# Overview of the Lengths
news['BODY LENGTH'].describe()
###Output
_____no_output_____
###Markdown
Now we need to identify whether the BODY LENGTH has an effect on the CLASS classification.
###Code
news.hist(column='BODY LENGTH', by='CLASS', bins=60, figsize=(12,4))
###Output
_____no_output_____
###Markdown
Using FacetGrid from the seaborn library to create a grid of 2 histograms of BODY LENGTH based off of the CLASS values.
###Code
g = sns.FacetGrid(news,col='CLASS')
g.map(plt.hist,'BODY LENGTH')
###Output
_____no_output_____
###Markdown
Creating a boxplot of BODY LENGTH for each CLASS.
###Code
sns.boxplot(x='CLASS', y='BODY LENGTH', data=news, palette='rainbow')
###Output
_____no_output_____
###Markdown
Creating a countplot of the number of occurrences for each type of CLASS.
###Code
sns.countplot(x='CLASS',data=news,palette='rainbow')
###Output
_____no_output_____
###Markdown
As the histograms indicate, we cannot say that BODY LENGTH has a clear effect on the classes -1 and +1. But we can observe that the CLASS -1 body lengths are spread closely around the 0-1000 mark, whereas the CLASS +1 body lengths are more spread out. Text Pre-processing The main issue with the dataset is that it consists of text data.Because of that we need to pre-process it in order to convert the **corpus** to a **vector** format.
###Code
# Importing String library for remove punctuations
import string
# Importing Regular Expressions
import re
# Importing stop words
from nltk.corpus import stopwords
# Importing Stemming Library
from nltk.stem import PorterStemmer
stemmer = PorterStemmer()
###Output
_____no_output_____
###Markdown
Text Processing Function
###Code
def text_process(mess):
"""
1. Remove punc
2. Remove numbers
3. Remove stop words + 'reuters' (News Network)
4. Stemming
5. Return list of clean text words
"""
text = [char for char in mess if char not in string.punctuation]
text = ''.join(text)
text = re.sub(r'\d+', ' ', text)
text = [word for word in text.split() if word.lower() not in stopwords.words('english')+['reuter']]
return [stemmer.stem(word) for word in text]
###Output
_____no_output_____
###Markdown
Data Pipeline Now we need to vectorize, train and evaluate a model. We could do this step by step, but the best (and easiest) way is to create a data pipeline. We will use SciKit Learn's pipeline capabilities to store the workflow. This will allow us to set up all the transformations that we will do to the data for future use. We will use **TF-IDF** for the term weighting and normalization. What is TF-IDF TF-IDF stands for *term frequency-inverse document frequency*, and the tf-idf weight is a weight often used in information retrieval and text mining. This weight is a statistical measure used to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf-idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.One of the simplest ranking functions is computed by summing the tf-idf for each query term; many more sophisticated ranking functions are variants of this simple model.Typically, the tf-idf weight is composed of two terms: the first computes the normalized Term Frequency (TF), i.e. the number of times a word appears in a document, divided by the total number of words in that document; the second term is the Inverse Document Frequency (IDF), computed as the logarithm of the number of documents in the corpus divided by the number of documents where the specific term appears.**TF: Term Frequency**, which measures how frequently a term occurs in a document. Since every document is different in length, it is possible that a term would appear many more times in long documents than in shorter ones. Thus, the term frequency is often divided by the document length (i.e. the total number of terms in the document) as a way of normalization: *TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document).***IDF: Inverse Document Frequency**, which measures how important a term is. While computing TF, all terms are considered equally important. However it is known that certain terms, such as "is", "of", and "that", may appear many times but have little importance. Thus we need to weigh down the frequent terms while scaling up the rare ones, by computing the following: *IDF(t) = log_e(Total number of documents / Number of documents with term t in it).*See below for a simple example.**Example:**Consider a document containing 100 words wherein the word cat appears 3 times. The term frequency (i.e., tf) for cat is then (3 / 100) = 0.03. Now, assume we have 10 million documents and the word cat appears in one thousand of these. Then, the inverse document frequency (i.e., idf) is calculated as log(10,000,000 / 1,000) = 4. Thus, the tf-idf weight is the product of these quantities: 0.03 * 4 = 0.12. Pipeline Creation Process We will split the training data set into two parts, *training* and *test*, for model building and evaluation.
###Code
# Importing train_test_split package
from sklearn.model_selection import train_test_split
news_body_train, news_body_test, class_train, class_test = train_test_split(news['BODY'], news['CLASS'], test_size=0.3)
print(len(news_body_train), len(news_body_test), len(news_body_train) + len(news_body_test))
# Imporing CountVectorizer Package
from sklearn.feature_extraction.text import CountVectorizer
# Importing Tfidf Library
from sklearn.feature_extraction.text import TfidfTransformer
# Importing MultinomialNB
from sklearn.naive_bayes import MultinomialNB
# Importing Pipeline Package
from sklearn.pipeline import Pipeline
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', MultinomialNB())
])
###Output
_____no_output_____
###Markdown
Now we can directly pass news body data and the pipeline will do our pre-processing for us. We can treat it as a model/estimator API:
###Code
pipeline.fit(news_body_train,class_train)
predictions_eval = pipeline.predict(news_body_test)
###Output
_____no_output_____
###Markdown
Let's make a simple evaluation by comparing the predictions with the real test set values
###Code
import numpy as np
np.asarray(class_test.tolist())
predictions_eval
###Output
_____no_output_____
###Markdown
Now let's create a report
###Code
# Import classification report package
from sklearn.metrics import confusion_matrix,classification_report
from sklearn.metrics import accuracy_score
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[25 1]
[ 0 34]]
precision recall f1-score support
-1 1.00 0.96 0.98 26
1 0.97 1.00 0.99 34
avg / total 0.98 0.98 0.98 60
Accuracy : 0.9833333333333333
###Markdown
Comparing Models Now let's change the MultinomialNB to RandomForest and generate reports
###Code
# Importing RandomForrestClassifier
from sklearn.ensemble import RandomForestClassifier
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', RandomForestClassifier())
])
pipeline.fit(news_body_train,class_train)
predictions_eval = pipeline.predict(news_body_test)
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[25 1]
[ 4 30]]
precision recall f1-score support
-1 0.86 0.96 0.91 26
1 0.97 0.88 0.92 34
avg / total 0.92 0.92 0.92 60
Accuracy : 0.9166666666666666
###Markdown
**Conclusion : *RandomForestClassifier* offers better precision than *MultinomialNB* when it comes to CLASS +1** Can TITLE be used for News Classification? Here we will try to determine whether TITLE plays a role in news classification.We will use the pipelines with TITLE based test and train sets. **Step 1 :** Train Test Split
###Code
news_title_train, news_title_test, class_train, class_test = train_test_split(news['TITLE'], news['CLASS'], test_size=0.3)
###Output
_____no_output_____
###Markdown
**Step 2 :** Determine the pipeline; here we reuse the pipeline defined above.**Step 3 :** Train the model.
###Code
pipeline.fit(news_title_train,class_train)
###Output
_____no_output_____
###Markdown
**Step 4 :** Predict
###Code
predictions_eval = pipeline.predict(news_title_test)
###Output
_____no_output_____
###Markdown
**Step 5 :** Generate Reports
###Code
print(confusion_matrix(class_test, predictions_eval))
print('\n')
print(classification_report(class_test, predictions_eval))
print('\n')
print('Accuracy :', accuracy_score(class_test, predictions_eval))
###Output
[[25 2]
[ 7 26]]
precision recall f1-score support
-1 0.78 0.93 0.85 27
1 0.93 0.79 0.85 33
avg / total 0.86 0.85 0.85 60
Accuracy : 0.85
###Markdown
Model Evaluation After a couple of runs we get a table like the one below.
###Code
runs = [1, 2, 3, 4]
body_mdf_acc = [0.91, 0.85, 0.95, 0.95]
body_rnf_acc = [0.89, 0.82, 0.86, 0.92]
title_mdf_acc = [0.93, 0.9, 0.93, 0.85]
plt.plot(runs, body_mdf_acc, color='g')
plt.plot(runs, body_rnf_acc, color='orange')
plt.plot(runs, title_mdf_acc, color='blue')
plt.xticks(np.arange(min(runs), max(runs)+1, 1.0))
plt.xlabel('Runs')
plt.ylabel('Accuracy')
plt.title('Model Accuracy by Runs')
plt.show()
###Output
_____no_output_____
###Markdown
Conclusion According to the graph, using the BODY content with MultinomialNB provides better predictions than the other approaches.Therefore we can predict the unlabeled test set as shown below. Predicting Test Labels
###Code
# Parse using read_csv
news_without_labels = pd.read_csv('dataset/testsetwithoutlabels.txt', sep='\t', names=['TITLE', 'DATE', 'BODY'])
news_without_labels.head()
pipeline = Pipeline([
('bow', CountVectorizer(analyzer=text_process)),
('tfidf', TfidfTransformer()),
('classifier', MultinomialNB())
])
pipeline.fit(news_body_train,class_train)
predictions_final = pipeline.predict(news_without_labels['BODY'])
predictions_final
###Output
_____no_output_____
###Markdown
Writing Predictions to a CSV File
###Code
result = pd.DataFrame(data={'CLASS': predictions_final, 'TITLE': news_without_labels['TITLE'], 'DATE': news_without_labels['DATE'], 'BODY': news_without_labels['BODY']})
result.to_csv(path_or_buf='Final_Prediction.csv', index = False, header = True)
###Output
_____no_output_____ |
insurance_scikit/prudential.ipynb | ###Markdown
See : https://www.kaggle.com/c/prudential-life-insurance-assessment/data
Variable descriptions:
* Id : A unique identifier associated with an application.
* Product_Info_1-7 : A set of normalized variables relating to the product applied for
* Ins_Age : Normalized age of applicant
* Ht : Normalized height of applicant
* Wt : Normalized weight of applicant
* BMI : Normalized BMI of applicant
* Employment_Info_1-6 : A set of normalized variables relating to the employment history of the applicant.
* InsuredInfo_1-6 : A set of normalized variables providing information about the applicant.
* Insurance_History_1-9 : A set of normalized variables relating to the insurance history of the applicant.
* Family_Hist_1-5 : A set of normalized variables relating to the family history of the applicant.
* Medical_History_1-41 : A set of normalized variables relating to the medical history of the applicant.
* Medical_Keyword_1-48 : A set of dummy variables relating to the presence of/absence of a medical keyword being associated with the application.
* **Response** : This is the target variable, an ordinal variable relating to the final decision associated with an application.
The following variables are all categorical (nominal) : Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41
The following variables are continuous : Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5
The following variables are discrete : Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32
The following variables are dummy variables : Medical_Keyword_1-48
###Code
import math
import pandas
import numpy as np
import matplotlib.pyplot as plt
# machine learning
#from sklearn import datasets
from sklearn.metrics import log_loss
from sklearn.preprocessing import OneHotEncoder, MinMaxScaler, StandardScaler
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
# Keras
#from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Dense, Embedding, Activation, LSTM, merge, Flatten, Dropout, Lambda
from keras.layers import RepeatVector, Reshape
from keras.models import Model, Sequential
#from keras.engine.topology import Merge
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD, RMSprop, Adam
#from keras.layers.convolutional import *
#from keras.utils.data_utils import get_file
#
from keras import backend as K
import xgboost as xgb
from metrics import quadratic_weighted_kappa
df0 = pandas.read_csv("../data/train.csv.gz")
# WARNING : shuffle to better split the train/test sets
df = df0.sample(frac=1)
# engineered features: interaction of BMI and age, and the number of medical keywords flagged
df['BMI_Age'] = df['BMI'] * df['Ins_Age']
med_keyword_columns = df.columns[df.columns.str.startswith('Medical_Keyword_')]
df['Med_Keywords_Count'] = df[med_keyword_columns].sum(axis=1)
#df.describe().transpose()
###Output
_____no_output_____
###Markdown
Continuous variables Product_Info_4, Ins_Age, Ht, Wt, BMI Employment_Info_1, Employment_Info_4, Employment_Info_6 Insurance_History_5 Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5
###Code
L = []
L1 = ['Product_Info_4', 'Ins_Age', 'Ht', 'Wt', 'BMI', 'BMI_Age', 'Med_Keywords_Count',
'Employment_Info_1', 'Employment_Info_4', 'Employment_Info_6']
L.extend(L1)
L2 = ['Insurance_History_5', 'Family_Hist_2', 'Family_Hist_3', 'Family_Hist_4', 'Family_Hist_5']
L.extend(L2)
df[L].describe().transpose()
### note that some variables are not defined everywhere
L1 = ['Product_Info_4', 'Ins_Age', 'Ht', 'Wt', 'BMI','BMI_Age', 'Med_Keywords_Count']
df[L1].describe().transpose()
for l in L:
if not(l in L1):
print(l, df[l].mean())
df[l].fillna((df[l].mean()), inplace=True)
df[L].describe().transpose()
X = df[L].as_matrix()
Y = df['Response'].as_matrix()
logreg = LogisticRegression(C=1e5)
logreg.fit(X, Y)
# WARNING : check how Logistic handles more than 2 classes
len( [1 for y, ym in zip(Y, logreg.predict(X)) if y==ym] ) / float(len(Y))
knn = KNeighborsClassifier()
knn.fit(X, Y)
len( [1 for y, ym in zip(Y, knn.predict(X)) if y==ym] ) / float(len(Y))
c2val, c2prob = chi2(X, Y)
c2val.sort()
c2val = np.fliplr([c2val])[0]
print(c2val)
print(X.shape)
X_new = SelectKBest(chi2, k=2).fit_transform(X, Y)
print(X_new.shape)
###Output
(59381, 15)
(59381, 2)
###Markdown
Turn categorical variables into dummies with OneHotEncodingList of variables:Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7,Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3,InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7,Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7,Insurance_History_8, Insurance_History_9,Family_Hist_1,Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7,Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13,Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19,Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25,Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30,Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36,Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41
###Code
catstring = 'Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, '
catstring+= 'Employment_Info_2, Employment_Info_3, Employment_Info_5, '
catstring+= 'InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, '
catstring+= 'Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, '
catstring+= 'Insurance_History_8, Insurance_History_9, '
catstring+= 'Family_Hist_1, '
catstring+= 'Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, '
catstring+= 'Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, '
catstring+= 'Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, '
catstring+= 'Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, '
catstring+= 'Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, '
catstring+= 'Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, '
catstring+= 'Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, '
catstring+= 'Medical_History_41'
categories = catstring.replace(' ','').split(',')
print(categories[0:10])
df[categories].describe().transpose()
###Output
_____no_output_____
###Markdown
WARNING : Product_Info_2 is not numeric
###Code
print( df[['Product_Info_2']].count() )
df[['Product_Info_2']].head(5)
# Found at :
# https://www.kaggle.com/marcellonegro/prudential-life-insurance-assessment/xgb-offset0501/run/137585/code
df['Product_Info_2_char'] = df.Product_Info_2.str[0]
df['Product_Info_2_num'] = df.Product_Info_2.str[1]
# factorize categorical variables
df['Product_Info_2'] = pandas.factorize(df['Product_Info_2'])[0]
df['Product_Info_2_char'] = pandas.factorize(df['Product_Info_2_char'])[0]
df['Product_Info_2_num'] = pandas.factorize(df['Product_Info_2_num'])[0]
df[['Product_Info_2','Product_Info_2_char','Product_Info_2_num']].head(5)
categories.append('Product_Info_2_char')
categories.append('Product_Info_2_num')
encX = OneHotEncoder()
# remove Product_Info_2 as it is not numeric (should convert it separately)
#Xcat = df[categories].drop('Product_Info_2', 1).as_matrix()
Xcat = df[categories].as_matrix()
#print Xcat.shape
#print df[categories].head()
encX.fit(Xcat)
Xohe = encX.transform(Xcat).toarray()
print(Xohe.shape)
# as Y has 8 categories it can be useful to treat them separately
encY = OneHotEncoder()
encY.fit(Y.reshape(-1, 1)) # reshape as Y is a vector and OHE requires a matrix
Yohe = encY.transform(Y.reshape(-1, 1))
print(Yohe.shape)
###Output
(59381, 842)
(59381, 8)
###Markdown
We can remove low occurrence one-hot columns to reduce the dimension
###Code
column_test = (np.sum(Xohe, axis=0) > 25) # tweak filter setting
print(np.sum(column_test*1))
Xohe_trim = Xohe[:,column_test]
print(Xohe_trim.shape)
###Output
308
(59381, 308)
###Markdown
Discrete variables / WARNING still need to include these
###Code
discstring = 'Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32'
discretes = discstring.replace(' ', '').split(',')
missing_disc_indic = -1
for discrete in discretes:
# WARNING : shall fill with most frequent modality ?
df[discrete].fillna(missing_disc_indic, inplace=True)
#df[discrete] = pandas.factorize(df[discrete])[0]
df[discrete] = df[discrete] - missing_disc_indic # TO AVOID NEGATIVE VALUES
Xdisc = df[discretes].as_matrix()
if True:
encD = OneHotEncoder()
encD.fit(Xdisc)
Xdisc_ohe = encD.transform(Xdisc).toarray()
    print(Xdisc_ohe.shape)
column_test = (np.sum(Xdisc_ohe, axis=0) > 10) # tweak filter setting
print(np.sum(column_test*1))
Xdisc_ohe_trim = Xdisc_ohe[:,column_test]
print(Xdisc_ohe_trim.shape)
df[discretes].describe().transpose()
df[discretes].head(10)
###Output
_____no_output_____
###Markdown
Dummy variables
###Code
dummies = ['Medical_Keyword_'+str(i) for i in range(1,49)]
df[dummies].describe().transpose()
Xdummies = df[dummies].as_matrix()
###Output
_____no_output_____
###Markdown
Merge
###Code
if False: # non trimmed one-hot
Xmerge = np.concatenate((X, Xohe, Xdummies, Xdisc), axis=1)
else:
Xmerge = np.concatenate((X, Xohe_trim, Xdummies), axis=1)
# Xdisc
# Xdisc_ohe
# Xdisc_ohe_trim ?
Xmerge.shape
###Output
_____no_output_____
###Markdown
chi2 selection
###Code
def getbests(Xarray, Yarray, nbkeep=20):
c2val, c2prob = chi2(Xarray, Yarray)
    print(len([j for j, p in enumerate(c2prob) if p<0.01]) / float(len(c2prob)))
aux = c2val.tolist()
aux.sort()
aux.reverse()
minc2val = aux[nbkeep]
return [j for j, cv in enumerate(c2val) if cv>minc2val]
bests20 = getbests(Xmerge, Y, 20)
Xbests20 = Xmerge[:,bests20]
print(Xbests20.shape)
bests30 = getbests(Xmerge, Y, 30)
Xbests30 = Xmerge[:,bests30]
print(Xbests30.shape)
bests40 = getbests(Xmerge, Y, 40)
Xbests40 = Xmerge[:,bests40]
print(Xbests40.shape)
bests50 = getbests(Xmerge, Y, 50)
Xbests50 = Xmerge[:,bests50]
print(Xbests50.shape)
###Output
0.654986522911
(59381, 20)
0.654986522911
(59381, 30)
0.654986522911
(59381, 40)
0.654986522911
(59381, 50)
###Markdown
XGBoost (installed from pip)see this link :https://www.kaggle.com/zeroblue/prudential-life-insurance-assessment/xgboost-with-optimized-offsets/code
###Code
columns_to_drop = ['Id', 'Response'] #, 'Medical_History_10','Medical_History_24']
xgb_num_rounds = 720
num_classes = 8
missing_indicator = -1000
def get_params():
params = {}
params["objective"] = "reg:linear"
params["eta"] = 0.05
params["min_child_weight"] = 360
params["subsample"] = 0.85
params["colsample_bytree"] = 0.3
params["silent"] = 1
params["max_depth"] = 7
plst = list(params.items())
return plst
xgtrain = xgb.DMatrix(df.drop(columns_to_drop, axis=1), df['Response'].values,
missing=missing_indicator)
#xgtest = xgb.DMatrix(test.drop(columns_to_drop, axis=1), label=test['Response'].values,
# missing=missing_indicator)
plst = get_params()
# train model
xgbmodel = xgb.train(plst, xgtrain, xgb_num_rounds)
train_preds = xgbmodel.predict(xgtrain, ntree_limit=xgbmodel.best_iteration)
quadratic_weighted_kappa(train_preds, df['Response'].as_matrix()) # Kaggle metric
###Output
_____no_output_____
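###Markdown
Side note (an addition, not the original workflow): with the `reg:linear` objective the predictions above are continuous, while `quadratic_weighted_kappa` compares ordinal ratings. A minimal sketch of mapping the regression output back to the 8 classes is shown below; the linked kernel instead optimizes per-class offsets, which usually scores better.
###Code
# naive mapping of continuous predictions to the ordinal classes 1..8
# (a simplification of the offset optimization used in the referenced kernel)
train_preds_int = np.clip(np.round(train_preds), 1, 8).astype(int)
kappa_rounded = quadratic_weighted_kappa(train_preds_int, df['Response'].as_matrix())
###Output
_____no_output_____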
###Markdown
KNN
###Code
knn2 = KNeighborsClassifier()
Xknntrain = Xbests20[range(0,50000), :]
Yknntrain = Y[range(0,50000)]
Xknntest = Xbests20[range(50000,59000), :]
Yknntest = Y[range(50000,59000)]
knn2.fit(Xknntrain, Yknntrain) # lower the "bests" threshold to include more variables ... but KNN will slow drastically
#len( [1 for y, ym in zip(Y, knn2.predict(Xbests30)) if y==ym] ) / float(len(Y))
print(knn2.score(Xknntrain, Yknntrain))
print(knn2.score(Xknntest, Yknntest))
quadratic_weighted_kappa(knn2.predict(Xknntrain), Yknntrain) # Kaggle metric
# split the set into different Y classes to measure their importance
np.mean(encY.transform(Yknntrain.reshape(-1, 1)).toarray(), axis=0)
###Output
_____no_output_____
###Markdown
SVC
###Code
classcol = 7
#model = LogisticRegression()
#model = KNeighborsClassifier()
#model = RandomForestClassifier(n_estimators=50)
#model = GaussianNB()
model = SVC()
Xrftrain = Xbests40[range(0,40000), :]
Yrftrain = Y[range(0,40000)]
Xrftest = Xbests40[range(40000,59000), :]
Yrftest = Y[range(40000,59000)]
colYrftrain = encY.transform(Yrftrain.reshape(-1, 1)).getcol(classcol).toarray().flatten()
colYrftest = encY.transform(Yrftest.reshape(-1, 1)).getcol(classcol).toarray().flatten()
model.fit(Xrftrain, colYrftrain)
print(model.score(Xrftrain, colYrftrain))
print(model.score(Xrftest, colYrftest))
quadratic_weighted_kappa(model.predict(Xrftrain), colYrftrain) # Kaggle metric
###Output
_____no_output_____
###Markdown
Random Forests
###Code
random_forest = RandomForestClassifier(n_estimators=50)
Xrftrain = Xbests20[range(0,50000), :]
Yrftrain = Y[range(0,50000)]
Xrftest = Xbests20[range(50000,59000), :]
Yrftest = Y[range(50000,59000)]
random_forest.fit(Xrftrain, Yrftrain)
#Y_pred = random_forest.predict(X)
print(random_forest.score(Xrftrain, Yrftrain))
print(random_forest.score(Xrftest, Yrftest))
quadratic_weighted_kappa(random_forest.predict(Xrftrain), Yrftrain) # Kaggle metric
###Output
_____no_output_____
###Markdown
Neural Network (with Keras)
###Code
Xmerge.shape, Yohe.toarray().shape # WARNING : to_array
nn_input_dim = Xmerge.shape[1]
if True:
min_max_scaler = MinMaxScaler() # WARNING : is it correct for binary variables ?
Xmerge_prepro = min_max_scaler.fit_transform(Xmerge)
else:
std_scaler = StandardScaler().fit(Xmerge)
Xmerge_prepro = std_scaler.transform(Xmerge)
Xnn_train = Xmerge_prepro[0:45000]
Xnn_valid = Xmerge_prepro[45000:]
Ynn_train = Yohe.toarray()[0:45000]
Ynn_valid = Yohe.toarray()[45000:]
###Output
_____no_output_____
###Markdown
Stand alone Neural Network i.e. no mixture
###Code
model = Sequential()
model.add( Dense(500, init='glorot_uniform', activation='relu', input_dim=nn_input_dim) )
model.add( BatchNormalization() )
model.add( Dropout(0.4) )
model.add( Dense(200, activation='sigmoid') )
model.add( BatchNormalization() )
model.add( Dropout(0.4) )
model.add( Dense(100, activation='sigmoid') )
model.add( BatchNormalization() )
model.add( Dropout(0.4) )
model.add( Dense(8, activation='softmax') )
model.compile(optimizer=Adam(1e-3), loss='categorical_crossentropy', metrics=['accuracy'])
model.optimizer.lr = 1e-4
# train 30 times (at least)
model.fit(Xnn_train, Ynn_train, nb_epoch=10, batch_size=64, validation_data=(Xnn_valid, Ynn_valid), verbose=1)
###Output
Train on 45000 samples, validate on 14381 samples
Epoch 1/10
45000/45000 [==============================] - 6s - loss: 1.2841 - acc: 0.5326 - val_loss: 1.3131 - val_acc: 0.5237
Epoch 2/10
45000/45000 [==============================] - 6s - loss: 1.2751 - acc: 0.5373 - val_loss: 1.3141 - val_acc: 0.5237
Epoch 3/10
45000/45000 [==============================] - 6s - loss: 1.2662 - acc: 0.5373 - val_loss: 1.3089 - val_acc: 0.5249
Epoch 4/10
45000/45000 [==============================] - 6s - loss: 1.2624 - acc: 0.5413 - val_loss: 1.3089 - val_acc: 0.5267
Epoch 5/10
45000/45000 [==============================] - 6s - loss: 1.2538 - acc: 0.5456 - val_loss: 1.3097 - val_acc: 0.5269
Epoch 6/10
45000/45000 [==============================] - 6s - loss: 1.2440 - acc: 0.5472 - val_loss: 1.3074 - val_acc: 0.5283
Epoch 7/10
45000/45000 [==============================] - 6s - loss: 1.2399 - acc: 0.5503 - val_loss: 1.3062 - val_acc: 0.5287
Epoch 8/10
45000/45000 [==============================] - 6s - loss: 1.2333 - acc: 0.5525 - val_loss: 1.3072 - val_acc: 0.5279
Epoch 9/10
45000/45000 [==============================] - 6s - loss: 1.2308 - acc: 0.5522 - val_loss: 1.3067 - val_acc: 0.5308
Epoch 10/10
45000/45000 [==============================] - 6s - loss: 1.2271 - acc: 0.5529 - val_loss: 1.3022 - val_acc: 0.5296
###Markdown
GATED MIXTURE OF EXPERTS. A custom loss function seems tricky to implement in this version of Keras, so instead we build a network that takes (X, Y) as its inputs and returns the per-sample error as its output. The fit call then uses a dummy all-zero target, so that minimizing the MAE against that target minimizes the error function itself.
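The next cell is a minimal sketch of just this dummy-target trick in isolation (illustrative only, reusing the same old-style Keras calls as the model below); the full gated model follows after it.
###Code
# Minimal illustration of the dummy-target trick (not part of the original model):
# build a model whose output IS the per-sample cross-entropy, then fit it against
# an all-zero target with an MAE loss, which minimizes that cross-entropy.
toy_in = Input(shape=(nn_input_dim,))
toy_true = Input(shape=(8,))
toy_prob = Dense(8, activation='softmax')(Dense(50, activation='relu')(toy_in))
# elementwise -y*log(p), then sum over the 8 classes -> per-sample cross-entropy
toy_ce = merge([toy_true, toy_prob], mode=lambda x: -(x[0] * K.log(x[1])), output_shape=(8,))
toy_ce = Lambda(lambda x: K.sum(x, axis=1, keepdims=True), output_shape=(1,))(toy_ce)
toy_train = Model(input=[toy_in, toy_true], output=toy_ce)
toy_train.compile(optimizer=Adam(1e-3), loss='mean_absolute_error')
#toy_train.fit([Xnn_train, Ynn_train], np.zeros(len(Xnn_train)), nb_epoch=1, batch_size=64)
###Output
_____no_output_____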
###Code
NM = 2
inputs = Input(shape=(nn_input_dim,))
outputs = Input(shape=(8,))
predictions = []
for i in range(NM):
if True:
xi = Dense(500, init='glorot_uniform', activation='relu')(inputs)
xi = BatchNormalization()(xi)
xi = Dropout(0.40)(xi)
xi = Dense(200, activation='relu')(xi)
xi = BatchNormalization()(xi)
xi = Dropout(0.40)(xi)
xi = Dense(100, activation='relu')(xi)
xi = BatchNormalization()(xi)
xi = Dropout(0.40)(xi)
predictions.append( Dense(8, activation='softmax')(xi) )
predmat = Reshape((NM,8))( merge(predictions, mode='concat', concat_axis=1) ) # .summary to check axis
deltas = merge([RepeatVector(NM)(outputs), predmat], output_shape=(NM,8), mode=lambda x: -(x[0] * K.log(x[1])))
deltasums = Lambda(lambda x: K.sum(x, axis=2), output_shape=lambda s: (s[0], s[1]))(deltas)# .summary to check axis
hinton_trick = True # see "Adaptive Mixtures of Local Experts"
if hinton_trick:
Hinton1 = Lambda(lambda x: K.exp(-x), output_shape=lambda s: s)
deltasums = Hinton1(deltasums)
gate = Dense(100, activation='relu')(inputs)
gate = BatchNormalization()(gate)
gate = Dropout(0.40)(gate)
gate = Dense(NM, activation='softmax')(gate)
errors = merge([gate, deltasums], mode='dot')
if hinton_trick:
Hinton2 = Lambda(lambda x: -K.log(x), output_shape=lambda s: s)
errors = Hinton2(errors)
# a model for training only
modelG_train = Model(input=[inputs, outputs], output=errors)
predavg = merge([gate, Reshape((8,2))(predmat)], mode='dot') # WARNING : not sure about that one !
# a model for prediction / WARNING : share weights with "train" ???
modelG_pred = Model(input=inputs, output=[predavg, predmat, gate])
# INFO :
# CE is positive and we want to minimize it
# if dummy_target = 0 then MAE = Mean(CrossEntropy)
modelG_train.compile(optimizer=Adam(1e-3), loss='mean_absolute_error')
#modelG_train.summary() # useful when debugging tensor shapes
#modelG_pred.summary() # useful when debugging tensor shapes
modelG_train.optimizer.lr = 1e-4
# train 30 times (at least)
Yg_train_dummy = Ynn_train[:,0]*0
Yg_valid_dummy = Ynn_valid[:,0]*0
modelG_train.fit([Xnn_train, Ynn_train], Yg_train_dummy,
validation_data=([Xnn_valid, Ynn_valid], Yg_valid_dummy),
nb_epoch=5, batch_size=64, verbose=1)
# to check the mixing is not degenerate
print( np.min(modelG_pred.predict(Xnn_train[0:5000,:])[2], axis=0) )
print( np.max(modelG_pred.predict(Xnn_train[0:5000,:])[2], axis=0) )
# round up forecasted probabilities
(modelG_pred.predict(Xnn_train[5000:5010,:])[0]*100).astype(int)
# there is a bug with 0-th index output as CE is too large
log_loss(Ynn_train, modelG_pred.predict(Xnn_train)[0], eps=1e-3, normalize=True)
# there is a bug with 0-th index output as CE is too large
-np.mean(np.log(np.sum(Ynn_train * modelG_pred.predict(Xnn_train)[0], axis=1))) # log out of sum is fine
preds = modelG_pred.predict(Xnn_train)
Ypred = np.sum(preds[1] * np.tile(np.expand_dims(preds[2],2), (1, 1, 8)), axis=1)
# this CE value is consistent so 1-th and 2-th index seem fine
-np.mean(np.log(np.sum(Ynn_train * Ypred, axis=1)))
###Output
_____no_output_____ |
shell-commands-in-python.ipynb | ###Markdown
Shell Commands in Python. Sorry, not sorry. Old methods: the `os` module
###Code
import os
os.system("ls") # can't get the result, only 0 for success else non-zero exit code
print(os.popen("git --version").read()) # get the result
###Output
git version 2.26.0.windows.1
###Markdown
Subprocess module
###Code
import subprocess
subprocess.run("ls") # Runs the program 'ls'
subprocess.run(["python3", "test.py"]) # Need a list of strings here since we have args. This will run 'python3 test.py'
###Output
_____no_output_____
###Markdown
Actually reading the output
###Code
# To see stdout
result = subprocess.run('ls',stdout=subprocess.PIPE)
print(result.stdout.decode())
# To see stdout and stderr
result = subprocess.run(['rm','xyz'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
print(result.stderr.decode())
subprocess.run('ls -la',shell=True) # This is a way to pass args without using a list like above
###Output
_____no_output_____
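###Markdown
A small addition (assumes Python 3.7+): passing `capture_output=True` together with `text=True` captures stdout/stderr as strings directly and avoids the manual `.decode()` calls above.
###Code
# capture stdout as str instead of bytes
result = subprocess.run(['git', '--version'], capture_output=True, text=True)
version_string = result.stdout.strip()
###Output
_____no_output_____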
###Markdown
Taking stdin parameters
###Code
# pass 'abc' and then 'def'
subprocess.run(['python3','test.py'],capture_output=True,input="abc\ndef".encode())
###Output
_____no_output_____
###Markdown
Run with timeout
###Code
subprocess.run(['sleep','5'],timeout=3) # Generates timeout expired error
###Output
_____no_output_____
###Markdown
Throw an error if the command fails
###Code
try:
subprocess.run(['rm','xyz'],check=True) # Generates an error if anything goes wrong while running shell command
except subprocess.CalledProcessError:
print("Failed")
###Output
_____no_output_____ |
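###Markdown
One more pattern worth knowing (a sketch, not from the original notebook): for long-running commands, `subprocess.Popen` lets you read the output line by line instead of waiting for the process to finish. `text=True` again assumes Python 3.7+ and the command is only illustrative.
###Code
# stream output line by line while the process runs
proc = subprocess.Popen(['python3', 'test.py'], stdout=subprocess.PIPE, text=True)
lines = [line.rstrip('\n') for line in proc.stdout]
proc.wait()
###Output
_____no_output_____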
data_extraction/dartAPI.ipynb | ###Markdown
Loading corporate codes: fetching the unique corporate code data for each company
###Code
openApi = "ab851319407812ac10d593dcb2fef51d0c944b66"
### required modules
from urllib.request import urlopen
from io import BytesIO
from zipfile import ZipFile
### download the corporate code data
url = 'https://opendart.fss.or.kr/api/corpCode.xml?crtfc_key=' + openApi
with urlopen(url) as zipresp:
with ZipFile(BytesIO(zipresp.read())) as zfile:
zfile.extractall('corp_num')
###Output
_____no_output_____
###Markdown
Reading the XML data
###Code
### import modules
import xml.etree.ElementTree as ET
### read the xml file extracted from the zip archive
tree = ET.parse('CORPCODE.xml')
root = tree.getroot()
## 80,939 companies in total
###Output
_____no_output_____
###Markdown
Writing a helper function to look up the information we need
###Code
### find a company's corp_code by company name
def find_corp_num(find_name):
for country in root.iter("list"):
if country.findtext("corp_name") == find_name:
return country.findtext("corp_code")
find_corp_num('삼성전자')
corp_code = []
for country in root.iter("list"):
corp_code.append(country.findtext("corp_code"))
print(corp_code)
len(corp_code)
###Output
_____no_output_____
###Markdown
Company overview API: test querying a single company
###Code
import requests
import pandas as pd
from urllib.request import urlopen
from time import sleep
import time
url = "https://opendart.fss.or.kr/api/company.json?crtfc_key=" + crtfc_key_2 + "&corp_code=00126380"
response = requests.get(url)
print(response.text)
#urldata = response.json()
#corp_df = pd.DataFrame(urldata, index=[0])
print(response)
# a 200 response means the request succeeded; 400 means it did not
###Output
_____no_output_____
###Markdown
Query every company's overview, build a DataFrame, and save it. Loading the corporate codes (load or read)
###Code
### import the xml module
import xml.etree.ElementTree as ET
### read the xml file
tree = ET.parse('CORPCODE.xml')
root = tree.getroot()
###Output
_____no_output_____
###Markdown
Building the list of corporate codes
###Code
### build the list of corporate codes
code_list=[]
for country in root.iter("corp_code"):
code_list.append(country.text)
print(code_list[0:3])
###Output
['00434003', '00434456', '00430964']
###Markdown
Writing the company-overview request function
###Code
### enter your own API keys
crtfc_key_1 = "ab851319407812ac10d593dcb2fef51d0c944b66"
crtfc_key_2 = "f494a3020128060351c817cabf5b1b4a851e0737"
crtfc_key_3 = "5a79f1c6d673e00b0614c74e542b390ddd0b3542"
crtfc_key_4 = "614f7fe579f14daf0ecb6aa38652677ab0576191"
crtfc_key_5 = "d4857d7491b5c47d494584350731c06dc7b66882"
crtfc_key_6 = "bb7d92cbd78b6b67d156bef779a02e74bce661c4"
crtfc_key_7 = "a30987f4b05d0371f9f3d84c898efc698bbbc1e2"
crtfc_key_8 = "012be8cea16c0d414c5273f7422280f3f9905adf"
crtfc_key_9 = "ba2c91d2cc13f79700da1e0ed1542d2a37f18068"
crtfc_key_10 = "02c190bf4a27db86defd39afd0c48183dbff1d2c"
crtfc_key_11 = "a263608b5e4e9040a9de7ad1f0ff62808b29503d"
crtfc_key_12 = "c42daf7eaba42a0ed786b61e59240c58caa1d5d6"
### company overview request function
def load_data(corp_code):
    ### company overview request url
url = "https://opendart.fss.or.kr/api/company.json?crtfc_key=" +crtfc_key_1+"&corp_code=" +corp_code
    ### HTTP request
r = requests.get(url)
    ### the response is JSON, so parse it with .json()
company_data = r.json()
    ### return the company overview data for the requested company
return company_data
print(r)
###Output
_____no_output_____
###Markdown
Collecting the data in a loop
###Code
### create the lists that will hold the data
company_list_1 = [] # company_list_1 0 ~ 9,000 (ㅇ)
company_list_2 = [] # company_list_2 9,000 ~ 18,000 (ㅇ)
company_list_4 = [] # company_list_4 18,000 ~ 27,000 (ㅇ)
company_list_5 = [] # company_list_5 27,000 ~ 36,000 (ㅇ)
company_list_6 = [] # company_list_6 36,000 ~ 45,000 (ㅇ)
company_list_7 = [] # company_list_7 45,000 ~ 54,000 (ㅇ)
company_list_8 = [] # company_list_8 54,000 ~ 63,000 (ㅇ)
company_list_9 = [] # company_list_9 63,000 ~ 72,000 (ㅇ)
company_list_10 = [] # company_list_10 72,000 ~ 80,939 (ㅇ)
company_list_4 = []
### run the loop
for corp_code in code_list[18000:27000]:
company_dict = load_data(corp_code)
    ### change the target list for each range
company_list_4.append(company_dict)
    # sleep briefly between requests to throttle traffic
time.sleep(0.1)
print(company_list_3[0])
len(company_list_1)
len(company_list_2)
len(company_list_4)
print(company_list_4[0])
len(company_list_5)
len(company_list_6)
len(company_list_7)
print(company_list_7[8999])
len(company_list_8)
print(company_list_8[8999])
len(company_list_9)
print(company_list_9[8999])
len(company_list_10)
print(company_list_10[8938])
###Output
{'status': '000', 'message': '정상', 'corp_code': '00585963', 'corp_name': '지앤지인베스트 주식회사', 'corp_name_eng': 'GnG Invest co., Ltd.', 'stock_name': '지앤지인베스트', 'stock_code': '', 'ceo_nm': '선경래', 'corp_cls': 'E', 'jurir_no': '1101113315002', 'bizr_no': '2118771445', 'adres': '서울특별시 강남구 남부순환로 2736 (도곡동)', 'hm_url': '', 'ir_url': '', 'phn_no': '02-3460-4821', 'fax_no': '02-3460-4829', 'induty_code': '68112', 'est_dt': '20050930', 'acc_mt': '03'}
###Markdown
Saving the data
###Code
### import the pickle module
import pickle
### save the list with pickle
with open('company_4.txt','wb') as f:
pickle.dump(company_list_4,f)
###Output
_____no_output_____
###Markdown
Merging the company overview data
###Code
### merge the company overview lists
# list that will hold every company overview
total_company_list=[]
# merge them with a for loop
for num in range(1,10):
file_name = 'company_'+str(num)+'.txt'
with open(file_name,'rb') as f:
data=pickle.load(f)
total_company_list=total_company_list + data
# save the merged list
with open('total_company_list.txt','wb') as f:
pickle.dump(total_company_list,f)
total_company_list = []
total_company_list += company_list_1
total_company_list += company_list_2
total_company_list += company_list_4
total_company_list += company_list_5
total_company_list += company_list_6
total_company_list += company_list_7
total_company_list += company_list_8
total_company_list += company_list_9
total_company_list += company_list_10
len(total_company_list)
###Output
_____no_output_____
###Markdown
Building a DataFrame and exporting the merged list to Excel
###Code
data = pd.DataFrame(total_company_list)
data.to_excel('기업개황.xlsx')
type(total_company_list[0])
###Output
_____no_output_____
###Markdown
------------------------------------------- Crawling the financial statements
###Code
url = "https://opendart.fss.or.kr/api/fnlttSinglAcntAll.json?crtfc_key=" + crtfc_key_1 + "&corp_code=00126380&bsns_year=2017&reprt_code=11011&fs_div=OFS"
print(code_list[30000])
r = requests.get(url)
finance_data = r.json()
print(load_finance_data('00126380'))
len(finance_list_1)
### enter your own API keys
crtfc_key_1 = "ab851319407812ac10d593dcb2fef51d0c944b66"
crtfc_key_2 = "f494a3020128060351c817cabf5b1b4a851e0737"
crtfc_key_3 = "5a79f1c6d673e00b0614c74e542b390ddd0b3542"
crtfc_key_4 = "614f7fe579f14daf0ecb6aa38652677ab0576191"
crtfc_key_5 = "d4857d7491b5c47d494584350731c06dc7b66882"
crtfc_key_6 = "bb7d92cbd78b6b67d156bef779a02e74bce661c4"
crtfc_key_7 = "a30987f4b05d0371f9f3d84c898efc698bbbc1e2"
crtfc_key_8 = "012be8cea16c0d414c5273f7422280f3f9905adf"
crtfc_key_9 = "ba2c91d2cc13f79700da1e0ed1542d2a37f18068"
crtfc_key_10 = "02c190bf4a27db86defd39afd0c48183dbff1d2c"
crtfc_key_11 = "a263608b5e4e9040a9de7ad1f0ff62808b29503d"
crtfc_key_12 = "c42daf7eaba42a0ed786b61e59240c58caa1d5d6"
### financial statement request function
def load_finance_data(corp_code):
    ### financial statement request url
url = "https://opendart.fss.or.kr/api/fnlttSinglAcntAll.json?crtfc_key=" +crtfc_key_6+"&corp_code=" +corp_code + "&bsns_year=2019&reprt_code=11011&fs_div=OFS"
    ### HTTP request
r = requests.get(url)
    ### the response is JSON, so parse it with .json()
finance_data = r.json()
    ### return the financial statement data for the requested company
return finance_data
###Output
_____no_output_____
###Markdown
Collecting the data in a loop
###Code
### create the lists that will hold the data
finance_list_18_1 = [] # 0 ~ 9,000 (ㅇ)
finance_list_18_2 = [] # 9,000 ~ 18,000
finance_list_18_3 = []# 18,000 ~ 27,000
finance_list_18_4 = []# 27,000~ 36,000
finance_list_18_5 = [] # 36,000 ~ 45,000
finance_list_18_6 = [] # 45,000 ~ 54,000
finance_list_18_7 = [] # 54,000 ~ 63,000
finance_list_18_8 = [] # 63,000 ~ 72,000
finance_list_18_9 = []# 72,000 ~ 80,939
finance_list_18_10 = []
finance_list_18_11 = []
### create the lists that will hold the data
finance_list_19_1 = [] # 0 ~ 9,000 (ㅇ)
finance_list_19_2 = [] # 9,000 ~ 18,000
finance_list_19_3 = []# 18,000 ~ 27,000
finance_list_19_4 = []# 27,000~ 36,000
finance_list_19_5 = [] # 36,000 ~ 45,000
finance_list_19_6 = [] # 45,000 ~ 54,000
finance_list_19_7 = [] # 54,000 ~ 63,000
finance_list_19_8 = [] # 63,000 ~ 72,000
finance_list_19_9 = []# 72,000 ~ 80,939
finance_list_19_10 = []
finance_list_19_11 = []
### run the loop
for corp_code in code_list[72000:]:
company_dict = load_finance_data(corp_code)
    ### change the target list for each range
if(company_dict["status"] != "013") :
finance_list_19_9.append(company_dict)
    # sleep briefly between requests to throttle traffic
time.sleep(0.1)
# year 2017
total_finance_list = []
total_finance_list += finance_list_2 # 1 item
total_finance_list += finance_list_5 # 3 items
total_finance_list += finance_list_6 # 608 items
total_finance_list += finance_list_7 # 327 items
total_finance_list += finance_list_9 # 373 items
total_finance_list += finance_list_10 # 574 items
total_finance_list_17 = total_finance_list
len(total_finance_list_17)
# year 2018
total_finance_list_18 = []
total_finance_list_18 += finance_list_18_2 # 1 item
total_finance_list_18 += finance_list_18_5 # 3 items
total_finance_list_18 += finance_list_18_6 # 624 items
total_finance_list_18 += finance_list_18_7 # 347 items
total_finance_list_18 += finance_list_18_8 # 402 items
total_finance_list_18 += finance_list_18_9 # 601 items
len(total_finance_list_18)
# year 2019
total_finance_list_19 = []
total_finance_list_19 += finance_list_19_2 # 1 item
total_finance_list_19 += finance_list_19_5 # 4 items
total_finance_list_19 += finance_list_19_6 # 639 items
total_finance_list_19 += finance_list_19_7 # 371 items
total_finance_list_19 += finance_list_19_8 # 424 items
total_finance_list_19 += finance_list_19_9 # 622 items
finance_list_19_9[621]
# merge the three years
total_finance_all = []
total_finance_all += total_finance_list
total_finance_all += total_finance_list_18
total_finance_all += total_finance_list_19
len(total_finance_all)
# filter out companies that returned no results
while(len(finance_list_7) > 327) :
#cnt = 0
for company in finance_list_7 :
if(company["status"] != '000'):
finance_list_7.remove(company)
#cnt += 1
#print(cnt)
###Output
_____no_output_____
###Markdown
Reshaping the data to fit the modelling schema. Target columns: corp_code, thstrm_nm (report round), registration date; income statement: revenue, operating profit (loss), net income (loss); balance sheet: current assets, non-current assets, total assets, current liabilities, non-current liabilities, total liabilities, capital stock, retained earnings (deficit), total equity; cash flow statement: cash flows from operating, investing, and financing activities
###Code
company = finance_list_7[2] # pull one company (a dict) out of the list
companyFin = company['list'] # take the 'list' key, which holds the financial statement entries
print(companyFin[0]) # show the first dict under the 'list' key
# the DB is split into income statement, balance sheet and cash flow statement tables, so build a list and a dict for each
cash_flow_list = []
fs_status_list = []
icm_stmt_list = []
cash_flow_dict = {}
fs_status_dict = {}
icm_stmt_dict = {}
###Output
_____no_output_____
###Markdown
Income statement
###Code
for org_dict in total_finance_all:
finc_list = org_dict["list"]
    # add corp code, report round and registration date keys to each dict
icm_stmt_dict["기업코드"] = (finc_list[0])["corp_code"]
icm_stmt_dict["회차"] = (finc_list[0])["thstrm_nm"]
icm_stmt_dict["등록일"] = ((finc_list[0])["rcept_no"])[0:8]
for finc_dict in finc_list:
if(finc_dict["sj_nm"] == '포괄손익계산서'):
            # keep only the accounts we need
if(finc_dict["account_nm"] == '수익(매출액)') or (finc_dict["account_nm"] == '영업이익(손실)') or (finc_dict["account_nm"] == '당기순이익(손실)'):
column = finc_dict["account_nm"]
icm_stmt_dict[column] = finc_dict["thstrm_amount"]
    icm_stmt_list.append(icm_stmt_dict) # one company's income statement dict is complete, append it to the list
    icm_stmt_dict = {} # reset the dict
icm_stmt_data = pd.DataFrame(icm_stmt_list)
icm_stmt_data.head(10)
icm_stmt_data.sort_values('기업코드')
icm_stmt_data.to_csv('손익계산서.csv', encoding='utf-8-sig')
###Output
_____no_output_____
###Markdown
Cash flow statement
###Code
for org_dict in total_finance_all:
finc_list = org_dict["list"]
cash_flow_dict["기업코드"] = (finc_list[0])["corp_code"]
cash_flow_dict["회차"] = (finc_list[0])["thstrm_nm"]
cash_flow_dict["등록일"] = ((finc_list[0])["rcept_no"])[0:8]
for finc_dict in finc_list:
if(finc_dict["sj_nm"] == '현금흐름표'):
if(finc_dict["account_nm"] == '영업활동현금흐름') or (finc_dict["account_nm"] == '투자활동현금흐름') or (finc_dict["account_nm"] == '재무활동현금흐름'):
column = finc_dict["account_nm"]
cash_flow_dict[column] = finc_dict["thstrm_amount"]
    cash_flow_list.append(cash_flow_dict) # one company's cash flow dict is complete, append it to the list
    cash_flow_dict = {} # reset the dict
cash_flow_data = pd.DataFrame(cash_flow_list)
cash_flow_data.head(10)
cash_flow_data.sort_values('기업코드')
cash_flow_data.to_csv('현금흐름표.csv', encoding='utf-8-sig')
###Output
_____no_output_____
###Markdown
Balance sheet
###Code
for org_dict in total_finance_all:
finc_list = org_dict["list"]
fs_status_dict["기업코드"] = (finc_list[0])["corp_code"]
fs_status_dict["회차"] = (finc_list[0])["thstrm_nm"]
fs_status_dict["등록일"] = ((finc_list[0])["rcept_no"])[0:8]
for finc_dict in finc_list:
if(finc_dict["sj_nm"] == '재무상태표'):
if(finc_dict["account_nm"] == '유동자산') or (finc_dict["account_nm"] == '비유동자산') or (finc_dict["account_nm"] == '자산총계') or (finc_dict["account_nm"] == '유동부채') or (finc_dict["account_nm"] == '비유동부채') or (finc_dict["account_nm"] == '부채총계') or (finc_dict["account_nm"] == '자본금') or (finc_dict["account_nm"] == '이익잉여금(결손금)') or (finc_dict["account_nm"] == '자본총계'):
column = finc_dict["account_nm"]
fs_status_dict[column] = finc_dict["thstrm_amount"]
    fs_status_list.append(fs_status_dict) # one company's balance sheet dict is complete, append it to the list
    fs_status_dict = {} # reset the dict
fs_status_data = pd.DataFrame(fs_status_list)
fs_status_data.head(10)
len(finance_list_7)
fs_status_data.to_csv('재무상태표.csv', encoding='utf-8-sig')
fs_status_data.sort_values('기업코드').head(20)
###Output
_____no_output_____
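###Markdown
The three extraction loops above share the same shape; a compact alternative is sketched below (illustrative only, `extract_statement` is a made-up name). It assumes the same `total_finance_all` structure and the same account-name filters used above.
###Code
# hypothetical generic extractor covering all three statements
def extract_statement(finance_reports, statement_name, account_names):
    rows = []
    for report in finance_reports:
        entries = report["list"]
        row = {"기업코드": entries[0]["corp_code"],
               "회차": entries[0]["thstrm_nm"],
               "등록일": entries[0]["rcept_no"][0:8]}
        for entry in entries:
            if entry["sj_nm"] == statement_name and entry["account_nm"] in account_names:
                row[entry["account_nm"]] = entry["thstrm_amount"]
        rows.append(row)
    return pd.DataFrame(rows)

# e.g. cash_flow_data could also be built as:
# extract_statement(total_finance_all, '현금흐름표', {'영업활동현금흐름', '투자활동현금흐름', '재무활동현금흐름'})
###Output
_____no_output_____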
###Markdown
-------------------------------------------------------- Loading and cleaning the company overview data
###Code
import pandas as pd
corpData = pd.read_excel('/Users/linakim/Desktop/최종프로젝트/기업개황.xlsx', '정제후')
corpData.head(3)
###Output
_____no_output_____ |
Data_Science/google_trends/02_google_trends_to_google_data_studio.ipynb | ###Markdown
Google Trends to Google Data Studio. 1. Get the result from `Google Trends` 2. Use [gspread](https://github.com/burnash/gspread) to transform the data of `Google Trends` to `Google Sheets` 3. Import the file of `Google Sheets` into `Google Data Studio`. Get the result from `Google Trends`
###Code
import pandas as pd
from pytrends.request import TrendReq
# Create an instance of TrendReq
pytrend = TrendReq()
# Build a payload
pytrend.build_payload(kw_list=['Coronavirus'], timeframe='2020-01-01 2020-06-04')
# Requset data: Interest Over Time
covid_19_interest_over_time_df = pytrend.interest_over_time()
covid_19_interest_over_time_df.tail()
###Output
_____no_output_____
###Markdown
Plot the result
###Code
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
plt.style.use('fivethirtyeight')
# Chinese font settings for matplotlib
plt.rcParams['font.sans-serif'] = ['Noto Sans Mono CJK TC', 'sans-serif']
plt.rcParams['axes.unicode_minus'] = False
%matplotlib inline
axes = covid_19_interest_over_time_df.plot.line(
figsize=(20,5),
title='The Search Trends of COVID-19 in 2020')
axes.set_yticks([0, 25, 50, 75, 100])
axes.set_xlabel('Date')
axes.set_ylabel('Trends Index')
axes.tick_params(axis='both', which='major', labelsize=13)
###Output
_____no_output_____
###Markdown
Using `gspread` to transform the data of `Google Trends` to `Google Sheets`. Reference: [Access spreadsheets via Google Sheets API.](https://gspread.readthedocs.io/en/latest/oauth2.html) Install the required packages: [gspread](https://github.com/burnash/gspread) and [oauth2client](https://github.com/googleapis/oauth2client)
###Code
!pip3 install gspread oauth2client
###Output
_____no_output_____
###Markdown
Create an account and enable the APIs. Because we want to access `Google Sheets`, we have to open up the permissions of the existing Google account (or create a new one).

1. Go to [Google Cloud Platform](https://console.developers.google.com/?hl=zh-tw) and create a `Project`: New project -> project name: `google-sheets` -> Create
2. Enable the APIs for that `Project`: Enable APIs and services -> search for `Drive API` -> Enable -> search for `Sheets API(Google Sheets)` -> Enable
3. Create credentials: back on the console home page choose Credentials -> Create credentials -> Service account -> service account details: `Google Trends to Google Sheets` -> Create -> Continue -> Create key -> choose `JSON` -> Create -> Done
4. Rename the downloaded `JSON` file to `auth.json`

Create a spreadsheet. There are two ways to create and use a spreadsheet with `gspread`:
1. Create the spreadsheet in `Google Drive` or [Google Sheets](https://sheets.google.com), then share it with the `client_email` account from the JSON file you just downloaded (`[email protected]`) and give it edit permission, so the script can access it.
2. Create the spreadsheet through `gspread`'s `create()`:
```python
sh = gc.create('A new spreadsheet')
```
Note:
```
If you're using a service account, this new spreadsheet will be visible only to your script's account. To be able to access newly created spreadsheet from Google Sheets with your own Google account you must share it with your email. See how to share a spreadsheet in the section below.
```
- Sharing a Spreadsheet:
```python
sh.share('your_email', perm_type='user', role='writer')
```
We use the second method below! Connect to `Google Sheets`
###Code
import gspread
from google.oauth2.service_account import Credentials
def google_oauth2_service(auth_path, scopes):
credentials = Credentials.from_service_account_file(
auth_path,
scopes=scopes
)
return gspread.authorize(credentials)
scopes = [
'https://www.googleapis.com/auth/spreadsheets',
'https://www.googleapis.com/auth/drive'
]
auth_path = 'google_sheets_auth.json'
gc = google_oauth2_service(auth_path, scopes)
###Output
_____no_output_____
###Markdown
Connect and share a spreadsheet
###Code
# Create a spreadsheet
sh = gc.create("COVID-19 Search Trends")
# Share a spreadsheet
sh.share('[email protected]', perm_type='user', role='writer')
###Output
_____no_output_____
###Markdown
Select a worksheet. Select a worksheet by index (worksheet indexes start from zero):
```python
worksheet = sh.get_worksheet(0)
```
Or by title:
```python
worksheet = sh.worksheet("January")
```
Or the most common case, Sheet1:
```python
worksheet = sh.sheet1
```
To get a list of all worksheets:
```python
worksheet_list = sh.worksheets()
```
###Code
worksheet = gc.open("COVID-19 Search Trends").sheet1
###Output
_____no_output_____
###Markdown
Update value of cell: send the `DataFrame` into the `sheet`. Preprocess the DataFrame first: 1. `reset_index()`: because we need the date column. 2. Convert datetime to string, otherwise the upload fails with ``` Object of type 'Timestamp' is not JSON serializable ```
###Code
covid_19_interest_over_time_df
covid_19_interest_over_time_df.index
###Output
_____no_output_____
###Markdown
1. Reset index
###Code
covid_19_interest_over_time_df.reset_index(inplace=True)
covid_19_interest_over_time_df
###Output
_____no_output_____
###Markdown
2. Convert datetime to string
###Code
def convert_datetime_to_string(df):
df['date'] = df['date'].dt.strftime('%Y-%m-%d %H:%M:%S')
convert_datetime_to_string(covid_19_interest_over_time_df)
covid_19_interest_over_time_df.head()
###Output
_____no_output_____
###Markdown
Send the `DataFrame` into the `Sheet`
###Code
def iter_pd(df):
for val in df.columns:
yield val
for row in df.to_numpy():
for val in row:
if pd.isna(val):
yield ""
else:
yield val
def pandas_to_sheets(pandas_df, sheet, clear=True):
"""Update all values in a worksheet to match a pandas dataframe"""
if clear:
sheet.clear()
(row, col) = pandas_df.shape
cells = sheet.range("A1:{}".format(gspread.utils.rowcol_to_a1(row+1, col)))
for cell, val in zip(cells, iter_pd(pandas_df)):
cell.value = val
sheet.update_cells(cells)
pandas_to_sheets(covid_19_interest_over_time_df, worksheet)
###Output
_____no_output_____ |
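###Markdown
As a closing note (an assumption on my part, not used in this notebook): if the third-party `gspread-dataframe` package is available, the cell-by-cell upload above can be replaced with a single call.
###Code
# alternative upload using gspread-dataframe (install with: pip3 install gspread-dataframe)
from gspread_dataframe import set_with_dataframe
set_with_dataframe(worksheet, covid_19_interest_over_time_df)
###Output
_____no_output_____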
assignments/2019/assignment1/softmax.ipynb | ###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
_____no_output_____
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
_____no_output_____
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ *Fill this in*
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
_____no_output_____
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$$\color{blue}{\textit Your Explanation:}$
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
loss: 2.377603
sanity check: 2.302585
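###Markdown
For reference, a hedged sketch of what a fully-vectorized softmax loss can look like (illustrative only, not the graded solution in cs231n/classifiers/softmax.py):
###Code
def softmax_loss_vectorized_sketch(W, X, y, reg):
    # scores: (N, C); shift by the row max for numerical stability
    N = X.shape[0]
    scores = X.dot(W)
    scores -= scores.max(axis=1, keepdims=True)
    exp_scores = np.exp(scores)
    probs = exp_scores / exp_scores.sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)
    # gradient of the cross-entropy plus L2 regularization
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1
    dW = X.T.dot(dscores) / N + 2 * reg * W
    return loss, dW
###Output
_____no_output_____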
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ With random small-weight initialization the scores of all classes are roughly equal, so each of the 10 classes gets a predicted probability of about 1/10 = 0.1, and the cross-entropy loss is therefore close to -log(0.1).
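As a quick check of that reasoning (added note): with $W \approx 0$ all class scores are nearly equal, so the softmax probability of the correct class is about $1/10$ and

$$L_i \approx -\log\frac{1}{10} = \log 10 \approx 2.3026,$$

which matches the sanity check printed above.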
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
################################################################################
# #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifer in best_softmax. #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
sfm=Softmax()
sfm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500,
batch_size=200, verbose=False)
y_train_pred=sfm.predict(X_train)
trn_acc=np.mean(y_train_pred==y_train)
y_val_pred=sfm.predict(X_val)
val_acc=np.mean(y_val_pred==y_val)
if val_acc>best_val:
best_val=val_acc
best_softmax=sfm
results[(lr,reg)]=(trn_acc,val_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
softmax on raw pixels final test set accuracy: 0.340000
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$ True$\color{blue}{\textit Your Explanation:}$ The SVM loss has a margin: if the new datapoint satisfies all margins, its per-datapoint hinge loss is 0 and the total SVM loss stays unchanged; the softmax loss, however, is strictly positive for every datapoint, so adding a point always changes it.
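In symbols (added note): the hinge loss of the new point can vanish, $L_i^{\mathrm{SVM}} = \sum_{j \neq y_i} \max(0,\, s_j - s_{y_i} + \Delta) = 0$ whenever all margins are satisfied, whereas the softmax loss $L_i^{\mathrm{softmax}} = -\log\frac{e^{s_{y_i}}}{\sum_j e^{s_j}} > 0$ for any finite scores.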
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
_____no_output_____
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
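A minimal sketch of what a vectorized softmax loss with L2 regularization can look like (assuming the `(W, X, y, reg)` interface used by the calls below; this is an illustration, not the graded implementation in `softmax.py`):
###Code
import numpy as np
def softmax_loss_sketch(W, X, y, reg):
    # W: (D, C) weights, X: (N, D) data, y: (N,) integer labels, reg: L2 strength
    N = X.shape[0]
    scores = X.dot(W)                               # (N, C) class scores
    scores -= scores.max(axis=1, keepdims=True)     # shift for numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)       # row-wise softmax
    loss = -np.log(probs[np.arange(N), y]).mean() + reg * np.sum(W * W)
    dscores = probs.copy()
    dscores[np.arange(N), y] -= 1                   # gradient of cross-entropy w.r.t. scores
    dW = X.T.dot(dscores) / N + 2 * reg * W
    return loss, dW
###Output
_____no_output_____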
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
_____no_output_____
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ *Fill this in*
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
pass
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
_____no_output_____
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$$\color{blue}{\textit Your Explanation:}$
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
loss: 2.418137
sanity check: 2.302585
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ Because we have 10 classes, the initial (roughly uniform) probability assigned to each class is 0.1, and the loss is -log(probability of the correct class) = -log(0.1).
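A quick check of this answer (a sketch; the all-zero score vector below just stands in for the near-equal scores produced by the small random weights above):
###Code
import numpy as np
scores = np.zeros(10)                          # ten (roughly) equal class scores
probs = np.exp(scores) / np.exp(scores).sum()  # uniform softmax: 0.1 per class
print(-np.log(probs[0]))                       # ~2.3026 = -log(0.1)
###Output
_____no_output_____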
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 2e-6, 2.5e-6]
regularization_strengths = [1e3, 1e4, 2e4, 2.5e4, 3e4, 3.5e4]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
grid_search_params = [(l, r) for l in learning_rates for r in regularization_strengths]
for l, r in grid_search_params:
model = Softmax()
model.train(X_train, y_train, learning_rate=l, reg=r, num_iters=500)
y_pred_train = model.predict(X_train)
y_pred_dev = model.predict(X_dev)
train_accuracy = np.mean(y_pred_train == y_train)
val_accuracy = np.mean(y_pred_dev == y_dev)
results[(l, r)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_softmax = model
best_val = val_accuracy
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
softmax on raw pixels final test set accuracy: 0.278000
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$ True.$\color{blue}{\textit Your Explanation:}$ Assume we add a new datapoint with scores [10, 8, 7], that the SVM margin is 2, and that the correct class is the one scoring 10. The SVM loss of this datapoint is 0 because the margin is satisfied, i.e., max(0, 8 + 2 - 10) + max(0, 7 + 2 - 10) = 0, so the total loss remains unchanged. This is not the case for the Softmax classifier, whose loss will increase: -log(softmax(10)) = -log(0.84) ≈ 0.17. This happens because the SVM loss is a local objective: it does not care about the exact individual scores, only that the margin is satisfied. The Softmax classifier, on the other hand, takes all individual scores into account when computing the loss.
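A small numeric check of the example above (a sketch; the scores [10, 8, 7] and the margin of 2 are the hypothetical values from the explanation):
###Code
import numpy as np
scores = np.array([10.0, 8.0, 7.0])  # correct class is the first one
delta = 2.0
svm_loss = max(0, scores[1] - scores[0] + delta) + max(0, scores[2] - scores[0] + delta)
softmax_loss = -np.log(np.exp(scores[0]) / np.exp(scores).sum())
print(svm_loss)      # 0.0 -> the total SVM loss is unchanged
print(softmax_loss)  # ~0.17 -> the Softmax loss always grows
###Output
_____no_output_____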
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading extenrnal modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
loss: 2.304681
sanity check: 2.302585
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ Initially all classes have (roughly) the same score $s$, so the loss is:$$L = -\log \left(\frac{e^{s}}{10 e^{s}}\right) = -\log 0.1$$
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
#learning_rates = [1e-7, 5e-7]
#regularization_strengths = [2.5e4, 5e4]
learning_rates = [ 5e-7, 2.000000e-06, 1e-6]
regularization_strengths = [ 3.25e4, 3.5e4, 1.000000e+03, 2e+3]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
combos = [(rate, reg) for rate in learning_rates for reg in regularization_strengths]
for lr, rv in combos:
softmax = Softmax()
tic = time.time()
softmax.train(X_train, y_train, lr, reg= rv,
num_iters=1600)
y_train_pred = softmax.predict(X_train)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = softmax.predict(X_val)
val_acc = np.mean(y_val == y_val_pred)
print(lr,rv, train_acc, val_acc)
results[(lr, rv)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_softmax = softmax
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
softmax on raw pixels final test set accuracy: 0.386000
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$False.$\color{blue}{\textit Your Explanation:}$If a datapoint produces a softmax output of (1, 0, ...), it will have 0 loss.
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____
###Markdown
Softmax exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*This exercise is analogous to the SVM exercise. You will:- implement a fully-vectorized **loss function** for the Softmax classifier- implement the fully-vectorized expression for its **analytic gradient**- **check your implementation** with numerical gradient- use a validation set to **tune the learning rate and regularization** strength- **optimize** the loss function with **SGD**- **visualize** the final learned weights
###Code
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the linear classifier. These are the same steps as we used for the
SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
# Cleaning up variables to prevent loading data multiple times (which may cause memory issue)
try:
del X_train, y_train
del X_test, y_test
print('Clear previously loaded data.')
except:
pass
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# subsample the data
mask = list(range(num_training, num_training + num_validation))
X_val = X_train[mask]
y_val = y_train[mask]
mask = list(range(num_training))
X_train = X_train[mask]
y_train = y_train[mask]
mask = list(range(num_test))
X_test = X_test[mask]
y_test = y_test[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
# Preprocessing: reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_val = np.reshape(X_val, (X_val.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis = 0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# add bias dimension and transform into columns
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
###Output
Train data shape: (49000, 3073)
Train labels shape: (49000,)
Validation data shape: (1000, 3073)
Validation labels shape: (1000,)
Test data shape: (1000, 3073)
Test labels shape: (1000,)
dev data shape: (500, 3073)
dev labels shape: (500,)
###Markdown
Softmax ClassifierYour code for this section will all be written inside **cs231n/classifiers/softmax.py**.
###Code
# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.
from cs231n.classifiers.softmax import softmax_loss_naive
import time
# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
###Output
loss: 2.336374
sanity check: 2.302585
###Markdown
**Inline Question 1**Why do we expect our loss to be close to -log(0.1)? Explain briefly.**$\color{blue}{\textit Your Answer:}$ Because with random initial weights the scores of the ten classes are roughly equal, each class gets probability about 0.1, and the loss is approximately -log(0.1).
###Code
# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)
# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
################################################################################
# TODO: #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
for reg in regularization_strengths:
softmax = Softmax()
softmax.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500, verbose=False)
y_train_pred = softmax.predict(X_train)
train_acc = np.mean(y_train == y_train_pred)
y_val_pred = softmax.predict(X_val)
val_acc = np.mean(y_val == y_val_pred)
results[(lr, reg)] = (train_acc, val_acc)
if val_acc > best_val:
best_val = val_acc
best_softmax = softmax
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
# Print out results.
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
###Output
softmax on raw pixels final test set accuracy: 0.347000
###Markdown
**Inline Question 2** - *True or False*Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.$\color{blue}{\textit Your Answer:}$ True$\color{blue}{\textit Your Explanation:}$ The log loss can never be exactly zero, so any new datapoint changes the total loss. The SVM loss, in contrast, can be exactly zero, leaving the total loss unchanged.
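A tiny illustration of this point (a sketch with made-up scores, not part of the assignment):
###Code
import numpy as np
def softmax_ce(scores, correct=0):
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return -np.log(p[correct])
# Even a very confident (finite) score vector gives a strictly positive log loss,
# while the corresponding hinge terms are exactly zero.
print(softmax_ce(np.array([30.0, 0.0, 0.0])))  # ~1.9e-13: tiny but > 0
print(max(0.0, 0.0 - 30.0 + 1.0))              # 0.0
###Output
_____no_output_____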
###Code
# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
# Rescale the weights to be between 0 and 255
wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wimg.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
###Output
_____no_output_____ |
notebooks/multi_gpu_training_torch.ipynb | ###Markdown
Train a CNN on multiple GPUs using data parallelism.Based on sec 12.5 of http://d2l.ai/chapter_computational-performance/multiple-gpus.html.Note: in Colab we only have access to 1 GPU, so the code below just simulates the effects of multiple GPUs and will not run faster. You may not see a speedup even on a machine which really does have multiple GPUs, because the model and data are too small. But the example should still illustrate the key ideas.
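If you want to confirm how many GPUs are actually visible before running the code, a quick optional check (not from the original notebook) is:
###Code
import torch
print(torch.cuda.is_available(), torch.cuda.device_count())  # on Colab this typically prints: True 1
###Output
_____no_output_____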
###Code
import matplotlib.pyplot as plt
import numpy as np
import math
import torch
from torch import nn
from torch.nn import functional as F
!mkdir figures # for saving plots
!wget https://raw.githubusercontent.com/d2l-ai/d2l-en/master/d2l/torch.py -q -O d2l.py
import d2l
###Output
_____no_output_____
###Markdown
ModelWe use a slightly modified version of the LeNet CNN.
###Code
# Initialize model parameters
scale = 0.01
torch.random.manual_seed(0)
W1 = torch.randn(size=(20, 1, 3, 3)) * scale
b1 = torch.zeros(20)
W2 = torch.randn(size=(50, 20, 5, 5)) * scale
b2 = torch.zeros(50)
W3 = torch.randn(size=(800, 128)) * scale
b3 = torch.zeros(128)
W4 = torch.randn(size=(128, 10)) * scale
b4 = torch.zeros(10)
params = [W1, b1, W2, b2, W3, b3, W4, b4]
# Define the model
def lenet(X, params):
h1_conv = F.conv2d(input=X, weight=params[0], bias=params[1])
h1_activation = F.relu(h1_conv)
h1 = F.avg_pool2d(input=h1_activation, kernel_size=(2, 2), stride=(2, 2))
h2_conv = F.conv2d(input=h1, weight=params[2], bias=params[3])
h2_activation = F.relu(h2_conv)
h2 = F.avg_pool2d(input=h2_activation, kernel_size=(2, 2), stride=(2, 2))
h2 = h2.reshape(h2.shape[0], -1)
h3_linear = torch.mm(h2, params[4]) + params[5]
h3 = F.relu(h3_linear)
y_hat = torch.mm(h3, params[6]) + params[7]
return y_hat
# Cross-entropy loss function
loss = nn.CrossEntropyLoss(reduction='none')
###Output
_____no_output_____
###Markdown
Copying parameters across devices
###Code
def get_params(params, device):
new_params = [p.clone().to(device) for p in params]
for p in new_params:
p.requires_grad_()
return new_params
# Copy the params to GPU0
gpu0 = torch.device('cuda:0')
new_params = get_params(params, gpu0)
print('b1 weight:', new_params[1])
print('b1 grad:', new_params[1].grad)
# Copy the params to GPU1
gpu1 = torch.device('cuda:0') # torch.device('cuda:1')
new_params = get_params(params, gpu1)
print('b1 weight:', new_params[1])
print('b1 grad:', new_params[1].grad)
###Output
b1 weight: tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
device='cuda:0', requires_grad=True)
b1 grad: None
###Markdown
All-reduce will copy data (eg gradients) from all devices to device 0, add them, and then broadcast the result back to each device.
###Code
def allreduce(data):
for i in range(1, len(data)):
data[0][:] += data[i].to(data[0].device)
for i in range(1, len(data)):
data[i] = data[0].to(data[i].device)
data = [torch.ones((1, 2), device=d2l.try_gpu(i)) * (i + 1) for i in range(2)]
print('before allreduce:\n', data[0], '\n', data[1])
allreduce(data)
print('after allreduce:\n', data[0], '\n', data[1])
###Output
before allreduce:
tensor([[1., 1.]], device='cuda:0')
tensor([[2., 2.]])
after allreduce:
tensor([[3., 3.]], device='cuda:0')
tensor([[3., 3.]])
###Markdown
Distribute data across GPUs
###Code
data = torch.arange(20).reshape(4, 5)
#devices = [torch.device('cuda:0'), torch.device('cuda:1')]
devices = [torch.device('cuda:0'), torch.device('cuda:0')]
split = nn.parallel.scatter(data, devices)
print('input :', data)
print('load into', devices)
print('output:', split)
###Output
input : tensor([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
load into [device(type='cuda', index=0), device(type='cuda', index=0)]
output: (tensor([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]], device='cuda:0'), tensor([[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]], device='cuda:0'))
###Markdown
Split data and labels.
###Code
def split_batch(X, y, devices):
"""Split `X` and `y` into multiple devices."""
assert X.shape[0] == y.shape[0]
return (nn.parallel.scatter(X, devices), nn.parallel.scatter(y, devices))
###Output
_____no_output_____
###Markdown
Training
###Code
def sgd(params, lr, batch_size):
"""Minibatch stochastic gradient descent."""
with torch.no_grad():
for param in params:
param -= lr * param.grad / batch_size
param.grad.zero_()
def train_batch(X, y, device_params, devices, lr):
X_shards, y_shards = split_batch(X, y, devices)
# Loss is calculated separately on each GPU
losses = [
loss(lenet(X_shard, device_W),
y_shard).sum() for X_shard, y_shard, device_W in zip(
X_shards, y_shards, device_params)]
for l in losses: # Back Propagation is performed separately on each GPU
l.backward()
# Sum all gradients from each GPU and broadcast them to all GPUs
with torch.no_grad():
for i in range(len(device_params[0])):
allreduce([device_params[c][i].grad for c in range(len(devices))])
# The model parameters are updated separately on each GPU
ndata = X.shape[0] # gradient is summed over the full minibatch
for param in device_params:
#d2l.sgd(param, lr, ndata)
sgd(param, lr, ndata)
def train(num_gpus, batch_size, lr):
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
devices = [d2l.try_gpu(i) for i in range(num_gpus)]
# Copy model parameters to num_gpus GPUs
device_params = [get_params(params, d) for d in devices]
# num_epochs, times, acces = 10, [], []
num_epochs = 5
animator = d2l.Animator('epoch', 'test acc', xlim=[1, num_epochs])
timer = d2l.Timer()
for epoch in range(num_epochs):
timer.start()
for X, y in train_iter:
# Perform multi-GPU training for a single minibatch
train_batch(X, y, device_params, devices, lr)
torch.cuda.synchronize()
timer.stop()
# Verify the model on GPU 0
animator.add(epoch + 1, (d2l.evaluate_accuracy_gpu(
lambda x: lenet(x, device_params[0]), test_iter, devices[0]),))
print(f'test acc: {animator.Y[0][-1]:.2f}, {timer.avg():.1f} sec/epoch '
f'on {str(devices)}')
#train(num_gpus=2, batch_size=256, lr=0.2)
train(num_gpus=1, batch_size=256, lr=0.2)
###Output
_____no_output_____ |
pdbbind_data.ipynb | ###Markdown
Parse and clean affinity data
###Code
%%bash -s $path --out missing
path=$1
# Save binding affinities to csv file
echo 'pdbid,-logKd/Ki' > affinity_data.csv
cat $path/PDBbind_2016_plain_text_index/index/INDEX_general_PL_data.2016 | while read l1 l2 l3 l4 l5; do
if [[ ! $l1 =~ "#" ]]; then
echo $l1,$l4
fi
done >> affinity_data.csv
# Find affinities without structural data (i.e. with missing directories)
cut -f 1 -d ',' affinity_data.csv | tail -n +2 | while read l;
do if [ ! -e $path/general-set-except-refined/$l ] && [ ! -e $path/refined-set/$l ]; then
echo $l;
fi
done
missing = set(missing.split())
len(missing)
affinity_data = pd.read_csv('affinity_data.csv', comment='#')
affinity_data = affinity_data[~np.in1d(affinity_data['pdbid'], list(missing))]
affinity_data.head()
# Check for NaNs
affinity_data['-logKd/Ki'].isnull().any()
# Separate core, refined, and general sets
core_set = ! grep -v '#' $path/PDBbind_2016_plain_text_index/index/INDEX_core_data.2016 | cut -f 1 -d ' '
core_set = set(core_set)
refined_set = ! grep -v '#' $path/PDBbind_2016_plain_text_index/index/INDEX_refined_data.2016 | cut -f 1 -d ' '
refined_set = set(refined_set)
general_set = set(affinity_data['pdbid'])
assert core_set & refined_set == core_set
assert refined_set & general_set == refined_set
len(general_set), len(refined_set), len(core_set)
# Exclude v 2013 core set - it will be used as another test set
core2013 = ! cat core_pdbbind2013.ids
core2013 = set(core2013)
affinity_data['include'] = True
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(core2013 & (general_set - core_set))), 'include'] = False
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(general_set)), 'set'] = 'general'
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(refined_set)), 'set'] = 'refined'
affinity_data.loc[np.in1d(affinity_data['pdbid'], list(core_set)), 'set'] = 'core'
affinity_data.head()
affinity_data[affinity_data['include']].groupby('set').apply(len).loc[['general', 'refined', 'core']]
# Check affinity distributions
grid = sns.FacetGrid(affinity_data[affinity_data['include']], row='set', row_order=['general', 'refined', 'core'],
size=3, aspect=3)
grid.map(sns.distplot, '-logKd/Ki');
affinity_data[['pdbid']].to_csv('pdb.ids', header=False, index=False)
affinity_data[['pdbid', '-logKd/Ki', 'set']].to_csv('affinity_data_cleaned.csv', index=False)
###Output
_____no_output_____
###Markdown
--- Parse molecules
###Code
dataset_path = {'general': 'general-set-except-refined', 'refined': 'refined-set', 'core': 'refined-set'}
%%bash -s $path
# Prepare pockets with UCSF Chimera - pybel sometimes fails to calculate the charges.
# Even if Chimera fails to calculate several charges (mostly for non-standard residues),
# it returns charges for other residues.
path=$1
for dataset in general-set-except-refined refined-set; do
echo $dataset
for pdbfile in $path/$dataset/*/*_pocket.pdb; do
mol2file=${pdbfile%pdb}mol2
if [[ ! -e $mol2file ]]; then
echo -e "open $pdbfile \n addh \n addcharge \n write format mol2 0 tmp.mol2 \n stop" | chimera --nogui
# Do not use TIP3P atom types, pybel cannot read them
sed 's/H\.t3p/H /' tmp.mol2 | sed 's/O\.t3p/O\.3 /' > $mol2file
fi
done
done > chimera_rw.log
featurizer = Featurizer()
charge_idx = featurizer.FEATURE_NAMES.index('partialcharge')
with h5py.File('%s/core2013.hdf' % path, 'w') as g:
j = 0
for dataset_name, data in affinity_data.groupby('set'):
print(dataset_name, 'set')
i = 0
ds_path = dataset_path[dataset_name]
with h5py.File('%s/%s.hdf' % (path, dataset_name), 'w') as f:
for _, row in data.iterrows():
name = row['pdbid']
affinity = row['-logKd/Ki']
ligand = next(pybel.readfile('mol2', '%s/%s/%s/%s_ligand.mol2' % (path, ds_path, name, name)))
# do not add the hydrogens! they are already in the structure and adding them would reset the charges
try:
pocket = next(pybel.readfile('mol2', '%s/%s/%s/%s_pocket.mol2' % (path, ds_path, name, name)))
# do not add the hydrogens! they were already added in chimera and it would reset the charges
except:
warnings.warn('no pocket for %s (%s set)' % (name, dataset_name))
continue
ligand_coords, ligand_features = featurizer.get_features(ligand, molcode=1)
assert (ligand_features[:, charge_idx] != 0).any()
pocket_coords, pocket_features = featurizer.get_features(pocket, molcode=-1)
assert (pocket_features[:, charge_idx] != 0).any()
centroid = ligand_coords.mean(axis=0)
ligand_coords -= centroid
pocket_coords -= centroid
data = np.concatenate((np.concatenate((ligand_coords, pocket_coords)),
np.concatenate((ligand_features, pocket_features))), axis=1)
if row['include']:
dataset = f.create_dataset(name, data=data, shape=data.shape, dtype='float32', compression='lzf')
dataset.attrs['affinity'] = affinity
i += 1
else:
dataset = g.create_dataset(name, data=data, shape=data.shape, dtype='float32', compression='lzf')
dataset.attrs['affinity'] = affinity
j += 1
print('prepared', i, 'complexes')
print('excluded', j, 'complexes')
with h5py.File('%s/core.hdf' % path, 'r') as f, \
h5py.File('%s/core2013.hdf' % path, 'r+') as g:
for name in f:
if name in core2013:
dataset = g.create_dataset(name, data=f[name])
dataset.attrs['affinity'] = f[name].attrs['affinity']
###Output
_____no_output_____
###Markdown
Protein data
###Code
protein_data = pd.read_csv('%s/PDBbind_2016_plain_text_index/index/INDEX_general_PL_name.2016' % path,
comment='#', sep=' ', engine='python', na_values='------',
header=None, names=['pdbid', 'year', 'uniprotid', 'name'])
protein_data.head()
# we assume that PDB IDs are unique
assert ~protein_data['pdbid'].duplicated().any()
protein_data = protein_data[np.in1d(protein_data['pdbid'], affinity_data['pdbid'])]
# check for missing values
protein_data.isnull().any()
protein_data[protein_data['name'].isnull()]
# fix rows with wrong separators between protein ID and name
for idx, row in protein_data[protein_data['name'].isnull()].iterrows():
uniprotid = row['uniprotid'][:6]
name = row['uniprotid'][7:]
protein_data.loc[idx, ['uniprotid', 'name']] = [uniprotid, name]
protein_data.isnull().any()
protein_data.to_csv('protein_data.csv', index=False)
###Output
_____no_output_____ |
1-basics/1-assignment.ipynb | ###Markdown
Python Block Course Assignment 1: Python basics and programming fundamentals Prof. Dr. Karsten Donnay, Stefan ScholzWinter Term 2019 / 2020In this first assignment we will practice how to use Jupyter Notebooks and how to execute Python code. You can score up to 15 points in this assignment. Please submit your solutions inside this notebook in your repository on GitHub. The deadline for submission is on Tuesday, October 15, 09:59 am. You will get individual feedback in your repository. 1.1 Encoding Text Suppose you want to send a secret message. Therefore, you want to use a very simple encoding method called [Caesar Cipher](https://en.wikipedia.org/wiki/Caesar_cipher). For this, you first have to define which characters you want to send and of course which message you want to send. We have already prepared an alphabet and a message for you.
###Code
# define characters you want to use
alphabet = list("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ .")
# define message you want to send
message = "Mission accomplished. Meeting point is Mailand. I will wear a black coat."
###Output
_____no_output_____
###Markdown
To get an encoded alphabet we need to shift our alphabet by a certain number of characters. For this we can use a combination of **pop()** and **append()**. We have already prepared some code that shifts the alphabet by three characters. Exercise (2 Points): Try to understand the following code. To increase the security level, increase the shift to a value greater than three characters.
###Code
# define characters you want to use for encoding
alphabet_encoded = list(alphabet)
# shift alphabet
for i in range(3):
alphabet_encoded.append(alphabet_encoded.pop(0))
# print encoded alphabet
print(alphabet_encoded)
###Output
_____no_output_____
###Markdown
You have an encoded alphabet now. But in order to encode your message easily, you still need a table or dictionary that maps the normal alphabet to the encoded one. We have already prepared this dictionary and encoded the original message.
###Code
# define dictionary with original and encoded alphabet
encoder = dict(zip(alphabet, alphabet_encoded))
# encode message
message_encoded = ""
for letter in message:
message_encoded += encoder[letter]
# print encoded message
print(message_encoded)
###Output
_____no_output_____ |
notebooks/ne_baseline.ipynb | ###Markdown
Introduction: In this notebook we build a baseline model for extended named entity recognition. We first load and clean the dataset, and then build and evaluate the baseline model. Loading the dataset: In this section we load the extended named entity dataset. We use the Mainichi Shimbun 1995 corpus annotated with extended named entities. Run the code below to load it in character-based IOB2 format.
###Code
import os
import sys
sys.path.append('../')
from entitypedia.evaluation.converter import to_iob2
mainichi_dir = '../data/raw/corpora/mainichi'
X, y = to_iob2(mainichi_dir)
print(' '.join(X[0][:50]))
print(' '.join(y[0][:50]))
###Output
_____no_output_____
###Markdown
As shown above, the loaded dataset is labeled at the character level. Since we want to build a baseline model that recognizes entities at the word level, we re-assign the labels at the word level. The tasks are as follows: * join the list of characters into a string * split the string into words with a morphological analyzer * re-assign labels to the resulting list of words. First, we join the character lists into strings.
###Code
docs = [''.join(doc) for doc in X]
docs[0][:100]
###Output
_____no_output_____
###Markdown
Next, we run morphological analysis on the joined strings, using MeCab as the analyzer. While we are at it, we also extract the part-of-speech information.
###Code
import MeCab
t = MeCab.Tagger()
def tokenize(sent):
tokens = []
t.parse('') # for UnicodeDecodeError
node = t.parseToNode(sent)
while node:
feature = node.feature.split(',')
        surface = node.surface  # surface form
        pos = feature[0]  # part of speech
tokens.append((surface, pos))
node = node.next
return tokens[1:-1]
tokenized_docs = [[d[0] for d in tokenize(doc)] for doc in docs]
poses = [[d[1] for d in tokenize(doc)] for doc in docs]
print(tokenized_docs[0][:10])
print(poses[0][:10])
###Output
['\u3000', '◇', '国際', '貢献', 'など', '4', '点', '、', 'ビジョン', 'の']
['記号', '記号', '名詞', '名詞', '助詞', '名詞', '名詞', '記号', '名詞', '助詞']
###Markdown
Tokenization is now done; the next part is a bit more tedious, because we need to re-assign the labels at the word level. Let's do it with the following steps: 1. take one morpheme 2. extract the character-level labels that make up the morpheme by string matching 3. fix up the label
###Code
tags = []
for t_doc, doc, label in zip(tokenized_docs, docs, y):
i = 0
doc_tags = []
for word in t_doc:
j = len(word)
while not doc[i:].startswith(word): # correct
i += 1
tag = label[i: i+j][0]
# print('{}\t{}'.format(word, tag))
doc_tags.append(tag)
i += j
tags.append(doc_tags)
# break
###Output
_____no_output_____
###Markdown
Let's check that the alignment between words and tags worked.
###Code
for word, tag in zip(tokenized_docs[0][:20], tags[0][:20]):
print('{}\t{}'.format(word, tag))
###Output
O
◇ O
国際 O
貢献 O
など O
4 O
点 O
、 O
ビジョン O
の O
基本 O
示す O
O
村山 B-person
富市 I-person
首相 B-position_vocation
は O
年頭 B-date
の O
記者 B-position_vocation
###Markdown
Looks fine. Now let's assign `tokenized_docs` and `tags` to `X` and `y`.
###Code
X = tokenized_docs
y = tags
###Output
_____no_output_____
###Markdown
Loading and cleaning the data is now complete; next we build the baseline model. Building the baseline model: In this section we build a baseline model for recognizing extended named entities. Models combining a Bi-LSTM with a CRF are commonly used for named entity recognition today, but with as many tags as we have here, adding a CRF makes the computation extremely expensive and the problem can no longer be solved in a realistic amount of time. So we start with a simple model: a plain word-based Bi-LSTM. If training still takes too long, we will consider an even simpler model. First, let's split the dataset into training and validation sets.
###Code
from sklearn.model_selection import train_test_split
x_train, x_valid, y_train, y_valid = train_test_split(X, y, test_size=0.3, random_state=42)
###Output
_____no_output_____
###Markdown
The dataset is now split. The data is still represented as strings, which cannot be fed to the model directly, so we preprocess it. Let's define the preprocessing code. Concretely, the preprocessing does the following: * convert words to integer ids * unify the sequence lengths. It is a bit long, but it can be defined as follows.
###Code
import itertools
import re
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.externals import joblib
from keras.preprocessing.sequence import pad_sequences
UNK = '<UNK>'
PAD = '<PAD>'
class Preprocessor(BaseEstimator, TransformerMixin):
def __init__(self,
lowercase=True,
num_norm=True,
vocab_init=None,
padding=True,
return_lengths=True):
self.lowercase = lowercase
self.num_norm = num_norm
self.padding = padding
self.return_lengths = return_lengths
self.vocab_word = None
self.vocab_tag = None
self.vocab_init = vocab_init or {}
def fit(self, X, y):
words = {PAD: 0, UNK: 1}
tags = {PAD: 0}
for w in set(itertools.chain(*X)) | set(self.vocab_init):
if w not in words:
words[w] = len(words)
for t in itertools.chain(*y):
if t not in tags:
tags[t] = len(tags)
self.vocab_word = words
self.vocab_tag = tags
return self
def transform(self, X, y=None):
"""transforms input(s)
Args:
X: list of list of words
y: list of list of tags
Returns:
numpy array: sentences
numpy array: tags
Examples:
>>> X = [['President', 'Obama', 'is', 'speaking']]
>>> print(self.transform(X))
[
[1999, 1037, 22123, 48388], # word ids
]
"""
words = []
lengths = []
for sent in X:
word_ids = []
lengths.append(len(sent))
for word in sent:
word_ids.append(self.vocab_word.get(word, self.vocab_word[UNK]))
words.append(word_ids)
if y is not None:
y = [[self.vocab_tag[t] for t in sent] for sent in y]
if self.padding:
maxlen = max(lengths)
sents = pad_sequences(words, maxlen, padding='post')
if y is not None:
y = pad_sequences(y, maxlen, padding='post')
y = dense_to_one_hot(y, len(self.vocab_tag), nlevels=2)
else:
sents = words
if self.return_lengths:
lengths = np.asarray(lengths, dtype=np.int32)
lengths = lengths.reshape((lengths.shape[0], 1))
sents = [sents, lengths]
return (sents, y) if y is not None else sents
def inverse_transform(self, y):
indice_tag = {i: t for t, i in self.vocab_tag.items()}
return [indice_tag[y_] for y_ in y]
def vocab_size(self):
return len(self.vocab_word)
def tag_size(self):
return len(self.vocab_tag)
def dense_to_one_hot(labels_dense, num_classes, nlevels=1):
"""Convert class labels from scalars to one-hot vectors."""
if nlevels == 1:
num_labels = labels_dense.shape[0]
index_offset = np.arange(num_labels) * num_classes
labels_one_hot = np.zeros((num_labels, num_classes), dtype=np.int32)
labels_one_hot.flat[index_offset + labels_dense.ravel()] = 1
return labels_one_hot
elif nlevels == 2:
# assume that labels_dense has same column length
num_labels = labels_dense.shape[0]
num_length = labels_dense.shape[1]
labels_one_hot = np.zeros((num_labels, num_length, num_classes), dtype=np.int32)
layer_idx = np.arange(num_labels).reshape(num_labels, 1)
# this index selects each component separately
component_idx = np.tile(np.arange(num_length), (num_labels, 1))
# then we use `a` to select indices according to category label
labels_one_hot[layer_idx, component_idx, labels_dense] = 1
return labels_one_hot
else:
raise ValueError('nlevels can take 1 or 2, not take {}.'.format(nlevels))
def prepare_preprocessor(X, y, use_char=True):
p = Preprocessor()
p.fit(X, y)
return p
p = prepare_preprocessor(X, y)
###Output
_____no_output_____
###Markdown
Now that the preprocessing class is defined, we write the data generation part, which uses the preprocessor to generate data batch by batch. It can be defined as follows.
###Code
def batch_iter(data, labels, batch_size, shuffle=False, preprocessor=None):
num_batches_per_epoch = int((len(data) - 1) / batch_size) + 1
def data_generator():
"""
Generates a batch iterator for a dataset.
"""
data_size = len(data)
while True:
# Shuffle the data at each epoch
if shuffle:
shuffle_indices = np.random.permutation(np.arange(data_size))
shuffled_data = data[shuffle_indices]
shuffled_labels = labels[shuffle_indices]
else:
shuffled_data = data
shuffled_labels = labels
for batch_num in range(num_batches_per_epoch):
start_index = batch_num * batch_size
end_index = min((batch_num + 1) * batch_size, data_size)
X, y = shuffled_data[start_index: end_index], shuffled_labels[start_index: end_index]
if preprocessor:
yield preprocessor.transform(X, y)
else:
yield X, y
return num_batches_per_epoch, data_generator()
BATCH_SIZE = 32
train_steps, train_batches = batch_iter(
x_train, y_train, BATCH_SIZE, preprocessor=p)
valid_steps, valid_batches = batch_iter(
x_valid, y_valid, BATCH_SIZE, preprocessor=p)
###Output
_____no_output_____
###Markdown
Now let's define the model. We use Keras as the framework.
###Code
from keras.layers import Dense, LSTM, Bidirectional, Embedding, Input, Dropout
from keras.models import Model
def build_model(vocab_size, ntags, embedding_size=100, n_lstm_units=100, dropout=0.5):
sequence_lengths = Input(batch_shape=(None, 1), dtype='int32')
word_ids = Input(batch_shape=(None, None), dtype='int32')
word_embeddings = Embedding(input_dim=vocab_size,
output_dim=embedding_size,
mask_zero=True)(word_ids)
x = Dropout(dropout)(word_embeddings)
x = Bidirectional(LSTM(units=n_lstm_units, return_sequences=True))(x)
x = Dropout(dropout)(x)
x = Dense(n_lstm_units, activation='tanh')(x)
pred = Dense(ntags, activation='softmax')(x)
model = Model(inputs=[word_ids, sequence_lengths], outputs=[pred])
return model
model = build_model(p.vocab_size(), p.tag_size())
###Output
_____no_output_____
###Markdown
With that, the preparation for training is complete. Let's actually train the model. We use `Adam` as the optimization algorithm.
###Code
from keras.optimizers import Adam
MAX_EPOCH = 5
model.compile(loss='categorical_crossentropy',
optimizer=Adam(),
metrics=['acc'],
)
model.fit_generator(generator=train_batches,
steps_per_epoch=train_steps,
validation_data=valid_batches,
validation_steps=valid_steps,
epochs=MAX_EPOCH)
###Output
_____no_output_____ |
kaggle_notebooks/metrics-calculations-for-tumor-region.ipynb | ###Markdown
Display SR Tumor Images
###Code
## Display SR Tumors
displayImages(sr_tumors)
def img_normal(img1, img2):
hr_img = img1.astype(np.uint16)
sr_img = img2.astype(np.uint16)
hr_img = 0.2*hr_img/255.
sr_img = 0.1*sr_img/255.
return hr_img, sr_img
## Extract Tumor regions with the help of contours
sr_tumor_regions = []
sr_contours_regions = []
for img in sr_tumors:
ret, thresh = cv2.threshold(img,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
## get the biggest contour
biggest_cntr = max(contours, key = cv2.contourArea)
img_cpy = img.get().copy()
## Apply Polygon Curve approximation to extract out the tumor
eps = 0.01 * cv2.arcLength(biggest_cntr, True)
approx = cv2.approxPolyDP(biggest_cntr, eps, True)
sr_contours_regions.append(cv2.drawContours(img_cpy, [approx],0,(255,0,0), 3))
## Bounding Rectangle
(x,y,w,h) = cv2.boundingRect(biggest_cntr)
## Crop the tumor region
sr_tumor_regions.append(img.get()[y:y+h, x:x+w])
###Output
_____no_output_____
###Markdown
Display Contours on Tumor
###Code
displayImages(sr_contours_regions)
###Output
_____no_output_____
###Markdown
Display Extracted Tumor Regions
###Code
displayTumors(sr_tumor_regions)
###Output
_____no_output_____
###Markdown
Compute SSIM parameters individually and see the output
###Code
espcn_tumor_metric = {}
espcn_tumor_metric["tumor"] = {}
espcn_tumor_metric["mannwhitneyu"] = {}
###Output
_____no_output_____
###Markdown
Defining the constants C1, C2 and C3: C1 = (K1 * L)^2, C2 = (K2 * L)^2, C3 = C2/2, where L is the dynamic range of the pixel values. [How to decide the value of L?](https://scikit-image.org/docs/dev/user_guide/data_types.html) **Here K1 and K2 are small constants very close to 0 (K1 = 0.01 and K2 = 0.03 in the code below).**
###Code
C1 = (0.01 * 65535) ** 2
C2 = (0.03 * 65535) ** 2
C3 = C2/2
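# Note: 65535 is used as the dynamic range L because the images are cast to uint16
# (2**16 - 1 = 65535); for uint8 data L would be 255. A dtype-independent sketch (illustrative,
# not used below):
# L_range = np.iinfo(np.uint16).max  # -> 65535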
def luminance(img1, img2):
mu1 = img1.mean()
mu2 = img2.mean()
mu1_sqr = mu1 ** 2
mu2_sqr = mu2 ** 2
L = (2*mu1*mu2 + C1) / (2*(mu1_sqr + mu2_sqr) + C1)
return L
def contrast(img1, img2):
sigma1 = img1.std()
sigma2 = img2.std()
sigma1_sqr = sigma1 ** 2
sigma2_sqr = sigma2 ** 2
C = (2*sigma1*sigma2 + C2) / (2*(sigma1_sqr + sigma2_sqr) + C2)
return C
def structure(img1, img2):
C3 = C2/2
sigma1 = img1.std()
sigma2 = img2.std()
sigma12 = np.cov(img1, img2)[0,1]
S = (sigma12 + C3) / (2*sigma1*sigma2 + C3)
return S
def compute_ssim(sr_img, hr_img):
sr_img = sr_img.astype(np.uint16)
hr_img = hr_img.astype(np.uint16)
img1 = np.array(list(filter(lambda pixel : pixel !=0, sr_img.flatten())))
img2 = np.array(list(filter(lambda pixel : pixel !=0, hr_img.flatten())))
## Computing Luminance Comparison Function
L = luminance(img1, img2)
## Computing Contrast Comparison Function
C = contrast(img1, img2)
## Computing Structure Comparison Function
S = structure(img1, img2)
## defining alpha, beta, gamma
alpha, beta, gamma = 1, 1, 1
ssim = (L ** alpha) * (C ** beta) * (S ** gamma)  # SSIM = L^alpha * C^beta * S^gamma
return ssim
from skimage.metrics import structural_similarity as ssim
## Compute SSIM for single image
compute_ssim(sr_tumor_regions[22],hr_tumor_regions[22])
ssim_arr = []
for sr_img, hr_img in zip(sr_tumor_regions,hr_tumor_regions):
ssim_arr.append(compute_ssim(sr_img, hr_img))
## Display Results for starting 10 images
print(ssim_arr[:10])
ssim_mean, ssim_std = np.mean(ssim_arr), np.std(ssim_arr)
espcn_tumor_metric["tumor"]["ssim"] = ssim_arr
print("mean: ", ssim_mean, " std: ", ssim_std)
###Output
mean: 0.9291948580719523 std: 0.03647167423360422
###Markdown
Universal Quality Index (UQI). It is a special case of SSIM obtained when C1=0 and C2=0. **NOTE: It produces unstable results when either (mu1_sqr + mu2_sqr) or (sigma1_sqr + sigma2_sqr) is close to 0.**
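A minimal UQI sketch (an illustration added here, not part of the original notebook): it reuses the same image statistics as compute_ssim above but drops the stabilizing constants, which is why the division can blow up for near-constant regions.
def compute_uqi(img1, img2):
    # UQI = 4 * sigma12 * mu1 * mu2 / ((sigma1^2 + sigma2^2) * (mu1^2 + mu2^2))
    mu1, mu2 = img1.mean(), img2.mean()
    sigma1_sqr, sigma2_sqr = img1.var(), img2.var()
    sigma12 = np.cov(img1, img2)[0, 1]
    denom = (sigma1_sqr + sigma2_sqr) * (mu1 ** 2 + mu2 ** 2)
    return 4 * sigma12 * mu1 * mu2 / denom  # unstable when denom is close to 0
For a single tumor pair it could be called, for example, on the flattened regions: compute_uqi(sr_tumor_regions[22].flatten().astype(np.float64), hr_tumor_regions[22].flatten().astype(np.float64)).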
###Code
# def displayResults(img_arr1, img_arr2,ssim_arr, metric, dim=(1, 3), figsize=(15, 5)):
# width=8
# height=8
# rows = 5
# cols = 5
# axes=[]
# fig=plt.figure(figsize=(10,10))
# for i in range(rows * cols):
# plt.figure(figsize=figsize)
# plt.subplot(dim[0], dim[1], 1)
# plt.imshow(img_arr1[i].squeeze(), interpolation='nearest', cmap='gray')
# plt.title(f"Super Resolution Image Tumor {i+1}")
# plt.axis('off')
# plt.subplot(dim[0], dim[1], 2)
# plt.imshow(img_arr2[i].squeeze(), interpolation='nearest', cmap='gray')
# plt.title(f"Origial Image Tumor {i+1}")
# plt.axis('off')
# plt.subplot(dim[0], dim[1], 3)
# plt.text(0.5, 0.5,f"{metric} {ssim_arr[i]}")
# plt.axis('off')
# fig.tight_layout()
# plt.show()
###Output
_____no_output_____
###Markdown
Display SSIM Results for starting 10 Images
###Code
# displayResults(sr_tumor_regions, hr_tumor_regions, ssim_arr, "SSIM")
###Output
_____no_output_____
###Markdown
Mean Absolute Error
###Code
def MAE(true_img, pred_img):
hr_img, sr_img = img_normal(true_img, pred_img)
img1 = np.array(list(filter(lambda pixel : pixel !=0., sr_img.flatten())))
img2 = np.array(list(filter(lambda pixel : pixel !=0., hr_img.flatten())))
metric = (np.sum(np.absolute(np.subtract(img1, img2)))) / len(img1)
return metric
MAE(hr_tumor_regions[0], sr_tumor_regions[0])
mae_arr=[]
for img1, img2 in zip(hr_tumor_regions, sr_tumor_regions):
mae_arr.append(MAE(img1, img2))
print(mae_arr[:10])
mae_mean, mae_std = np.mean(mae_arr), np.std(mae_arr)
espcn_tumor_metric["tumor"]["mae"] = mae_arr
print("mean: ", mae_mean, " std: ", mae_std)
###Output
mean: 0.050516821579605256 std: 0.01603081056611103
###Markdown
Mean Percentage Error
###Code
def MPE(true_img, pred_img):
hr_img, sr_img = img_normal(true_img, pred_img)
img1 = np.array(list(filter(lambda pixel : pixel !=0, hr_img.flatten())))
img2 = np.array(list(filter(lambda pixel : pixel !=0., sr_img.flatten())))
metric = np.sum((img1 - img2)) / len(img1)
return metric * 100
MPE(hr_tumor_regions[0], sr_tumor_regions[0])
mpe_arr=[]
for img1, img2 in zip(hr_tumor_regions, sr_tumor_regions):
mpe_arr.append(MPE(img1, img2))
print(mpe_arr[:10])
mpe_mean, mpe_std = np.mean(mpe_arr), np.std(mpe_arr)
espcn_tumor_metric["tumor"]["mpe"] = mpe_arr
print("mean: ", mpe_mean, " std: ", mpe_std)
def hr_normal(img):
hr_img = img.astype(np.uint16)
hr_img = 0.2*hr_img/255.
return hr_img
def sr_normal(img):
sr_img = img.astype(np.uint16)
sr_img = 0.1*sr_img/255.
return sr_img
n_hr_tumor_regions = []
for img in hr_tumor_regions:
n_hr_tumor_regions.append(hr_normal(img))
n_sr_tumor_regions = []
for img in sr_tumor_regions:
n_sr_tumor_regions.append(sr_normal(img))
###Output
_____no_output_____
###Markdown
Mean Square Error (MSE)
###Code
ans = sewar.full_ref.mse(n_hr_tumor_regions[9], n_sr_tumor_regions[9])
print(ans, type(ans))
mse_arr = []
for i in range(199):
mse_arr.append(sewar.full_ref.mse(n_hr_tumor_regions[i], n_sr_tumor_regions[i]))
## Display Results for starting 10 images
print(mse_arr[:10])
mse_mean, mse_std = np.mean(mse_arr), np.std(mse_arr)
espcn_tumor_metric["tumor"]["mse"] = mse_arr
print("mean: ", mse_mean, " std: ", mse_std)
###Output
mean: 0.0025827580914429666 std: 0.0015045413630965234
###Markdown
Root Mean Square Error (RMSE)
###Code
ans = sewar.full_ref.rmse(n_hr_tumor_regions[9], n_sr_tumor_regions[9])
print(ans, type(ans))
rmse_arr = []
for i in range(199):
rmse_arr.append(sewar.full_ref.rmse(n_hr_tumor_regions[i], n_sr_tumor_regions[i]))
## Display Results for starting 10 images
print(rmse_arr[:10])
rmse_mean, rmse_std = np.mean(rmse_arr), np.std(rmse_arr)
espcn_tumor_metric["tumor"]["rmse"] = rmse_arr
print("mean: ", rmse_mean, " std: ", rmse_std)
###Output
mean: 0.04860087564495682 std: 0.014856412015907825
###Markdown
PSNR
###Code
from skimage.metrics import peak_signal_noise_ratio as psnr
ans = psnr(n_hr_tumor_regions[4], n_sr_tumor_regions[4])
print(ans, type(ans))
psnr_arr = []
for i in range(199):
psnr_arr.append(psnr(n_hr_tumor_regions[i], n_sr_tumor_regions[i]))
## Display Results for starting 10 images
print(psnr_arr[:10])
psnr_mean, psnr_std = np.mean(psnr_arr), np.std(psnr_arr)
espcn_tumor_metric["tumor"]["psnr"] = psnr_arr
print("mean: ", psnr_mean, " std: ", psnr_std)
###Output
mean: 26.69571461551074 std: 2.779466891272773
###Markdown
Multi-Scale Structural Similarity Index (MS-SSIM)
###Code
ans = sewar.full_ref.msssim(n_hr_tumor_regions[2].astype(np.uint16), n_sr_tumor_regions[2].astype(np.uint16)).real
print(ans, type(ans))
msssim_arr = []
for i in range(199):
try:
msssim_arr.append(sewar.full_ref.msssim(n_hr_tumor_regions[i].astype(np.uint16), n_sr_tumor_regions[i].astype(np.uint16)).real)
except:
continue
## Display Results for starting 10 images
print(msssim_arr[:10])
msssim_mean, msssim_std = np.mean(msssim_arr), np.std(msssim_arr)
espcn_tumor_metric["tumor"]["msssim"] = msssim_arr
print("mean: ", msssim_mean, " std: ", msssim_std)
###Output
mean: 1.0 std: 0.0
###Markdown
Spatial Corelation Coefficient (SCC)
###Code
ans = sewar.full_ref.scc(n_hr_tumor_regions[3], n_sr_tumor_regions[3])
print(ans, type(ans))
scc_arr = []
for i in range(199):
scc_arr.append(sewar.full_ref.scc(n_hr_tumor_regions[i], n_sr_tumor_regions[i]))
## Display Results for starting 10 images
print(scc_arr[:10])
scc_mean, scc_std = np.mean(scc_arr), np.std(scc_arr)
espcn_tumor_metric["tumor"]["scc"] = scc_arr
print("mean: ", scc_mean, " std: ", scc_std)
###Output
mean: 0.9408737259163489 std: 0.07100322306382177
###Markdown
Pixel Based Visual Information Fidelity (vif-p)
###Code
ans = sewar.full_ref.vifp(n_hr_tumor_regions[5], n_sr_tumor_regions[5])
print(ans, type(ans))
vifp_arr = []
for i in range(199):
try:
vifp_arr.append(sewar.full_ref.vifp(n_hr_tumor_regions[i], n_sr_tumor_regions[i]))
except:
continue
## Display Results for starting 10 images
print(vifp_arr[:10])
vifp_mean, vifp_std = np.mean(vifp_arr), np.std(vifp_arr)
espcn_tumor_metric["tumor"]["vifp"] = vifp_arr
print("mean: ", vifp_mean, " std: ", vifp_std)
# os.mkdir('./tumor')
# os.mkdir('./tumor/error_barplot')
# os.mkdir('./tumor/scatter')
# os.mkdir('./tumor/regression')
# ## Define error bar plot function
# def error_barplot(error_arr,title='', file_name=''):
# # Calculate the average
# error_mean = np.mean(error_arr)
# # Calculate the standard deviation
# error_std = np.std(error_arr)
# # Define labels, positions, bar heights and error bar heights
# labels = ['For 200 Images']
# x_pos = np.arange(len(labels))
# CTEs = [error_mean]
# error = [error_std]
# # Build the plot
# fig, ax = plt.subplots(figsize=(5,5))
# ax.bar(x_pos, CTEs,yerr=error,align='center',alpha=0.5,ecolor='black',capsize=10)
# # ax.set_ylabel('Mean Percentage Error')
# ax.set_xticks(x_pos)
# ax.set_xticklabels(labels)
# ax.set_title(title)
# ax.yaxis.grid(True)
# plt.savefig(f"./tumor/error_barplot/{file_name}.png")
# # Save the figure and show
# plt.tight_layout()
# # plt.savefig('bar_plot_with_error_bars.png')
# plt.show()
# error_barplot(mae_arr,title='Mean Absolute Error (MAE)', file_name='mae_barplot')
# error_barplot(mpe_arr,title='Mean Percentage Error (MPE)', file_name='mpe_barplot')
# error_barplot(mse_arr,title='Mean Square Error (MSE)', file_name='mse_barplot')
# error_barplot(rmse_arr,title='Root Mean Square Error (RMSE)', file_name='rmse_barplot')
# error_barplot(psnr_arr,title='Peak Signal to Noise Ratio (PSNR)', file_name='psnr_barplot')
# error_barplot(ssim_arr,title='Structural Similarity Index (SSIM)', file_name='ssim_barplot')
# error_barplot(scc_arr,title='Spatial Corelation Coefficient (SCC)', file_name='scc_barplot')
# error_barplot(vifp_arr,title='Pixel Based Visual Information Fidelity (vif-p)', file_name='vifp_barplot')
###Output
_____no_output_____
###Markdown
Scatter Plot for MAE, MPE, MSE, RMSE, PSNR, SSIM, MS-SSIM, SCC and VIF-P
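A minimal working sketch for one metric (assuming the mae_arr computed above; the original plotting helpers below are kept commented out):
import matplotlib.pyplot as plt
plt.figure(figsize=(6, 4))
plt.scatter(range(1, len(mae_arr) + 1), mae_arr, s=8)
plt.xlabel('Image index')
plt.ylabel('MAE')
plt.title('MAE per tumor region')
plt.show()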
###Code
# import seaborn as sns
# sns.set_theme(style="whitegrid")
# sns.set(rc={'figure.figsize':(8,8)})
# metric_dict = {'Images': [i for i in range(1,200)],
# 'MAE' : mae_arr,
# 'MPE' : mpe_arr,
# 'MSE' : mse_arr,
# 'RMSE' : rmse_arr,
# 'PSNR' : psnr_arr,
# 'SSIM' : ssim_arr,
# 'SCC' : scc_arr,
# 'VIFP' : vifp_arr
# }
# metric_df = pd.DataFrame(metric_dict)
# def getScatterPlot(y_val,df,title='', file_name=''):
# sns_plt = sns.scatterplot(x=metric_df.Images, y=y_val, data=df, linewidth=2.5).set_title(title)
# sns_fig = sns_plt.get_figure()
# sns_fig.savefig(f"./tumor/scatter/{file_name}.png")
# def RegPlot(y_val,df,title='', file_name=''):
# sns_plt = sns.regplot(x=metric_df.Images, y=y_val, data=df).set_title(title)
# sns_fig = sns_plt.get_figure()
# sns_fig.savefig(f"./tumor/regression/{file_name}.png")
# getScatterPlot(metric_df.MAE, metric_df, 'Mean Absolute Error', 'mae_scatter')
# getScatterPlot(metric_df.MPE, metric_df, title='Mean Percentage Error', file_name='mpe_scatter')
# getScatterPlot(metric_df.MSE, metric_df, title='Mean Square Error', file_name='mse_scatter')
# getScatterPlot(metric_df.RMSE, metric_df, title='Root Mean Square Error', file_name='rmse_scatter')
# getScatterPlot(metric_df.PSNR, metric_df, title='Peak Signal to Noise Ratio', file_name='psnr_scatter')
# getScatterPlot(metric_df.SSIM, metric_df, title='Structure Similarity Index', file_name='ssim_scatter')
# getScatterPlot(metric_df.SCC, metric_df, title='Spatial Corelation Coefficient', file_name='scc_scatter')
# getScatterPlot(metric_df.VIFP, metric_df, title='Pixel Based Visual Information Fidelity', file_name='vifp_scatter')
###Output
_____no_output_____
###Markdown
Regression Plot for MAE, MPE, MSE, RMSE, PSNR, SSIM, MS-SSIM, SCC and VIF-P
###Code
# RegPlot(metric_df.MAE, metric_df, 'Mean Absolute Error', 'mae_scatter')
# RegPlot(metric_df.MPE, metric_df, title='Mean Percentage Error', file_name='mpe_scatter')
# RegPlot(metric_df.MSE, metric_df, title='Mean Square Error', file_name='mse_scatter')
# RegPlot(metric_df.RMSE, metric_df, title='Root Mean Square Error', file_name='rmse_scatter')
# RegPlot(metric_df.PSNR, metric_df, title='Peak Signal to Noise Ratio', file_name='psnr_scatter')
# RegPlot(metric_df.SSIM, metric_df, title='Structure Similarity Index', file_name='ssim_scatter')
# RegPlot(metric_df.SCC, metric_df, title='Spatial Corelation Coefficient', file_name='scc_scatter')
# RegPlot(metric_df.VIFP, metric_df, title='Pixel Based Visual Information Fidelity', file_name='vifp_scatter')
import pickle
with open('./espcn_tumor_pickle.pkl', 'wb') as f:
pickle.dump(espcn_tumor_metric, f)
%%!
zip espcn_tumor_metric.zip ./espcn_tumor_pickle.pkl
###Output
_____no_output_____ |
notebooks/experiments_lstm/medium_article.ipynb | ###Markdown
> Data downloading. Data link: https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/6C3JR1 Data description: here we consider the dataset "Additional Tennessee Eastman Process Simulation Data for Anomaly Detection Evaluation". This dataverse contains the data referenced in Rieth et al. (2017). Issues and Advances in Anomaly Detection Evaluation for Joint Human-Automated Systems. To be presented at Applied Human Factors and Ergonomics 2017. Columns description: * **faultNumber** ranges from 1 to 20 in the “Faulty” datasets and represents the fault type in the TEP. The “FaultFree” datasets only contain fault 0 (i.e. normal operating conditions). * **simulationRun** ranges from 1 to 500 and represents a different random number generator state from which a full TEP dataset was generated (Note: the actual seeds used to generate training and testing datasets were non-overlapping). * **sample** ranges either from 1 to 500 (“Training” datasets) or 1 to 960 (“Testing” datasets). The TEP variables (columns 4 to 55) were sampled every 3 minutes for a total duration of 25 hours and 48 hours respectively. Note that the faults were introduced 1 and 8 hours into the Faulty Training and Faulty Testing datasets, respectively. * **columns 4-55** contain the process variables; the column names retain the original variable names.
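For example, once raw_train is built below, the three index columns can be separated from the 52 process variables like this (an illustrative sketch based on the column description above, not part of the original notebook):
meta_cols = ['faultNumber', 'simulationRun', 'sample']
process_vars = raw_train.drop(columns=meta_cols)  # columns 4-55: the sampled TEP variables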
###Code
# ! unzip dataverse_files.zip -d dataverse_files
#reading train data in .R format
a1 = py.read_r("dataverse_files/TEP_FaultFree_Training.RData")
a2 = py.read_r("dataverse_files/TEP_Faulty_Training.RData")
#reading test data in .R format
a3 = py.read_r("dataverse_files/TEP_FaultFree_Testing.RData")
a4 = py.read_r("dataverse_files/TEP_Faulty_Testing.RData")
print("Objects that are present in a1 :", a1.keys())
print("Objects that are present in a2 :", a2.keys())
print("Objects that are present in a3 :", a3.keys())
print("Objects that are present in a4 :", a4.keys())
# concatinating the train and the test dataset
# train dataframe
raw_train = pd.concat([a1['fault_free_training'], a2['faulty_training']])
# test dataframe
raw_test = pd.concat([a3['fault_free_testing'], a4['faulty_testing']])
raw_train.groupby(['faultNumber','simulationRun']).size()
raw_test.groupby(['faultNumber','simulationRun']).size()
###Output
_____no_output_____
###Markdown
> EDA
###Code
for col in raw_train.columns[3:]:
plt.figure(figsize=(10,5))
plt.hist(raw_train[col])
plt.xlabel(col)
plt.ylabel('counts')
plt.show()
###Output
_____no_output_____
###Markdown
> Sampling. Described in "Data Preparation for Deep Learning Models" in [that article](https://medium.com/@mrunal68/tennessee-eastman-process-simulation-data-for-anomaly-detection-evaluation-d719dc133a7f)
###Code
%%time
# Program to construct the sample train data
frame = []
for i in set(raw_train['faultNumber']):
b_i = pd.DataFrame()
if i == 0:
b_i = raw_train[raw_train['faultNumber'] == i][0:20000]
frame.append(b_i)
else:
fr = []
b = raw_train[raw_train['faultNumber'] == i]
for x in range(1,25):
b_x = b[b['simulationRun'] == x][20:500]
fr.append(b_x)
b_i = pd.concat(fr)
frame.append(b_i)
sampled_train = pd.concat(frame)
sampled_train.groupby('faultNumber')['simulationRun'].count() / raw_train.groupby('faultNumber')['simulationRun'].count()
%%time
# Program to construct the sample CV Data
frame = []
for i in set(raw_train['faultNumber']):
b_i = pd.DataFrame()
if i == 0:
b_i = raw_train[raw_train['faultNumber'] == i][20000:30000]
frame.append(b_i)
else:
fr = []
b = raw_train[raw_train['faultNumber'] == i]
for x in range(26,35):
b_x = b[b['simulationRun'] == x][20:500]
fr.append(b_x)
b_i = pd.concat(fr)
frame.append(b_i)
sampled_cv = pd.concat(frame)
%%time
# Program to construct Sampled raw_test data
frame = []
for i in set(raw_test['faultNumber']):
b_i = pd.DataFrame()
if i == 0:
b_i = raw_test[raw_test['faultNumber'] == i][0:2000]
frame.append(b_i)
else:
fr = []
b = raw_test[raw_test['faultNumber'] == i]
for x in range(1,11):
b_x = b[b['simulationRun'] == x][160:660]
fr.append(b_x)
b_i = pd.concat(fr)
frame.append(b_i)
sampled_test = pd.concat(frame)
len(sampled_train), len(sampled_cv), len(sampled_test)
sampled_data_path = "sampled_data/"
sampled_train.to_csv(sampled_data_path + "train.csv")
sampled_test.to_csv(sampled_data_path + "test.csv")
sampled_cv.to_csv(sampled_data_path + "cv.csv")
###Output
_____no_output_____
###Markdown
> Preparing data
###Code
#Sorting the Datasets wrt to the simulation runs
sampled_train.sort_values(['simulationRun', 'faultNumber'], inplace=True)
sampled_test.sort_values(['simulationRun', 'faultNumber'], inplace=True)
sampled_cv.sort_values(['simulationRun', 'faultNumber'], inplace=True)
# Removing faults 3, 9 and 15
tr = sampled_train.drop(sampled_train[(sampled_train['faultNumber'] == 3) |\
(sampled_train['faultNumber'] == 9) |\
(sampled_train['faultNumber'] == 15)].index)
# Removing faults 3, 9 and 15
ts = sampled_test.drop(sampled_test[(sampled_test['faultNumber'] == 3) |\
(sampled_test['faultNumber'] == 9) |\
(sampled_test['faultNumber'] == 15)].index)
# Removing faults 3, 9 and 15
cv = sampled_cv.drop(sampled_cv[(sampled_cv['faultNumber'] == 3) |\
(sampled_cv['faultNumber'] == 9) |\
(sampled_cv['faultNumber'] == 15)].index)
#converting the class labels to categorical values and removing unnecessary features from train, test and cv data.
y_train = to_categorical(tr['faultNumber'], num_classes=21)
y_test = to_categorical(ts['faultNumber'], num_classes=21)
y_cv = to_categorical(cv['faultNumber'], num_classes=21)
tr = tr.drop(['faultNumber', 'simulationRun', 'sample'], axis=1)
ts = ts.drop(['faultNumber', 'simulationRun', 'sample'], axis=1)
cv = cv.drop(['faultNumber', 'simulationRun', 'sample'], axis=1)
# Resizing the train, test and cv data.
x_train = np.array(tr)[:, :, np.newaxis]
x_test = np.array(ts)[:, :, np.newaxis]
x_cv = np.array(cv)[:, :, np.newaxis]
tr.shape, x_train.shape
tr
###Output
_____no_output_____
###Markdown
> Modeling: LSTM-1 Models configuration
###Code
model_1 = Sequential()
model_1.add(LSTM(256, input_shape=(52, 1), return_sequences=True))
model_1.add(LSTM(128, return_sequences=False))
model_1.add(Dense(300))
model_1.add(Dropout(0.5))
model_1.add(Dense(128))
model_1.add(Dense(21, activation='softmax'))
model_1.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_1.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_1 (LSTM) (None, 52, 256) 264192
_________________________________________________________________
lstm_2 (LSTM) (None, 128) 197120
_________________________________________________________________
dense_1 (Dense) (None, 300) 38700
_________________________________________________________________
dropout_1 (Dropout) (None, 300) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 38528
_________________________________________________________________
dense_3 (Dense) (None, 21) 2709
=================================================================
Total params: 541,249
Trainable params: 541,249
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Training
###Code
n_epochs = 25
# n_epochs = 50
history_1 = model_1.fit(x_train, y_train, validation_data=(x_cv, y_cv), batch_size=256, epochs=n_epochs, verbose=2)
###Output
Train on 230080 samples, validate on 93440 samples
Epoch 1/25
- 777s - loss: 1.7492 - acc: 0.4454 - val_loss: 1.2626 - val_acc: 0.6079
Epoch 2/25
- 683s - loss: 1.2116 - acc: 0.6009 - val_loss: 1.1726 - val_acc: 0.6395
Epoch 3/25
- 671s - loss: 1.1393 - acc: 0.6237 - val_loss: 1.1172 - val_acc: 0.6560
Epoch 4/25
- 800s - loss: 1.1003 - acc: 0.6360 - val_loss: 1.0926 - val_acc: 0.6623
Epoch 5/25
- 683s - loss: 1.0817 - acc: 0.6416 - val_loss: 1.0666 - val_acc: 0.6674
Epoch 6/25
- 657s - loss: 1.0652 - acc: 0.6469 - val_loss: 1.0530 - val_acc: 0.6737
Epoch 7/25
- 646s - loss: 1.0457 - acc: 0.6540 - val_loss: 1.0447 - val_acc: 0.6799
Epoch 8/25
- 638s - loss: 1.0177 - acc: 0.6623 - val_loss: 1.0391 - val_acc: 0.6787
Epoch 9/25
- 640s - loss: 0.9945 - acc: 0.6698 - val_loss: 1.0131 - val_acc: 0.6859
Epoch 10/25
- 639s - loss: 0.9769 - acc: 0.6791 - val_loss: 1.0727 - val_acc: 0.6706
Epoch 11/25
- 639s - loss: 0.9706 - acc: 0.6808 - val_loss: 1.0361 - val_acc: 0.6753
Epoch 12/25
- 641s - loss: 0.9408 - acc: 0.6918 - val_loss: 1.2099 - val_acc: 0.6436
Epoch 13/25
- 640s - loss: 0.9568 - acc: 0.6886 - val_loss: 0.9643 - val_acc: 0.7094
Epoch 14/25
- 640s - loss: 0.8883 - acc: 0.7118 - val_loss: 0.9733 - val_acc: 0.7068
Epoch 15/25
- 641s - loss: 0.8728 - acc: 0.7173 - val_loss: 0.8782 - val_acc: 0.7371
Epoch 16/25
- 640s - loss: 0.8011 - acc: 0.7399 - val_loss: 0.7926 - val_acc: 0.7631
Epoch 17/25
- 638s - loss: 0.8472 - acc: 0.7266 - val_loss: 0.8087 - val_acc: 0.7601
Epoch 18/25
- 641s - loss: 0.7525 - acc: 0.7544 - val_loss: 0.8741 - val_acc: 0.7433
Epoch 19/25
- 640s - loss: 0.7341 - acc: 0.7605 - val_loss: 0.7692 - val_acc: 0.7745
Epoch 20/25
- 640s - loss: 0.7213 - acc: 0.7638 - val_loss: 0.8173 - val_acc: 0.7568
Epoch 21/25
- 642s - loss: 0.8166 - acc: 0.7346 - val_loss: 0.8563 - val_acc: 0.7450
Epoch 22/25
- 641s - loss: 0.8237 - acc: 0.7337 - val_loss: 0.7729 - val_acc: 0.7708
Epoch 23/25
- 643s - loss: 0.7153 - acc: 0.7650 - val_loss: 0.7883 - val_acc: 0.7613
Epoch 24/25
- 642s - loss: 0.6942 - acc: 0.7708 - val_loss: 0.9523 - val_acc: 0.7285
Epoch 25/25
- 643s - loss: 0.7074 - acc: 0.7683 - val_loss: 0.7577 - val_acc: 0.7745
###Markdown
Metrics
###Code
history_1.history
def plots_and_metrics(h, m):
epochs_arr = list(range(1, len(h.history['acc']) + 1))
# Plot training & validation accuracy values
plt.figure(figsize=(20,5))
plt.plot(epochs_arr, h.history['acc'])
plt.plot(epochs_arr, h.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.xticks(epochs_arr)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# Plot training & validation loss values
plt.figure(figsize=(20,5))
plt.plot(epochs_arr, h.history['loss'])
plt.plot(epochs_arr, h.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.xticks(epochs_arr)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
score, accuracy = m.evaluate(x_test, y_test, verbose=0)
print('Test accuracy:', accuracy)
print("Test loss:", score)
%%time
plots_and_metrics(history_1, model_1)
###Output
_____no_output_____
###Markdown
> Modeling: LSTM-2
###Code
model_2 = Sequential()
model_2.add(LSTM(128, input_shape=(52, 1), return_sequences=False))
model_2.add(Dense(300))
model_2.add(Dropout(0.5))
model_2.add(Dense(128))
model_2.add(Dense(21, activation='softmax'))
model_2.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model_2.summary())
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_4 (LSTM) (None, 128) 66560
_________________________________________________________________
dense_7 (Dense) (None, 300) 38700
_________________________________________________________________
dropout_3 (Dropout) (None, 300) 0
_________________________________________________________________
dense_8 (Dense) (None, 128) 38528
_________________________________________________________________
dense_9 (Dense) (None, 21) 2709
=================================================================
Total params: 146,497
Trainable params: 146,497
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Training
###Code
# n_epochs = 3
n_epochs = 50
history_2 = model_2.fit(x_train, y_train, validation_data = (x_cv, y_cv), batch_size=256, epochs=n_epochs, verbose=2)
###Output
Train on 230080 samples, validate on 93440 samples
Epoch 1/50
- 129s - loss: 1.8608 - acc: 0.4174 - val_loss: 1.5097 - val_acc: 0.5491
Epoch 2/50
- 126s - loss: 1.3767 - acc: 0.5593 - val_loss: 1.3313 - val_acc: 0.5970
Epoch 3/50
- 127s - loss: 1.2526 - acc: 0.5954 - val_loss: 1.4297 - val_acc: 0.5565
Epoch 4/50
- 127s - loss: 1.1557 - acc: 0.6317 - val_loss: 1.1099 - val_acc: 0.6701
Epoch 5/50
- 127s - loss: 1.0390 - acc: 0.6703 - val_loss: 1.0263 - val_acc: 0.7048
Epoch 6/50
- 128s - loss: 0.9635 - acc: 0.6920 - val_loss: 0.9132 - val_acc: 0.7278
Epoch 7/50
- 128s - loss: 0.9214 - acc: 0.7043 - val_loss: 0.9526 - val_acc: 0.7220
Epoch 8/50
- 129s - loss: 0.8746 - acc: 0.7172 - val_loss: 0.8725 - val_acc: 0.7428
Epoch 9/50
- 130s - loss: 0.8656 - acc: 0.7198 - val_loss: 1.0994 - val_acc: 0.6725
Epoch 10/50
- 131s - loss: 0.8445 - acc: 0.7251 - val_loss: 0.8970 - val_acc: 0.7390
Epoch 11/50
- 132s - loss: 0.8258 - acc: 0.7308 - val_loss: 0.8235 - val_acc: 0.7504
Epoch 12/50
- 132s - loss: 0.8110 - acc: 0.7359 - val_loss: 0.8687 - val_acc: 0.7461
Epoch 13/50
- 132s - loss: 0.7975 - acc: 0.7406 - val_loss: 0.8057 - val_acc: 0.7644
Epoch 14/50
- 132s - loss: 0.7887 - acc: 0.7427 - val_loss: 0.8411 - val_acc: 0.7499
Epoch 15/50
- 132s - loss: 0.7761 - acc: 0.7468 - val_loss: 0.7749 - val_acc: 0.7683
Epoch 16/50
- 132s - loss: 0.7698 - acc: 0.7483 - val_loss: 0.8044 - val_acc: 0.7595
Epoch 17/50
- 134s - loss: 0.7659 - acc: 0.7502 - val_loss: 0.7777 - val_acc: 0.7649
Epoch 18/50
- 131s - loss: 0.7574 - acc: 0.7526 - val_loss: 0.8002 - val_acc: 0.7634
Epoch 19/50
- 131s - loss: 0.7521 - acc: 0.7540 - val_loss: 0.7716 - val_acc: 0.7693
Epoch 20/50
- 131s - loss: 0.7443 - acc: 0.7556 - val_loss: 0.7844 - val_acc: 0.7667
Epoch 21/50
- 131s - loss: 0.7433 - acc: 0.7568 - val_loss: 0.7927 - val_acc: 0.7668
Epoch 22/50
- 131s - loss: 0.7391 - acc: 0.7586 - val_loss: 0.7870 - val_acc: 0.7696
Epoch 23/50
- 131s - loss: 0.7372 - acc: 0.7584 - val_loss: 0.7599 - val_acc: 0.7721
Epoch 24/50
- 131s - loss: 0.7334 - acc: 0.7596 - val_loss: 0.7727 - val_acc: 0.7679
Epoch 25/50
- 131s - loss: 0.7296 - acc: 0.7614 - val_loss: 0.7580 - val_acc: 0.7703
Epoch 26/50
- 132s - loss: 0.7268 - acc: 0.7617 - val_loss: 0.7540 - val_acc: 0.7768
Epoch 27/50
- 131s - loss: 0.7252 - acc: 0.7627 - val_loss: 0.7368 - val_acc: 0.7807
Epoch 28/50
- 131s - loss: 0.7182 - acc: 0.7642 - val_loss: 0.7246 - val_acc: 0.7825
Epoch 29/50
- 131s - loss: 0.7208 - acc: 0.7642 - val_loss: 0.7422 - val_acc: 0.7816
Epoch 30/50
- 131s - loss: 0.7147 - acc: 0.7656 - val_loss: 0.7827 - val_acc: 0.7635
Epoch 31/50
- 132s - loss: 0.7100 - acc: 0.7673 - val_loss: 0.7574 - val_acc: 0.7760
Epoch 32/50
- 131s - loss: 0.7097 - acc: 0.7672 - val_loss: 0.7480 - val_acc: 0.7775
Epoch 33/50
- 132s - loss: 0.7064 - acc: 0.7690 - val_loss: 0.7730 - val_acc: 0.7684
Epoch 34/50
- 131s - loss: 0.7036 - acc: 0.7688 - val_loss: 0.7184 - val_acc: 0.7843
Epoch 35/50
- 132s - loss: 0.7002 - acc: 0.7697 - val_loss: 0.7248 - val_acc: 0.7835
Epoch 36/50
- 131s - loss: 0.6953 - acc: 0.7714 - val_loss: 0.7293 - val_acc: 0.7833
Epoch 37/50
- 131s - loss: 0.6928 - acc: 0.7717 - val_loss: 0.7407 - val_acc: 0.7766
Epoch 38/50
- 131s - loss: 0.6931 - acc: 0.7723 - val_loss: 0.7530 - val_acc: 0.7761
Epoch 39/50
- 131s - loss: 0.6912 - acc: 0.7729 - val_loss: 0.7341 - val_acc: 0.7833
Epoch 40/50
- 131s - loss: 0.6904 - acc: 0.7727 - val_loss: 0.7300 - val_acc: 0.7818
Epoch 41/50
- 131s - loss: 0.8237 - acc: 0.7343 - val_loss: 0.7859 - val_acc: 0.7630
Epoch 42/50
- 131s - loss: 0.6931 - acc: 0.7722 - val_loss: 0.7160 - val_acc: 0.7858
Epoch 43/50
- 131s - loss: 0.6858 - acc: 0.7746 - val_loss: 0.7256 - val_acc: 0.7850
Epoch 44/50
- 132s - loss: 0.6824 - acc: 0.7749 - val_loss: 0.7172 - val_acc: 0.7843
Epoch 45/50
- 133s - loss: 0.6818 - acc: 0.7754 - val_loss: 0.7533 - val_acc: 0.7741
Epoch 46/50
- 131s - loss: 0.6715 - acc: 0.7796 - val_loss: 0.6802 - val_acc: 0.8010
Epoch 47/50
- 132s - loss: 0.6477 - acc: 0.7923 - val_loss: 0.7166 - val_acc: 0.7896
Epoch 48/50
- 131s - loss: 0.6218 - acc: 0.8012 - val_loss: 0.6687 - val_acc: 0.8050
Epoch 49/50
- 131s - loss: 0.6082 - acc: 0.8057 - val_loss: 0.6411 - val_acc: 0.8112
Epoch 50/50
- 131s - loss: 0.5981 - acc: 0.8093 - val_loss: 0.6090 - val_acc: 0.8228
###Markdown
Metrics
###Code
%%time
plots_and_metrics(history_2, model_2)
###Output
_____no_output_____ |
Chapter 2 Basics/Chapter_2_Section_5_Using_Variables.ipynb | ###Markdown
Ch `02`: Concept `05` Using variables Here we go, here we go, here we go! Moving on from those simple examples, let's get a better understanding of variables. Start with a session:
###Code
import tensorflow as tf
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Below is a series of numbers. Don't worry what they mean. Just for fun, let's think of them as neural activations.
###Code
raw_data = [1., 2., 8., -1., 0., 5.5, 6., 13]
###Output
_____no_output_____
###Markdown
Create a boolean variable called `spike` to detect a sudden increase in the values.All variables must be initialized. Go ahead and initialize the variable by calling `run()` on its `initializer`:
###Code
spike = tf.Variable(False)
spike.initializer.run()
###Output
_____no_output_____
###Markdown
Loop through the data and update the spike variable when there is a significant increase:
###Code
for i in range(1, len(raw_data)):
if raw_data[i] - raw_data[i-1] > 5:
tf.assign(spike, True).eval()
else:
tf.assign(spike, False).eval()
print("Spike", spike.eval())
###Output
Spike False
Spike True
Spike False
Spike False
Spike True
Spike False
Spike True
###Markdown
You forgot to close the session! Here, let me do it:
###Code
sess.close()
###Output
_____no_output_____ |
AdventOfCode 2020/AOC.7-2.ipynb | ###Markdown
Common
###Code
import collections
import math
import re
from utils import *
from personal import SESSION
aoc = AOC(session=SESSION)
#aoc.verify_session()
data = aoc.get_today_file().analyse().head().data
###Output
Local file found.
4% of data are digits. Analyse as text.
0 empty line(s) found. Analyse as monline data.
===== HEAD (5) =====
shiny aqua bags contain 1 dark white bag.
muted blue bags contain 1 vibrant lavender bag, 4 dotted silver bags, 2 dim indigo bags.
drab gray bags contain 5 mirrored white bags, 1 light green bag, 5 shiny lavender bags, 5 faded aqua bags.
muted indigo bags contain 4 muted chartreuse bags, 2 dotted teal bags.
drab white bags contain 2 dull fuchsia bags, 1 vibrant bronze bag.
====================
###Markdown
Treatment
###Code
contents = collections.defaultdict(dict)
contentsin = collections.defaultdict(set)
for el in data:
main, content = el.split(' bags contain ')
for amount, name in re.findall('(\d) (.+?) bags?[.,]', content):
contents[main][name] = int(amount)
contentsin[name].add(main)
dic_head(contents)
print('--')
dic_head(contentsin)
###Output
shiny aqua {'dark white': 1}
muted blue {'vibrant lavender': 1, 'dotted silver': 4, 'dim indigo': 2}
drab gray {'mirrored white': 5, 'light green': 1, 'shiny lavender': 5, 'faded aqua': 5}
muted indigo {'muted chartreuse': 4, 'dotted teal': 2}
drab white {'dull fuchsia': 2, 'vibrant bronze': 1}
--
dark white {'light orange', 'shiny aqua', 'clear teal', 'drab cyan', 'faded turquoise', 'striped cyan', 'shiny gold', 'bright cyan'}
vibrant lavender {'pale silver', 'muted blue'}
dotted silver {'clear red', 'muted yellow', 'posh white', 'dark gold', 'muted blue'}
dim indigo {'dim bronze', 'mirrored gray', 'striped purple', 'muted blue'}
mirrored white {'mirrored gold', 'drab gray', 'posh yellow', 'dotted white', 'faded teal'}
###Markdown
Part 1
###Code
res = set()
def p1(x, i=0):
if i > 10:
return
for k in contentsin[x]:
p1(k, i+1)  # recurse upwards through the bags that can contain k
res.add(k)
p1('shiny gold')
len(res)
###Output
_____no_output_____
###Markdown
Part 2
###Code
tot = 0
def p2(x, amount=1):
global tot
for k, v in contents[x].items():
tot += v*amount
p2(k, v*amount)
p2('shiny gold')
print(tot)
###Output
14177
|
tooling/DataVisualization.ipynb | ###Markdown
JSS '19 - Gkortzis et al. - Data AnalysisThis notebook performs the following analyses reported in the study:1. [Prepare dataset](prepare)2. [RQ1](rq1) 1. [Descriptive statistics](rq1-descriptive) 2. [Descriptive statistics (sums & median)](rq1-sums) 4. [Regression Analysis (Prepare dataset)](rq1-regression) 5. [Dataset Visualization](rq1-visual) 6. [Multivariate Regression Analysis](rq1-regression-multivariate) 7. [rq1-boxplots](rq1-boxplots)3. [RQ2](rq2) 1. [Prepare Dataset](rq2-pd) 2. [Scatterplots](rq2-scatter) 3. [Boxplots](rq2-boxplots2) 4. [Regression Analysis [vuln-density, reuse-ratio]](rq2-regression) 5. [Regression Analysis [native-vuln-density, reuse-ratio]](rq2-regression2) 6. [Multivariate Regression Analysis [vuln-density, native-sloc, reuse-sloc]](rq2-regression3) 7. [Multivariate Regression Analysis [vuln-density, native-vuln-density, reuse-vuln-density]](rq2-regression4)4. [RQ3](rq3) 1. [Dataset Description](rq3-dd) 2. [Regression Analysis [cves-dependencies]](rq3-regression) 3. [RQ3 - Regression Analysis [v - dependencies]](rq3-potential) 3. [Regression Analysis [cves - module_size]](rq3-regression2) 4. [Regression Analysis [cve-density - dependencies]](rq3-regression3) 3. [Count Vulnerable Projects](rq3-count)5. [RQ4](rq4) 1. [Prepare Dataset](rq4-pd) 2. [Count Vulnerabilities](rq4-count) 3. [Regression Analysis](rq4-regression)6. [[Discussion] How are potential vulnerabilities related to disclosed ones?](discussion)7. [JSS Revision 1 - New Analysis](jss-rev1) Prepare dataset
###Code
import csv
import logging
import numpy as np
import pandas as pd
from scipy import stats
logging.basicConfig(level=logging.INFO)
def map_deps_to_projects(dependencies_usages):
logging.info("Creating projects dependencies' list..")
projects_dependencies = {}
with open(dependencies_usages, 'r') as csv_file:
for line in csv_file:
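# assumed row format (inferred from the parsing below): dependency;<unused>;project_1;project_2;...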
fields = line.replace('\n','').split(';')
# logging.info(fields)
dependency = fields[0]
for project in fields[2:]:
if project not in projects_dependencies:
projects_dependencies[project] = [dependency]
else:
projects_dependencies[project].append(dependency)
return projects_dependencies
def count_vulnerabilities(projects_dependencies, owasp_vulnerabilities):
logging.info("Creating projects cves list..")
dependencies_vulnerabilities = {}
with open(owasp_vulnerabilities, 'r') as csv_file:
for line in csv_file:
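# assumed row format (inferred from the parsing below): dependency;<unused>;num_cves;<unused>;cve_1,cve_2,...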
fields = line.replace('\n','').split(';')
# logging.info(fields)
dependency = fields[0]
number_of_cves = int(fields[2])
if number_of_cves > 0:
cves = fields[4].split(',')
dependencies_vulnerabilities[dependency] = set(cves)
projects_vulnerabilities = {}
for project in projects_dependencies:
cves = set()
for dependency in projects_dependencies[project]:
if dependency in dependencies_vulnerabilities:
dependency_cves = dependencies_vulnerabilities[dependency]
cves.update(dependency_cves)
else:
# logging.warning("dependency {} not found".format(dependency))
pass
projects_vulnerabilities[project] = len(cves)
# logging.info("{}-->{}".format(project,projects_vulnerabilities[project]))
return projects_vulnerabilities
def load_dataset(csv_file):
return pd.read_csv(csv_file)
def prepare_dataset(df):
print("Creating main dataframe. Size {}".format(len(df)))
# Calculate derived variables
df['#uv_p1'] = df['#uv_p1_r1'] + df['#uv_p1_r2'] + df['#uv_p1_r3'] + df['#uv_p1_r4']
df['#dv_p1'] = df['#dv_p1_r1'] + df['#dv_p1_r2'] + df['#dv_p1_r3'] + df['#dv_p1_r4']
df['#dev_p1'] = df['#dev_p1_r1'] + df['#dev_p1_r2'] + df['#dev_p1_r3'] + df['#dev_p1_r4']
df['#dnev_p1'] = df['#dnev_p1_r1'] + df['#dnev_p1_r2'] + df['#dnev_p1_r3'] + df['#dnev_p1_r4']
df['#dwv_p1'] = df['#dwv_p1_r1'] + df['#dwv_p1_r2'] + df['#dwv_p1_r3'] + df['#dwv_p1_r4']
df['#dnwv_p1'] = df['#dnwv_p1_r1'] + df['#dnwv_p1_r2'] + df['#dnwv_p1_r3'] + df['#dnwv_p1_r4']
df['#uv_p2'] = df['#uv_p2_r1'] + df['#uv_p2_r2'] + df['#uv_p2_r3'] + df['#uv_p2_r4']
df['#dv_p2'] = df['#dv_p2_r1'] + df['#dv_p2_r2'] + df['#dv_p2_r3'] + df['#dv_p2_r4']
df['#dev_p2'] = df['#dev_p2_r1'] + df['#dev_p2_r2'] + df['#dev_p2_r3'] + df['#dev_p2_r4']
df['#dnev_p2'] = df['#dnev_p2_r1'] + df['#dnev_p2_r2'] + df['#dnev_p2_r3'] + df['#dnev_p2_r4']
df['#dwv_p2'] = df['#dwv_p2_r1'] + df['#dwv_p2_r2'] + df['#dwv_p2_r3'] + df['#dwv_p2_r4']
df['#dnwv_p2'] = df['#dnwv_p2_r1'] + df['#dnwv_p2_r2'] + df['#dnwv_p2_r3'] + df['#dnwv_p2_r4']
df['#uv'] = df['#uv_p1'] + df['#uv_p2']
df['#dv'] = df['#dv_p1'] + df['#dv_p2']
df['#dev'] = df['#dev_p1'] + df['#dev_p2']
df['#dnev'] = df['#dnev_p1'] + df['#dnev_p2']
df['#dwv'] = df['#dwv_p1'] + df['#dwv_p2']
df['#dnwv'] = df['#dnwv_p1'] + df['#dnwv_p2']
df['#uv_sloc'] = df['#uv'] / (df['#d_sloc']+df['#u_sloc'])
df['#dv_sloc'] = df['#dv'] / (df['#d_sloc']+df['#u_sloc'])
# df['#dev_sloc'] = df['#dev'] / (df['#d_sloc']+df['#u_sloc'])
# df['#dnev_sloc'] = df['#dnev'] / (df['#d_sloc']+df['#u_sloc'])
# df['#dwv_sloc'] = df['#dwv'] / (df['#d_sloc']+df['#u_sloc'])
# df['#dnwv_sloc'] = df['#dnw'] / (df['#d_sloc']+df['#u_sloc'])
df['classes'] = df['#u_classes'] + df['#d_classes']
df['sloc'] = df['#u_sloc'] + df['#d_sloc']
df['v'] = df['#uv'] + df['#dv']
# Remove project with no external classes or very small native code base
df = df[df['#d_classes'] > 0]
df = df[df['#u_sloc'] >= 1000]
print("Initial filtering reduced size to {}".format(len(df)))
return df
def enhance_dataset(df, projects_dependencies, projects_vulnerabilities):
logging.info("Enhancing dataframe with dependencies and cves..")
df["#dependencies"] = np.nan
df["#cves"] = np.nan
for index, row in df.iterrows():
project = row['project']
number_of_dependencies = len(projects_dependencies[project])
number_of_cves = projects_vulnerabilities[project]
df.at[index,'#dependencies'] = int(number_of_dependencies)
df.at[index,'#cves'] = int(number_of_cves)
return df
def detect_enterprise_repos(df, enterprise_repos):
logging.info("Detecting enterprise repos")
df["is_enterprise"] = np.nan
df["contributors"] = np.nan
# read the enterprise repos
with open(enterprise_repos) as f:
lines = f.read().splitlines()
repositories_info = {}
for repository in lines[1:]: # skip csv's headings
fields = repository.split(',')
repositories_info[fields[0]] = fields[1:]
for index, row in df.iterrows():
project = row['project']
if not project:
print("Project {} not found".format(project))
continue
if project in repositories_info:
is_of_enterprise_org = repositories_info[project][3]
contributors = repositories_info[project][4]
else:
print("{} :: does not exist in the group ids list".format(project))
is_of_enterprise_org = 0
contributors = 1
df.at[index,'is_enterprise'] = int(is_of_enterprise_org)
df.at[index,'contributors'] = int(contributors)
return df
def filter_dataset(df, projects_as_dependencies):
logging.info("Filtering dataset")
project_list = []
with open(projects_as_dependencies, 'r') as csv_file:
for line in csv_file:
project = line.rstrip('\n')
project_list.append(project)
df = df[df.project != project]
print("Selected data set after filtering :: {}".format(len(df)))
return df
owasp_vulnerabilities = '../owasp_vulnerabilities_enhanced.csv'
dependencies_usages = '../depependencies_usages.csv'
projects_dataset = '../datasets/dataset_complete.csv'
study_vars = ['classes','#u_classes','#d_classes',
'sloc','#u_sloc','#d_sloc','#de_sloc','#dne_sloc','#dw_sloc','#dnw_sloc',
'v', '#uv', '#dv', '#dev', '#dnev', '#dwv', '#dnwv',
'#uv_classes', '#dv_classes', '#uv_sloc', '#dv_sloc',
'#dependencies', '#cves']
projects_dependencies = map_deps_to_projects(dependencies_usages)
projects_vulnerabilities = count_vulnerabilities(projects_dependencies, owasp_vulnerabilities)
projects_as_dependencies = '../projects_as_dependencies.csv'
enterprise_repos = "../projects_groupids_enterprise_info.csv"
df = load_dataset(projects_dataset)
df = prepare_dataset(df)
df = enhance_dataset(df, projects_dependencies, projects_vulnerabilities)
df = detect_enterprise_repos(df, enterprise_repos)
df = filter_dataset(df, projects_as_dependencies)
###Output
_____no_output_____
###Markdown
RQ1__RQ1: "What size and reuse factors are related with potential security vulnerabilities?"__. [Back to table of contents](index) RQ1 - Descriptive statisticsThis is the table with the descriptive statistics for the whole dataset. [Back to table of contents](index)
###Code
VLn = sum(df['#uv_classes_sloc'])
VLr = sum(df['#dv_classes_sloc'])
# Add reuse ratio
df_filtered = df[study_vars]
pd.set_option('float_format', '{:f}'.format)
df_filtered.describe()
# df_filtered.describe().to_csv("../datasets/temp_descriptive_stats.csv") # uncomment if you want to export the descriptive stats into a csv file
###Output
_____no_output_____
###Markdown
RQ1 - Descriptive statistics (Sums & median)// TODO description [Back to table of contents](index)
###Code
C = sum(df['classes'])
Cn = sum(df['#u_classes'])
Cr = sum(df['#d_classes'])
L = sum(df['sloc'])
Ln = sum(df['#u_sloc'])
Lr = sum(df['#d_sloc'])
Lre = sum(df['#de_sloc'])
Lrne = sum(df['#dne_sloc'])
Lrw = sum(df['#dw_sloc'])
Lrnw = sum(df['#dnw_sloc'])
V = sum(df['v'])
Vn = sum(df['#uv'])
Vr = sum(df['#dv'])
Vre = sum(df['#dev'])
Vrne = sum(df['#dnev'])
Vrw = sum(df['#dwv'])
Vrnw = sum(df['#dnwv'])
VCn = sum(df['#uv_classes'])
VCr = sum(df['#dv_classes'])
D = sum(df['#dependencies'])
print('''----- Descriptive statistics [sum] -----
{:30}{:=10d}\n{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10d}\n{:30}{:=10d}
{:30}{:=10.0f}
'''.format('Classes =',C,'Native classes =',Cn,'Reused classes =',Cr,
'Sloc =', L,'Native sloc =',Ln,'Reused sloc =', Lr,
'Reused enterprise sloc =',Lre,'Reused volunteer sloc =', Lrne,
'Reused well-known sloc =',Lrw, 'Reused less-known sloc =', Lrnw,
'Vulnerabilities (potential) =',V, 'Vulns native =', Vn, 'Vulns reused =', Vr,
'Vulns reused enterprise =', Vre, 'Vulns reused volunteer =', Vrne,
'Vulns reused well-known =', Vrw,'Vulns reused less-known =', Vrnw,
'Vulnerbale native classes =',VCn,'Vulnerable reused classes', VCr,
'Vulnerable native sloc',VLn,'Vulnerable reused sloc =', VLr,
'Dependencies =',D))
print("---- Descriptive statistics [median] ---")
df_filtered.median()
###Output
_____no_output_____
###Markdown
RQ1 - Descriptive statistics [For Enterprise projects]The following represent the descriptive statistics for the Enterprise projects [Back to table of contents](index)
###Code
enterprise = df[df['is_enterprise'] > 0]
enterprise.describe()
# enterprise.describe().to_csv("../datasets/temp_enterprise_descriptive_statistics.csv")
###Output
_____no_output_____
###Markdown
RQ1 - Descriptive statistics [For Volunteer projects]The following represent the descriptive statistics for the Volunteer projects [Back to table of contents](index)
###Code
non_enterprise = df[df['is_enterprise'] == 0]
non_enterprise.describe()
# non_enterprise.describe().to_csv("../datasets/temp_volunteer_descriptive_statistics.csv")
###Output
_____no_output_____
###Markdown
RQ1 - Regression Analysis (Prepare dataset)// TODO description [Back to table of contents](index)
###Code
#-----------------
# IMPORTS & CONFIG
#-----------------
import pandas
import numpy
import seaborn
import statsmodels.formula.api as sm
from scipy import stats
from matplotlib import pyplot
from IPython.display import display, HTML
%matplotlib inline
marker_size = 5
df['dependencies'] = df['#dependencies'] # make a copy of the column without the '#' that cannot be parsed by statsmodels library
df['cves'] = df['#cves'] # make a copy of the column without the '#' that cannot be parsed by statsmodels library
df['reuse_ratio'] = df['#d_sloc'] / (df['#d_sloc']+df['#u_sloc']) # these variable is also declared and initialized in RQ2
df['wk_ratio'] = df['#dw_sloc'] / (df['#dw_sloc']+df['#dnw_sloc'])
df['dv'] = df['#dv']
df['dependency_size'] = df['#d_sloc'] / df['dependencies'] # the average size of the dependencies modules of a project
df['cve_density'] = df['#cves'] / df['#d_sloc']
#
# Standardize beta coefficient (by z-score)
#
df['v_z'] = df['v'].pipe(stats.zscore)
df['sloc_z'] = df['sloc'].pipe(stats.zscore)
df['classes_z'] = df['classes'].pipe(stats.zscore)
df['dependencies_z'] = df['#dependencies'].pipe(stats.zscore)
df['cves_z'] = df['#cves'].pipe(stats.zscore)
df['reuse_ratio_z'] = df['reuse_ratio'].pipe(stats.zscore)
df['wk_ratio_z'] = df['wk_ratio'].pipe(stats.zscore)
df['dv_z'] = df['dv'].pipe(stats.zscore)
df['dependency_size_z'] = df['dependency_size'].pipe(stats.zscore)
df['cve_density_z'] = df['cve_density'].pipe(stats.zscore)
df['u_sloc_z'] = df['#u_sloc'].pipe(stats.zscore)
df['d_sloc_z'] = df['#d_sloc'].pipe(stats.zscore)
df['dw_sloc_z'] = df['#dw_sloc'].pipe(stats.zscore)
df['dnw_sloc_z'] = df['#dnw_sloc'].pipe(stats.zscore)
###Output
_____no_output_____
###Markdown
RQ1 - Dataset VisualizationThe following four figures present the regression line of the number of vulnerabilities against the 4 factors: _'sloc'_, _'dependencies'_, _'reuse-ratio'_ and _'classes'_.[Back to table of contents](index)
###Code
# print plots with regression line
seaborn.lmplot(x='sloc',y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
# seaborn.lmplot(x='sloc_z',y='v_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='dependencies',y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
# seaborn.lmplot(x='v_z',y='dependencies_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='reuse_ratio', y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
# seaborn.lmplot(x='reuse_ratio_z',y='v_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='classes', y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
# seaborn.lmplot(x='classes_z',y='v_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
Multivariate Regression AnalysisHere, we calculate the standardized beta values and perform a multivariate regression analysis on the four factors: _'sloc'_, _'dependencies'_, _'reuse-ratio'_ and _'classes'_.[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="v_z ~ sloc_z + dependencies_z + reuse_ratio_z + classes_z", data=df)
result = ols_model.fit()
print(result.summary())
###Output
_____no_output_____
###Markdown
Multivariate Regression Analysis [well-known]. Here, we calculate the standardized beta values and perform a multivariate regression analysis of the potential vulnerabilities against native code size and the size of code reused from well-known and less-known projects (_'#u_sloc'_, _'#dw_sloc'_, _'#dnw_sloc'_). [Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="v_z ~ u_sloc_z + dw_sloc_z + dnw_sloc_z", data=df)
result = ols_model.fit()
print(result.summary())
###Output
_____no_output_____
###Markdown
Correlation between well-known Ratio and Vulnerabilities. Here, we calculate the Kendall rank correlation (tau) between the amount of vulnerabilities in a project and the ratio of well-known dependencies. [Back to table of contents](index)
###Code
# Correlation with Kendall Tau
tau, p_value = stats.kendalltau(df['v_z'], df['wk_ratio_z'])
print(f'tau: {round(tau,2)}, p-value: {round(p_value,2)}')
###Output
_____no_output_____
###Markdown
RQ1 - Boxplots[Back to table of contents](index)
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
fig, axs = plt.subplots(nrows=1, ncols=4, figsize=(8, 4), tight_layout = {'pad': 1})
bp_vars = ['sloc', 'classes', 'dependencies', 'reuse_ratio']
labels = ['Design\nsize', 'Number of\nclasses', 'Number of\ndependencies', 'Reuse\nratio']
# Plot boxes
for i in range(len(labels)):
bxp_df = df[bp_vars[i]]
axs[i].boxplot(bxp_df, showfliers=False)
axs[i].set_xticks([])
axs[i].set_title(labels[i])
fig.subplots_adjust(hspace=0.1, wspace=0.5)
plt.savefig("../figs/boxplots_rq1.pdf")
plt.show()
###Output
_____no_output_____
###Markdown
RQ2__RQ2: "How are potential security vulnerabilities distributed between native and reused code?"__[Back to table of contents](index) RQ2 - Prepare DatasetDefine new variables for the analysis of RQ2 and calculate their standardized beta values. [Back to table of contents](index)
###Code
#
# Define and calculate new variables
#
df['reuse_ratio'] = df['#d_sloc'] / (df['#d_sloc']+df['#u_sloc'])
df['uv_ratio'] = df['#uv'] / df['#u_sloc']
df['dv_ratio'] = df['#dv'] / df['#d_sloc']
df['#v_sloc'] = (df['#uv'] + df['#dv']) / (df['#d_sloc']+df['#u_sloc']) # vulnerability density
#
# Standardize beta coefficient (by z-score)
#
df['reuse_ratio_z'] = df['reuse_ratio'].pipe(stats.zscore)
df['uv_ratio_z'] = df['uv_ratio'].pipe(stats.zscore) # vulnerability density in native code
df['dv_ratio_z'] = df['dv_ratio'].pipe(stats.zscore) # vulnerability density in reused code
df['v_sloc_z'] = df['#v_sloc'].pipe(stats.zscore) # vulnerability density
###Output
_____no_output_____
###Markdown
RQ2 - Scatterplots[Back to table of contents](index)
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams.update({'font.size': 16})
fig, axs = plt.subplots(nrows=1, ncols=2, figsize=(20, 6), tight_layout = {'pad': 1})
label_size = 24
axs[0].scatter(df['uv_ratio'], df['reuse_ratio'],s=5,cmap='bwr')
axs[0].set_xlim([-0.0001,0.02])
axs[0].set_xlabel("Native Vulnerability Density", fontsize=label_size)
axs[0].set_ylabel('Reuse Ratio', rotation=90, fontsize=label_size)
axs[1].scatter(df['dv_ratio'], df['reuse_ratio'],s=5,cmap='bwr')
axs[1].set_xlim([-0.0001,0.01])
axs[1].set_xlabel("Reused Vulnerability Density", fontsize=label_size)
axs[1].set_yticks([])
fig.subplots_adjust(wspace=0.1)
plt.savefig("../figs/scatter_plots.pdf")
plt.show()
###Output
_____no_output_____
###Markdown
RQ2 - Boxplots[Back to table of contents](index)
###Code
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
def draw_seperate_plots():
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(8, 4), tight_layout = {'pad': 1})
bp_vars = ['uv_ratio', 'dv_ratio', '#v_sloc'] #'reuse_ratio'
labels = ['Native\nvulnerabilities density', 'Reused\nvulnerabilities density', 'Overall\nvulnerabilities density'] #'Reuse ratio',
# Plot boxes
for i in range(len(labels)):
bxp_df = df[bp_vars[i]]
axs[i].boxplot(bxp_df, showfliers=False)
axs[i].set_xticks([])
axs[i].set_ylim([-0.0001,0.0065])
axs[i].set_ylim([-0.0001,0.0100])
axs[i].set_title(labels[i])
fig.subplots_adjust(hspace=0.1, wspace=0.5)
plt.savefig("../figs/boxplots2.pdf")
plt.show()
def draw_merged_plots():
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
# Multiple box plots on one Axes
boxplots_df = [df[bp_vars[0]]*1000, df[bp_vars[1]]*1000, df[bp_vars[2]]*1000]
fig = plt.figure(1, figsize=(6, 4),tight_layout = {'pad': 1})
# fig, ax = plt.subplots(figsize=(10, 6), tight_layout = {'pad': 1})
# Create an axes instance
ax = fig.add_subplot(111)
ax.boxplot(boxplots_df, showfliers=False, widths=0.15)
# ax.set_xticks([])
ax.set_ylim([-0.0001*1000,0.0100*1000])
ax.yaxis.grid(False)
## Custom x-axis labels
ax.set_xticklabels(labels)
ax.set_axisbelow(True)
# Create the boxplot
# bp = ax.boxplot(boxplots_df)
plt.savefig("../figs/boxplots_rq2_compact.pdf")
plt.show()
draw_seperate_plots()
draw_merged_plots()
###Output
_____no_output_____
###Markdown
RQ2 - Regression Analysis [vuln-density, reuse-ratio]In the following analysis we investigate how reuse ratio in a project is related to its vulnerability density. The results show that there is no evidence that these two variables are somehow related.[Back to table of contents](index)
###Code
# OLS with beta standardized
df['v_sloc'] = df['#v_sloc']
ols_model = sm.ols(formula="v_sloc_z ~ reuse_ratio_z", data=df)
result = ols_model.fit()
print(result.summary())
seaborn.lmplot(x='reuse_ratio', y='#v_sloc',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ2 - Regression Analysis [native-vuln-density, reuse-ratio]. In the following analysis we investigate how the reuse ratio of a project is related to the vulnerability density of its native code. The results show that there is a weak correlation between the two variables. __Very unexpected result__: how can we interpret it? [Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="uv_ratio_z ~ reuse_ratio_z", data=df)
result = ols_model.fit()
print(result.summary())
seaborn.lmplot(x='reuse_ratio_z', y='uv_ratio_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ2 - Multivariate Regression Analysis [vuln-density, native-sloc, reuse-sloc]In the following analysis we investigate how native and reused code contribute to the vulnerability density of the project. <!--The results show that there is a weak correlation between the two variables. __Very unexpected results__: How can we interprete? -->[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="uv_ratio_z ~ u_sloc_z + d_sloc_z", data=df)
result = ols_model.fit()
print(result.summary())
###Output
_____no_output_____
###Markdown
RQ2 - Multivariate Regression Analysis [vuln-density, native-vuln-density, reuse-vuln-density]In the following analysis we investigate how native and reused code contribute to the vulnerability density of the project. [Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="v_sloc_z ~ uv_ratio_z + dv_ratio_z", data=df)
result = ols_model.fit()
print(result.summary())
###Output
_____no_output_____
###Markdown
RQ3__RQ3: "To What extent do open source projects suffer from vulnerabilities introduced through dependencies?"__.For that RQ we collect information from the the OWASP dependenvcy-check tool in order to find how projects may use [Back to table of contents](index) RQ3 - Dataset DescriptionVizualize how projects are distributed based to the number of their disclosed vulnerabilities. [Back to table of contents](index)
###Code
import seaborn as sns
sns.set(font_scale=1.1)
ax = sns.violinplot(y=df['#cves'])
fig = ax.get_figure()
ax.set_xlabel("Disclosed Vulnerabilities in Projects")
ax.set_ylabel("Observed values")
fig.savefig('../figs/rq3_violin.pdf')
###Output
_____no_output_____
###Markdown
RQ3 - Regression Analysis [cves - dependencies]Perform a regression analysis to investigate how the number of the disclosed vulnerabilities of a project is related to the number of its dependencies.[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="cves_z ~ dependencies_z", data=df)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
seaborn.lmplot(x='dependencies',y='cves',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ3 - Regression Analysis [v - dependencies]Perform a regression analysis to investigate how the number of the disclosed vulnerabilities of a project is related to the number of its dependencies.[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="v_z ~ dependencies_z", data=df)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
seaborn.lmplot(x='dependencies',y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ3 - Regression Analysis [cves - module_size]Perform a regression analysis to investigate how the number of the disclosed vulnerabilities of a project is related to the size of its dependencies.The results show that the size of a module (dependency) is not related to the number of its disclosed vulnerabilities. [Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="cves_z ~ dependency_size_z", data=df)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
seaborn.lmplot(x='dependency_size',y='cves',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ3 - Regression Analysis [cve-density - dependencies]Perform a regression analysis to investigate how the cve density of a project is related to the number of its dependencies.[Back to table of contents](index)
###Code
df_filtered = df[df['cve_density'] < 0.2] # filter a great outlier
# OLS with beta standardized
ols_model = sm.ols(formula="cve_density_z ~ dependencies_z", data=df_filtered)
result = ols_model.fit()
print(result.summary())
seaborn.lmplot(x='dependencies', y='cve_density',data=df_filtered,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ3 - Multivariate Regression AnalysisPerform a multivariate regression analysis to investigate how the cves of a project are related to the following variables: number of its dependencies, size of the dependencies, reused_code.[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="cves_z ~ dependencies_z + dependency_size_z + d_sloc_z", data=df)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
seaborn.lmplot(x='dependencies_z',y='cves_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='dependency_size_z',y='cves_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='d_sloc_z', y='cves_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='dependency_size_z',y='d_sloc_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='dependencies_z',y='d_sloc_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ3 - Vulnerable projectsThe following script identifies projects that contain at least one vulnerable dependency.[Back to table of contents](index)
###Code
vul_projects = df[df['#cves'] > 0]
print("Vulnerable projects {} out of {} [{:2.2%}]".format(len(vul_projects.index), len(df.index), len(vul_projects.index)/len(df.index)))
###Output
_____no_output_____
###Markdown
RQ4__RQ4: "How is the use frequency of a dependency related to its disclosed vulnerabilities"__.For this RQ we: 1. Generate the dataset and present the descriptive statistics,2. Count the vulnerable dependencies3. Perform a univariate regression analysis between the number of vulnerabilities and its use frequency.[Back to table of contents](index) RQ4 - Prepare datasetThe following code generates the dataset used for answering RQ4 and presents its descriptive statistics. [Back to table of contents](index)
###Code
import csv
import logging
import pandas as pd
logging.basicConfig(level=logging.INFO)
def get_dependencies(dependencies_usages):
dependencies = []
with open(dependencies_usages, 'r') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
for row in csv_reader:
logging.debug("{}::{}".format(row[0],row[1]))
dependencies.append([row[0],row[1]])
return dependencies
def get_vulnerabilities(owasp_vulnerabilities):
dependencies_vulns = {}
with open(owasp_vulnerabilities, 'r') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
for row in csv_reader:
logging.debug("{}::{}".format(row[0],row[1]))
dependencies_vulns[row[0]] = row[1]
return dependencies_vulns
def get_potential_vulnerabilities(depependencies_spotbugs):
depependencies_potential_vulns = {}
with open(depependencies_spotbugs, 'r') as csv_file:
csv_reader = csv.reader(csv_file, delimiter=';')
for row in csv_reader:
logging.debug("{}::{}".format(row[0],row[1]))
depependencies_potential_vulns[row[0]] = row[1]
return depependencies_potential_vulns
def create_dataset(dependencies_usages, owasp_vulnerabilities, depependencies_spotbugs):
dependencies = get_dependencies(dependencies_usages)
logging.info("Dependencies with usages :: {}".format(len(dependencies)))
dependencies_vulns = get_vulnerabilities(owasp_vulnerabilities)
logging.info("Dependencies with vulnerabilities :: {}".format(len(dependencies_vulns)))
depependencies_potential_vulns = get_potential_vulnerabilities(depependencies_spotbugs)
logging.info("Dependencies with potential vulnerabilities :: {}".format(len(depependencies_potential_vulns)))
data = []
logging.info("Creating dataset...")
for entry in dependencies:
logging.debug("Parsing usage dependency :: {}".format(entry))
dependency = entry[0]
usages = int(entry[1])
vulns = 0
potential_vulns = 0
if dependency not in dependencies_vulns:
logging.warning("Dependency not in owasp reports :: {}".format(dependency))
else:
vulns = int(dependencies_vulns[dependency])
if dependency not in depependencies_potential_vulns:
logging.warning("Dependency not in spotbugs reports :: {}".format(dependency))
else:
potential_vulns = int(depependencies_potential_vulns[dependency])
data_entry = [dependency, usages, vulns, potential_vulns]
data.append(data_entry)
return data
owasp_vulnerabilities = '../owasp_vulnerabilities.csv'
dependencies_usages = '../depependencies_usages.csv'
depependencies_spotbugs = "../depependencies_spotbugs.csv"
data = create_dataset(dependencies_usages, owasp_vulnerabilities, depependencies_spotbugs)
logging.info("Created dataset with {} entries".format(len(data)))
# print(data[1:10])
df_vulnerable = pd.DataFrame(data, columns = ['Dependency', 'Usages', 'Vulnerabilities', 'Potential_vulns'])
# df_vulnerable[1:10]
df_vulnerable.describe()
###Output
_____no_output_____
###Markdown
RQ4 - Count Vulnerable dependenciesIn this step we analyze the dependencies used in the projects and report those that are vulnerable with at least one disclosed vulnerability. [Back to table of contents](index)
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from scipy import stats
# df_vulnerable[1:10]
df_vulnerable_filtered = df_vulnerable[df_vulnerable['Vulnerabilities'] > 0] # exclude non-vulnerable dependencies
df_vulnerable_filtered = df_vulnerable_filtered[df_vulnerable_filtered['Vulnerabilities'] < 40] # exclude one extreme (outlier) value
# df_vulnerable = df_vulnerable[df_vulnerable['Usages'] < 40] # exclude one extreme (outlier) value
print("Found {} vulnerable dependencies out of {} total [{:2.2%}]".format(len(df_vulnerable_filtered.index), len(df_vulnerable.index), len(df_vulnerable_filtered.index)/len(df_vulnerable.index)))
sns.set(font_scale=1.1)
ax = sns.violinplot(y=df_vulnerable_filtered['Vulnerabilities'])
fig = ax.get_figure()
ax.set_ylabel("Disclosed Vulnerabilities")
fig.savefig('../figs/rq4_violin.pdf')
# df_vulnerable_filtered.plot(kind='scatter',x='Usages',y='Vulnerabilities',color='red')
# df_vulnerable[1:10]
###Output
_____no_output_____
###Markdown
RQ4 - Regression Analysis [CVEs - Usages][Back to table of contents](index)
###Code
# todo zero values
df_vulnerable_filtered['Vulnerabilities_z'] = df_vulnerable_filtered['Vulnerabilities'].pipe(stats.zscore)
df_vulnerable_filtered['Usages_z'] = df_vulnerable_filtered['Usages'].pipe(stats.zscore)
# OLS with beta standardized
ols_model = sm.ols(formula="Vulnerabilities_z ~ Usages_z", data=df_vulnerable_filtered)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
# seaborn.lmplot(x='sloc',y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
seaborn.lmplot(x='Vulnerabilities_z',y='Usages_z',data=df_vulnerable_filtered,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
RQ4 - Regression Analysis [Potential Vulns - Usages][Back to table of contents](index)
###Code
df_vulnerable_filtered['Potential_vulns_z'] = df_vulnerable_filtered['Potential_vulns'].pipe(stats.zscore)
# OLS with beta standardized
ols_model = sm.ols(formula="Potential_vulns_z ~ Usages_z", data=df_vulnerable_filtered)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
seaborn.lmplot(x='Usages',y='Potential_vulns',data=df_vulnerable_filtered,fit_reg=True, scatter_kws={"s": marker_size})
###Output
_____no_output_____
###Markdown
[Discussion] How are potential vulnerabilities related to disclosed ones?[Back to table of contents](index)
###Code
# OLS with beta standardized
ols_model = sm.ols(formula="cves_z ~ dv_z", data=df)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
fig = seaborn.lmplot(x='cves_z',y='dv_z',data=df,fit_reg=True, scatter_kws={"s": marker_size})
plt.xlabel('Disclosed vulnerabilities')
plt.ylabel('Potential vulnerabilities')
fig.savefig('../figs/vulnerabilities_z.pdf')
# print plots with regression line
fig = seaborn.lmplot(x='cves',y='dv',data=df,fit_reg=True, scatter_kws={"s": marker_size})
plt.xlabel('Disclosed vulnerabilities')
plt.ylabel('Potential vulnerabilities')
fig.savefig('../figs/vulnerabilities.pdf')
###Output
_____no_output_____
###Markdown
JSS Revision 1 - New analysis Eyes on dependencies[Back to table of contents](index)
###Code
import pandas as pd
import pprint
pp = pprint.PrettyPrinter(indent=1)
def create_dependencies_to_contributors_dataframe(projects_info,dependencies_usages,dependencies_info,dependencies_cves, dependencies_spotbugs):
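# Builds one row per dependency with: the enterprise / well-known flags from the group-id
# info file, the number of studied projects that use it, the total number of contributors of
# those projects, and its disclosed (OWASP CVE) and potential (SpotBugs) vulnerability counts.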
# collect projects and contributors
with open(projects_info) as f:
lines = f.read().splitlines()
projects = {}
for line in lines[1:]:
fields = line.split(',')
projects[fields[0]]=int(fields[5])
# collect dependencies info
with open(dependencies_info) as f:
lines = f.read().splitlines()
dependencies = {}
for line in lines[1:]:
fields = line.split(';')
dependencies[fields[0]]=[int(fields[2]),int(fields[6])]
# collect dependencies usages
with open(dependencies_usages) as f:
lines = f.read().splitlines()
for line in lines[1:]:
fields = line.split(';')
dep_name = fields[0]
used_in = fields[2:]
sum=0
for project in used_in:
sum += projects[project]
if dep_name in dependencies:
dependencies[dep_name].append(len(used_in))
dependencies[dep_name].append(sum)
# collect cves
with open(dependencies_cves) as f:
lines = f.read().splitlines()
for line in lines:
fields = line.split(';')
dep_name = fields[0]
cves = fields[2]
if dep_name in dependencies:
dependencies[dep_name].append(int(cves))
# collect spotbugs
with open(dependencies_spotbugs) as f:
lines = f.read().splitlines()
for line in lines:
fields = line.split(';')
dep_name = fields[0]
potential_vulns = fields[1]
if dep_name in dependencies:
dependencies[dep_name].append(int(potential_vulns))
# cleanup entries with missing fields (only one)
delete = [key for key in dependencies if len(dependencies[key]) < 5]
for key in delete: del dependencies[key]
#TODO: transform the dict to dataframe
df_dict = {'dependency': [],'enterprise': [],'well_known': [],'used_projects': [], 'contributors_in_used_projects': [],'cves': [],'spotbugs_vuls': []}
for d in dependencies:
df_dict['dependency'].append(d)
df_dict['enterprise'].append(dependencies[d][0])
df_dict['well_known'].append(dependencies[d][1])
df_dict['used_projects'].append(dependencies[d][2])
df_dict['contributors_in_used_projects'].append(dependencies[d][3])
df_dict['cves'].append(dependencies[d][4])
df_dict['spotbugs_vuls'].append(dependencies[d][5])
return pd.DataFrame.from_dict(df_dict)
dependencies_info = "../dependencies_groupids_enterprise_info.csv"
dependencies_usages = "../depependencies_usages.csv"
projects_info = "../projects_groupids_enterprise_info.csv"
dependencies_spotbugs = "../depependencies_spotbugs.csv"
dependencies_cves = "../owasp_vulnerabilities.csv"
df_deps = create_dependencies_to_contributors_dataframe(projects_info,dependencies_usages,dependencies_info, dependencies_cves, dependencies_spotbugs)
df_deps.describe()
df_deps.median()
df_deps_enterprise = df_deps[df_deps['enterprise'] >0]
print((len(df_deps_enterprise)/len(df_deps))*100)
from scipy import stats
print(f'overall (N={len(df_deps)})')
df_test = df_deps
for v in ['cves', 'spotbugs_vuls']:
tau, p_value = stats.kendalltau(df_test[v], df_test['contributors_in_used_projects'])
print(f' [{v} x contrib.] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
tau, p_value = stats.kendalltau(df_test[v], df_test['used_projects'])
print(f' [{v} x n_projs ] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
for c in ['enterprise', 'well_known']:
for b in [1,0]:
df_test = df_deps[df_deps[c] == b]
print(f'{c} == {b} (N={len(df_test)})')
for v in ['cves', 'spotbugs_vuls']:
tau, p_value = stats.kendalltau(df_test[v], df_test['contributors_in_used_projects'])
print(f' [{v} x contrib.] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
tau, p_value = stats.kendalltau(df_test[v], df_test['used_projects'])
print(f' [{v} x n_projs ] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
df_test = df_deps[(df_deps['enterprise'] == 1) & (df_deps['well_known'] == 1)]
print(f'enterprise == 1 & well_known == 1 (N={len(df_test)})')
for v in ['cves', 'spotbugs_vuls']:
tau, p_value = stats.kendalltau(df_test[v], df_test['contributors_in_used_projects'])
print(f' [{v} x contrib.] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
tau, p_value = stats.kendalltau(df_test[v], df_test['used_projects'])
print(f' [{v} x n_projs ] tau: {round(tau,2)}, p-value: {round(p_value,2)}')
ols_model = sm.ols(formula="contributors_in_used_projects ~ cves", data=df_deps)
result = ols_model.fit()
print(result.summary())
###Output
_____no_output_____
###Markdown
Redo RQs for enterprise projects vs. volunteer projects [revision 1 comment 1][Back to table of contents](index)
###Code
# Split datasets
df_e = df[df['is_enterprise'] == 1]
df_ne = df[df['is_enterprise'] == 0]
#
# RQ1
#
# OLS with beta standardized
ols_model = sm.ols(formula="v_z ~ sloc_z + dependencies_z + reuse_ratio_z + classes_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_z ~ sloc_z + dependencies_z + reuse_ratio_z + classes_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_z ~ u_sloc_z + dw_sloc_z + dnw_sloc_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_z ~ u_sloc_z + dw_sloc_z + dnw_sloc_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
# Correlation with Kendall Tau
tau, p_value = stats.kendalltau(df_e['v_z'], df_e['wk_ratio_z'])
print(f'Enterprise: tau: {round(tau,2)}, p-value: {round(p_value,2)}')
tau, p_value = stats.kendalltau(df_ne['v_z'], df_ne['wk_ratio_z'])
print(f'Non-enterprise: tau: {round(tau,2)}, p-value: {round(p_value,2)}')
#
# RQ2
#
print('=====================================================')
print('RQ2 - Regression Analysis [vuln-density, reuse-ratio]')
print('=====================================================\n')
ols_model = sm.ols(formula="v_sloc_z ~ reuse_ratio_z", data=df)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_sloc_z ~ reuse_ratio_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_sloc_z ~ reuse_ratio_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rcParams['mathtext.fontset'] = 'custom'
matplotlib.rcParams['mathtext.rm'] = 'Bitstream Vera Sans'
matplotlib.rcParams['mathtext.it'] = 'Bitstream Vera Sans:italic'
matplotlib.rcParams['mathtext.bf'] = 'Bitstream Vera Sans:bold'
matplotlib.rcParams['font.family'] = 'STIXGeneral'
bp_vars = ['uv_ratio', 'dv_ratio', '#v_sloc'] #'reuse_ratio'
labels = ['Native\nvulnerabilities density', '', 'Reused\nvulnerabilities density', '', 'Overall\nvulnerabilities density'] #'Reuse ratio',
# Multiple box plots on one Axes
boxplots_df = []
for v in bp_vars:
boxplots_df.append(df_e[v]*1000)
boxplots_df.append(df_ne[v]*1000)
fig = plt.figure(1, figsize=(6, 4),tight_layout = {'pad': 1})
# Create an axes instance
ax = fig.add_subplot(111)
ax.boxplot(boxplots_df, showfliers=False, widths=0.15)
# ax.set_xticks([])
ax.set_ylim([-0.0001*1000,0.0100*1000])
ax.yaxis.grid(False)
## Custom x-axis labels
ax.set_xticklabels(labels)
ax.set_axisbelow(True)
# Create the boxplot
# bp = ax.boxplot(boxplots_df)
# plt.savefig("../figs/boxplots_rq2_compact.pdf")
plt.show()
for test_var in ['uv_ratio', 'dv_ratio', '#v_sloc']:
t = stats.ttest_ind(df_e[test_var],df_ne[test_var])
print(f'Comparison of {test_var}')
print(f'\tStatistic={t[0]:.2f} (p={t[1]:.2f})')
#
# RQ3
#
print('=====================================================')
print('RQ3 - Regression Analysis [#cves - #dependencies]')
print('=====================================================\n')
ols_model = sm.ols(formula="cves_z ~ dependencies_z", data=df)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependencies_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependencies_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
# seaborn.lmplot(x='dependencies',y='cves',data=df,fit_reg=True, scatter_kws={"s": marker_size})
print('=====================================================')
print('RQ3 - Regression Analysis [#v - #dependencies]')
print('=====================================================\n')
ols_model = sm.ols(formula="v_z ~ dependencies_z", data=df)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_z ~ dependencies_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="v_z ~ dependencies_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
# seaborn.lmplot(x='dependencies',y='v',data=df,fit_reg=True, scatter_kws={"s": marker_size})
print('=====================================================')
print('RQ3 - Regression Analysis [#cves - #module_size]')
print('=====================================================\n')
ols_model = sm.ols(formula="cves_z ~ dependency_size_z", data=df)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependency_size_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependency_size_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
# print plots with regression line
# seaborn.lmplot(x='dependency_size',y='cves',data=df,fit_reg=True, scatter_kws={"s": marker_size})
print('=====================================================')
print('RQ3 - Regression Analysis [#cve-density - #dependencies]')
print('=====================================================\n')
df_filtered = df[df['cve_density'] < 0.2] # filter a great outlier
df_e_filtered = df_e[df_e['cve_density'] < 0.2] # filter a great outlier
df_ne_filtered = df_ne[df_ne['cve_density'] < 0.2] # filter a great outlier
ols_model = sm.ols(formula="cve_density_z ~ dependencies_z", data=df_filtered)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cve_density_z ~ dependencies_z", data=df_e_filtered)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cve_density_z ~ dependencies_z", data=df_ne_filtered)
result = ols_model.fit()
print(result.summary())
# seaborn.lmplot(x='dependencies', y='cve_density',data=df,fit_reg=True, scatter_kws={"s": marker_size})
print('=====================================================')
print('RQ3 - Multivariate Regression Analysis')
print('=====================================================\n')
ols_model = sm.ols(formula="cves_z ~ dependencies_z + dependency_size_z + d_sloc_z", data=df)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependencies_z + dependency_size_z + d_sloc_z", data=df_e)
result = ols_model.fit()
print(result.summary())
ols_model = sm.ols(formula="cves_z ~ dependencies_z + dependency_size_z + d_sloc_z", data=df_ne)
result = ols_model.fit()
print(result.summary())
print('=====================================================')
print('RQ3 - Vulnerable projects')
print('=====================================================\n')
vul_projects = df[df['#cves'] > 0]
vul_projects_e = df_e[df_e['#cves'] > 0]
vul_projects_ne = df_ne[df_ne['#cves'] > 0]
print("Vulnerable projects {} out of {} [{:2.2%}]".format(len(vul_projects.index), len(df.index), len(vul_projects.index)/len(df.index)))
print("Enterprise: Vulnerable projects {} out of {} [{:2.2%}]".format(len(vul_projects_e.index), len(df_e.index), len(vul_projects_e.index)/len(df_e.index)))
print("Non-enterprise: Vulnerable projects {} out of {} [{:2.2%}]".format(len(vul_projects_ne.index), len(df_ne.index), len(vul_projects_ne.index)/len(df_ne.index)))
###Output
_____no_output_____ |
notebooks/wav2vec2large_experiment_language_model.ipynb | ###Markdown
Install ffmpeg-python for recorded audio decoding
###Code
!pip install -q ffmpeg-python
###Output
_____no_output_____
###Markdown
Installing transformers
###Code
!pip install -q transformers
###Output
[K |████████████████████████████████| 2.1MB 6.8MB/s
[K |████████████████████████████████| 901kB 35.8MB/s
[K |████████████████████████████████| 3.3MB 43.1MB/s
[?25h
###Markdown
Installing ctcdecode
ctcdecode is an implementation of the CTC beam search algorithm for Python. It is used here to rescore the output symbols from the model using a language model (kenlm in this case). kenlm is a small and efficient n-gram language model. We could use neural-network-based language models, but they would be more resource intensive, so kenlm is the best choice as the language model and is already used in many projects. By default ctcdecode uses kenlm for rescoring, so I do not bother to set up the model.
- [ctcdecode repo](https://github.com/parlance/ctcdecode)
- [kenlm](https://github.com/kpu/kenlm)
###Code
!git clone --recursive https://github.com/parlance/ctcdecode.git
!cd ctcdecode && pip install .
###Output
Cloning into 'ctcdecode'...
remote: Enumerating objects: 1063, done.[K
remote: Total 1063 (delta 0), reused 0 (delta 0), pack-reused 1063[K
Receiving objects: 100% (1063/1063), 759.71 KiB | 11.17 MiB/s, done.
Resolving deltas: 100% (513/513), done.
Submodule 'third_party/ThreadPool' (https://github.com/progschj/ThreadPool.git) registered for path 'third_party/ThreadPool'
Submodule 'third_party/kenlm' (https://github.com/kpu/kenlm.git) registered for path 'third_party/kenlm'
Cloning into '/content/ctcdecode/third_party/ThreadPool'...
remote: Enumerating objects: 82, done.
remote: Total 82 (delta 0), reused 0 (delta 0), pack-reused 82
Cloning into '/content/ctcdecode/third_party/kenlm'...
remote: Enumerating objects: 13792, done.
remote: Counting objects: 100% (105/105), done.
remote: Compressing objects: 100% (58/58), done.
remote: Total 13792 (delta 59), reused 74 (delta 34), pack-reused 13687
Receiving objects: 100% (13792/13792), 5.48 MiB | 18.83 MiB/s, done.
Resolving deltas: 100% (7939/7939), done.
Submodule path 'third_party/ThreadPool': checked out '9a42ec1329f259a5f4881a291db1dcb8f2ad9040'
Submodule path 'third_party/kenlm': checked out '35835f1ac4884126458ac89f9bf6dd9ccad561e0'
Processing /content/ctcdecode
Building wheels for collected packages: ctcdecode
Building wheel for ctcdecode (setup.py) ... [?25l[?25hdone
Created wheel for ctcdecode: filename=ctcdecode-1.0.2-cp37-cp37m-linux_x86_64.whl size=12877957 sha256=8694cfcf1208f7dfdbfbc88076b27389dc1ef8328381244234792bcc12871d65
Stored in directory: /tmp/pip-ephem-wheel-cache-tjrqr5u6/wheels/c3/6c/94/7d57d4f20a87a22ef1722eaad22052b4c435892b55400e5f4e
Successfully built ctcdecode
Installing collected packages: ctcdecode
Successfully installed ctcdecode-1.0.2
###Markdown
Loading Dependencies
- pytorch
- Transformers library
- numpy
- ctcdecode (CTC beam search decoder with kenlm as language model)
- librosa
###Code
import torch
import transformers
import numpy as np
import ctcdecode
import librosa
###Output
_____no_output_____
###Markdown
Instantiate pretrained models
- Tokenizer
- Wav2Vec2 model

The model takes a speech signal in its raw form as input (currently English, because it was trained on an English dataset). This audio data is one-dimensional and is passed to a multi-layer 1-D convolutional neural network to generate audio representations of 25 ms each.
###Code
tokenizer = transformers.Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model = transformers.Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
model.eval()
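# NOTE: this pretrained checkpoint expects 16 kHz mono audio, so all audio below is
# recorded or resampled to a 16 kHz sampling rate before being passed to the tokenizer.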
###Output
_____no_output_____
###Markdown
Recording and loading audio in Colab. Taken from [ricardodeazambuja.com](https://ricardodeazambuja.com/deep_learning/2019/03/09/audio_and_video_google_colab/)
###Code
# https://ricardodeazambuja.com/deep_learning/2019/03/09/audio_and_video_google_colab/
from IPython.display import HTML, Audio
from google.colab.output import eval_js
from base64 import b64decode
import numpy as np
import io
import ffmpeg
AUDIO_HTML = """
<script>
var my_div = document.createElement("DIV");
var my_p = document.createElement("P");
var my_btn = document.createElement("BUTTON");
var t = document.createTextNode("Press to start recording");
my_btn.appendChild(t);
//my_p.appendChild(my_btn);
my_div.appendChild(my_btn);
document.body.appendChild(my_div);
var base64data = 0;
var reader;
var recorder, gumStream;
var recordButton = my_btn;
var handleSuccess = function(stream) {
gumStream = stream;
var options = {
//bitsPerSecond: 8000, //chrome seems to ignore, always 48k
mimeType : 'audio/webm;codecs=opus'
//mimeType : 'audio/webm;codecs=pcm'
};
//recorder = new MediaRecorder(stream, options);
recorder = new MediaRecorder(stream);
recorder.ondataavailable = function(e) {
var url = URL.createObjectURL(e.data);
var preview = document.createElement('audio');
preview.controls = true;
preview.src = url;
document.body.appendChild(preview);
reader = new FileReader();
reader.readAsDataURL(e.data);
reader.onloadend = function() {
base64data = reader.result;
//console.log("Inside FileReader:" + base64data);
}
};
recorder.start();
};
recordButton.innerText = "Recording... press to stop";
navigator.mediaDevices.getUserMedia({audio: true}).then(handleSuccess);
function toggleRecording() {
if (recorder && recorder.state == "recording") {
recorder.stop();
gumStream.getAudioTracks()[0].stop();
recordButton.innerText = "Saving the recording... pls wait!"
}
}
// https://stackoverflow.com/a/951057
function sleep(ms) {
return new Promise(resolve => setTimeout(resolve, ms));
}
var data = new Promise(resolve=>{
//recordButton.addEventListener("click", toggleRecording);
recordButton.onclick = ()=>{
toggleRecording()
sleep(2000).then(() => {
// wait 2000ms for the data to be available...
// ideally this should use something like await...
//console.log("Inside data:" + base64data)
resolve(base64data.toString())
});
}
});
</script>
"""
def get_audio(sr):
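# Records audio from the browser microphone via the JavaScript snippet above, decodes the
# recorded blob to WAV with ffmpeg, and returns a mono numpy array resampled to 16 kHz
# together with the sampling rate.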
display(HTML(AUDIO_HTML))
data = eval_js("data")
binary = b64decode(data.split(',')[1])
process = (ffmpeg
.input('pipe:0')
.output('pipe:1', format='wav')
.run_async(pipe_stdin=True, pipe_stdout=True, pipe_stderr=True, quiet=True, overwrite_output=True)
)
output, err = process.communicate(input=binary)
riff_chunk_size = len(output) - 8
# Break up the chunk size into four bytes, held in b.
q = riff_chunk_size
b = []
for i in range(4):
q, r = divmod(q, 256)
b.append(r)
# Replace bytes 4:8 in proc.stdout with the actual size of the RIFF chunk.
riff = output[:4] + bytes(b) + output[8:]
speech, rate = librosa.load(io.BytesIO(riff),sr=16000)
return speech, sr
#record or load any audio file of your choice here
speech, rate = get_audio(sr=16000)
###Output
_____no_output_____
###Markdown
wave2vec2 vocabulary[found here](https://huggingface.co/facebook/wav2vec2-base-960h/resolve/main/vocab.json)
###Code
label_dict = {"<pad>": 0,
"<s>": 1,
"</s>": 2,
"<unk>": 3,
"|": 4,
"E": 5,
"T": 6,
"A": 7,
"O": 8,
"N": 9,
"I": 10,
"H": 11,
"S": 12,
"R": 13,
"D": 14,
"L": 15,
"U": 16,
"M": 17,
"W": 18,
"C": 19,
"F": 20,
"G": 21,
"Y": 22,
"P": 23,
"B": 24,
"V": 25,
"K": 26,
"'": 27,
"X": 28,
"J": 29,
"Q": 30,
"Z": 31
}
labels = [key for key, value in label_dict.items()]
###Output
_____no_output_____
###Markdown
CTC Beam search
The CTC beam search algorithm combined with a language model for rescoring the probabilities output by the model. This class handles everything; we just need to pass our model's softmax output into this class object to decode.
###Code
class CTCBeamDecoder:
def __init__(self, labels, blank_id=0, beam_size=100, kenlm_path=None):
print("loading beam search with kenlm...")
self.labels = labels
# model_path = is the path to your external kenlm language model(LM). Default is none.
# alpha = Weighting associated with the LMs probabilities. A weight of 0 means the LM has no effect.
# beta = Weight associated with the number of words within our beam.
self.ctcdecoder = ctcdecode.CTCBeamDecoder(
self.labels, model_path=kenlm_path,
alpha=0.6, beta=1,
beam_width=beam_size, blank_id=blank_id)
print("loading finished")
def __call__(self, output, num_sentences=1):
sentences = []
for num in range(num_sentences):
beam_result, beam_scores, timesteps, out_seq_len = self.ctcdecoder.decode(output)
# beam_result[0][0][:out_seq_len[0][0]] get the top beam for the first item in batch
sentences.append(self.output(beam_result[0][num], self.labels, out_seq_len[0][num]))
return sentences
def output(self, tokens, vocab, seq_len):
out = ''.join([vocab[x] for x in tokens[0:seq_len]])
# wave2vec implementation use | for space in vocabulary
return out.replace("|", " ")
# blank_id = ctc blank token (epsilon) which is <pad> in wave2vec vocabulary
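# NOTE: kenlm_path=None means no external kenlm model file is actually loaded here, so the
# decoder below performs plain beam search without LM rescoring; to enable real kenlm
# rescoring, pass the path to a trained kenlm .arpa/.bin file (alpha/beta then control the
# LM weight and word bonus).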
decode_and_rescore = CTCBeamDecoder(kenlm_path=None,
labels=labels,
blank_id=label_dict.get("<pad>"),
beam_size=100)
###Output
loading beam search with kenlm...
loading finished
###Markdown
Inferencing
- tokenizing (encoding) the speech data and returning a pytorch tensor
- passing the encodings to the model
- converting the model outputs into probabilities using softmax
###Code
input_values = tokenizer(speech, return_tensors = 'pt').input_values
#logits (non-normalized predictions)
logits = model(input_values).logits
out_proba = torch.nn.functional.softmax(logits, dim=-1)
predicted_ids = torch.argmax(out_proba, dim =-1)
results_ = tokenizer.decode(predicted_ids[0])
print("Without Language Model")
print(results_)
###Output
Without Language Model
THE BOOK IS ON THE TABLE
###Markdown
Applying rescoring algorithm using language model and beam search
###Code
results = decode_and_rescore(out_proba, num_sentences=5)
print("With Language Model Kenlm")
for result in results:
print(result)
###Output
With Language Model Kenlm
THE BOOK IS ON THE TABLE
ETHE BOOK IS ON THE TABLE
THE BOOK IS ON THEI TABLE
THE BOOK IS ON THE TABLE
THE BOOK IS ON THE TABLEE
###Markdown
Load Audio and transcribe
###Code
!wget https://upload.wikimedia.org/wikipedia/commons/c/c8/Example.ogg -O example.ogg
###Output
--2021-04-29 12:20:09-- https://upload.wikimedia.org/wikipedia/commons/c/c8/Example.ogg
Resolving upload.wikimedia.org (upload.wikimedia.org)... 208.80.154.240, 2620:0:861:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|208.80.154.240|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 105243 (103K) [application/ogg]
Saving to: ‘example.ogg’
example.ogg 0%[ ] 0 --.-KB/s
example.ogg 100%[===================>] 102.78K --.-KB/s in 0.03s
2021-04-29 12:20:09 (2.98 MB/s) - ‘example.ogg’ saved [105243/105243]
###Markdown
Audio Loading Functions
###Code
from scipy.signal import resample
import numpy as np
import soundfile as sf
class AudioReader:
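# Reads a whole audio file into memory and resamples it to the target sampling rate.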
def __init__(self, audio_path, sr=16000, dtype="float32"):
self._sr = sr
self._dtype = dtype
self._audio_path = audio_path
def read(self):
data, sr = sf.read(self._audio_path, dtype=self._dtype)
data = self.__resample_file(data, sr, self._sr)
return data, self._sr
def __resample_file(self, array, original_sr, target_sr):
return resample(array, num=int(len(array)*target_sr/original_sr))
class AudioStreaming:
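# Streams an audio file in fixed-size blocks (via soundfile.blocks) and resamples each block
# to the target sampling rate, so long recordings can be transcribed chunk by chunk.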
def __init__(self, audio_path, blocksize, sr=16000, overlap=0, padding=None, dtype="float32"):
assert blocksize >= 0, "blocksize cannot be 0 or negative"
self._sr = sr
self._orig_sr = sf.info(audio_path).samplerate
self._sf_blocks = sf.blocks(audio_path,
blocksize=blocksize,
overlap=overlap,
fill_value=padding,
dtype=dtype)
def generator(self):
for block in self._sf_blocks:
chunk = self.__resample_file(block, self._orig_sr, self._sr)
yield chunk, self._orig_sr
def __resample_file(self, array, original_sr, target_sr):
return resample(array, num=int(len(array)*target_sr/original_sr))
###Output
_____no_output_____
###Markdown
Loading Audio in one pass
Load the full audio at once and transcribe it.
###Code
audio_reader = AudioReader("/content/example.ogg", sr=16000)
block, sr = audio_reader.read()
print(sr)
print(block.shape)
input_values = tokenizer(block[:,0], return_tensors = 'pt').input_values
#logits (non-normalized predictions)
logits = model(input_values).logits
out_proba = torch.nn.functional.softmax(logits, dim=-1)
predicted_ids = torch.argmax(out_proba, dim =-1)
results_ = tokenizer.decode(predicted_ids[0])
print("Without Language Model")
print(results_)
results = decode_and_rescore(out_proba, num_sentences=5)
print("With Language Model Kenlm")
for result in results:
print(result)
###Output
With Language Model Kenlm
THIS IS AN EXAMPLE SOUND FILE IN AG FORBUS FORMA FROM WICIPAEDIA THE FREE ENCYCLOPAEDIA
THIS IS AN EXAMPLE SOUND FILE IN AUG FORBUS FORMA FROM WICIPAEDIA THE FREE ENCYCLOPAEDIA
THIS IS AN EXAMPLE SOUND FILE IN AG FORBUS FORMA FROM WICHIPAEDIA THE FREE ENCYCLOPAEDIA
THIS IS AN EXAMPLE SOUND FILE IN AUG FORBUS FORMA FROM WICHIPAEDIA THE FREE ENCYCLOPAEDIA
THIS IS AN EXAMPLE SOUND FILE IN OG FORBUS FORMA FROM WICIPAEDIA THE FREE ENCYCLOPAEDIA
###Markdown
Splitting audio and loading
First split the audio into multiple blocks and pass each block for transcription.
###Code
## splitting 5 sec
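# NOTE: each 5-second block is transcribed independently here, so words that span a block
# boundary can get truncated (visible in the output below); the next cell addresses this by
# concatenating the softmax outputs and decoding once at the end.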
audio_stream = AudioStreaming(audio_path="/content/example.ogg", blocksize=16000*5, padding=0)
for block, sr in audio_stream.generator():
inputs = tokenizer(block[:,0], return_tensors='pt').input_values
logits = model(inputs).logits
predicted_ids = torch.argmax(logits, dim =-1)
print(tokenizer.decode(predicted_ids[0]), end="")
###Output
THIS IS AN EXAMPLE SOUNDILEIN AG VORBUS FORMAWICKIPEDIA THE FREEPADIA
###Markdown
Transcribe each block and decode at the end
###Code
## splitting 5 sec
audio_stream = AudioStreaming(audio_path="/content/example.ogg", blocksize=16000*5, padding=0)
ctc_outs = torch.Tensor()
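# Accumulate the per-block softmax outputs along the time axis (dim=1) and run the
# beam-search decoder once over the full sequence after the loop.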
for block, sr in audio_stream.generator():
inputs = tokenizer(block[:,0], return_tensors='pt').input_values
logits = model(inputs).logits
logits = torch.nn.functional.softmax(logits, dim=-1)
ctc_outs = torch.cat((ctc_outs, logits), dim=1)
results = decode_and_rescore(ctc_outs, num_sentences=5)
for result in results:
print(result)
###Output
_____no_output_____ |
IV. Molecular Dynamics/out/IV. Molecular Dynamics.ipynb | ###Markdown
Computer simulations course 2018/2019-2 @ ELTE Assignment 4: Molecular Dynamics - 1D motion 03.19.2019
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import random
import os
import sys
from scipy import stats
from datetime import datetime
import time
import imageio
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.patches import Circle
from matplotlib.patches import Patch
from matplotlib.lines import Line2D
import mpl_toolkits.mplot3d.art3d as art3d
sns.set_style(style='whitegrid')
def mode_choose2(file, mode, n, N, rho, T):
current_mode = (file + ' ' +
mode + ' ' +
str(n) + ' ' +
str(N) + ' ' +
str(rho) + ' ' +
str(T)
)
return(current_mode)
def mode_choose3(file, mode, n, N, rho, T, rCutOff, rMax, updateInterval):
current_mode = (file + ' ' +
mode + ' ' +
str(n) + ' ' +
str(N) + ' ' +
str(rho) + ' ' +
str(T) + ' ' +
str(rCutOff) + ' ' +
str(rMax) + ' ' +
str(updateInterval)
)
return(current_mode)
# Number of simulated steps
n = 3000
N = 64
T = 1.0
# Constants
k_B = 1.38e-23 # Boltzmann constant [J/K]
N_A = 6.022e23 # Avogadro's number [1/mol]
# Others
steps = 1
image_dpi = 72
image_format = 'pdf'
image_path_trajectories = '..\\Documentation\\src\\images\\Trajectories\\'
image_path_others = '..\\Documentation\\src\\images\\Others\\'
###Output
_____no_output_____
###Markdown
Run simulations

Modes:
- periodic
- bounded
- write anything else for non-bounded
###Code
os.system('..\Release\md1.exe' + ' ' + 'bounded' + ' ' + str(n) + ' ' + str(N) + ' ' + str(T))
data_set_1 = np.genfromtxt('md1.dat');
Temperature_1 = data_set_1[::steps,-1]
Virial_1 = data_set_1[::steps,-2]
Energy_1 = data_set_1[::steps,-3]
print('Last run\'s n:', len(data_set_1))
current_mode = mode_choose2(file='..\Release\md2.exe', mode='bounded', n=n, N=N, rho=0.95, T=1.0)
os.system(current_mode)
data_set_2 = np.genfromtxt('md2.dat')
Temperature_2 = data_set_2[::steps,-1]
Virial_2 = data_set_2[::steps,-2]
Energy_2 = data_set_2[::steps,-3]
print('Last run\'s n:', len(data_set_2))
current_mode = mode_choose3(file='..\Release\md3.exe', mode='bounded', n=n, N=N, rho=0.95, T=1.0, rCutOff=2.5, rMax=3.2, updateInterval=10)
os.system(current_mode)
data_set_3 = np.genfromtxt('md3.dat')
Temperature_3 = data_set_3[::steps,-1]
Virial_3 = data_set_3[::steps,-2]
Energy_3 = data_set_3[::steps,-3]
print('Last run\'s n:', len(data_set_3))
###Output
_____no_output_____
###Markdown
Eqilibrium
###Code
equlibrium = 4000
###Output
_____no_output_____
###Markdown
Plot out data Instantaneous temperature
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].axvline(x=200, linewidth=2, linestyle='--', color='green', label='First rescaling')
axes[i].axvline(x=equlibrium, linewidth=2, linestyle='--', color='black', label='Equilibrium')
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Temperature [K]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(0,len(Temperature_1))], Temperature_1, color='red')
axes[1].plot([i for i in range(0,len(Temperature_2))], Temperature_2, color='orange')
axes[2].plot([i for i in range(0,len(Temperature_3))], Temperature_3, color='purple')
for i in range(0,ncols):
axes[i].legend(loc='upper right', fontsize=17)
fig.tight_layout()
plt.savefig(image_path_others +
'instantaneous_temperatures_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Temperature [K]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium,len(Temperature_1))], Temperature_1[equlibrium:], color='red')
axes[1].plot([i for i in range(equlibrium,len(Temperature_2))], Temperature_2[equlibrium:], color='orange')
axes[2].plot([i for i in range(equlibrium,len(Temperature_3))], Temperature_3[equlibrium:], color='purple')
fig.tight_layout()
plt.savefig(image_path_others +
'instantaneous_temperatures_equi_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Total energy
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].axvline(x=200, linewidth=2, linestyle='--', color='green', label='First rescaling')
axes[i].axvline(x=equlibrium, linewidth=2, linestyle='--', color='black', label='Equilibrium')
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Total energy [J]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(0,len(Energy_1))], Energy_1, color='red')
axes[1].plot([i for i in range(0,len(Energy_1))], Energy_2, color='orange')
axes[2].plot([i for i in range(0,len(Energy_1))], Energy_3, color='purple')
for i in range(0,ncols):
axes[i].legend(loc='upper right', fontsize=17)
fig.tight_layout()
plt.savefig(image_path_others +
'total_energy_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Total energy [J]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium,len(Energy_1))], Energy_1[equlibrium:], color='red')
axes[1].plot([i for i in range(equlibrium,len(Energy_2))], Energy_2[equlibrium:], color='orange')
axes[2].plot([i for i in range(equlibrium,len(Energy_3))], Energy_3[equlibrium:], color='purple')
fig.tight_layout()
plt.savefig(image_path_others +
'total_energy_equi_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Total work
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].axvline(x=200, linewidth=2, linestyle='--', color='green', label='First rescaling')
axes[i].axvline(x=equlibrium, linewidth=2, linestyle='--', color='black', label='Equilibrium')
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Current work [J]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot(Virial_1, color='red')
axes[1].plot(Virial_2, color='orange')
axes[2].plot(Virial_3, color='purple')
for i in range(0,ncols):
axes[i].legend(loc='upper right', fontsize=17)
fig.tight_layout()
plt.savefig(image_path_others +
'virial_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Current work [J]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium,len(Virial_1))], Virial_1[equlibrium:], color='red')
axes[1].plot([i for i in range(equlibrium,len(Virial_2))], Virial_2[equlibrium:], color='orange')
axes[2].plot([i for i in range(equlibrium,len(Virial_3))], Virial_3[equlibrium:], color='purple')
fig.tight_layout()
plt.savefig(image_path_others +
'virial_equi_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Calculations for $\left< E \right>$, $\left< E^{2} \right>$, $C_V$, $Z$
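The energy fluctuation used below is $\sigma_{E}^{2} = \left< E^{2} \right> - \left< E \right>^{2}$, computed over the steps after equilibrium.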
###Code
E_expected_1 = np.mean(Energy_1[equlibrium::])
E_expected_2 = np.mean(Energy_2[equlibrium::])
E_expected_3 = np.mean(Energy_3[equlibrium::])
E_expected_squared_1 = E_expected_1**2
E_expected_squared_2 = E_expected_2**2
E_expected_squared_3 = E_expected_3**2
E2_expected_1 = np.mean(np.square(Energy_1[equlibrium::]))
E2_expected_2 = np.mean(np.square(Energy_2[equlibrium::]))
E2_expected_3 = np.mean(np.square(Energy_3[equlibrium::]))
E_oscillation_1 = E2_expected_1 - E_expected_squared_1
E_oscillation_2 = E2_expected_2 - E_expected_squared_2
E_oscillation_3 = E2_expected_3 - E_expected_squared_3
print('Expected value for first simulation: {0}'.format(E_expected_1))
print('Expected value for second simulation: {0}'.format(E_expected_2))
print('Expected value for third simulation: {0}'.format(E_expected_3))
print('\n')
print('Square of expected value for first simulation: {0}'.format(E_expected_squared_1))
print('Square of expected value for second simulation: {0}'.format(E_expected_squared_2))
print('Square of expected value for third simulation: {0}'.format(E_expected_squared_3))
print('\n')
print('Expected value for first simulation: {0}'.format(E2_expected_1))
print('Expected value for second simulation: {0}'.format(E2_expected_2))
print('Expected value for third simulation: {0}'.format(E2_expected_3))
print('\n')
print('Oscillation of energy in equilibrium (md1): {0}'.format(E_oscillation_1))
print('Oscillation of energy in equilibrium (md2): {0}'.format(E_oscillation_2))
print('Oscillation of energy in equilibrium (md3): {0}'.format(E_oscillation_3))
###Output
_____no_output_____
###Markdown
Propagation of means
###Code
E_expected_propag_1 = np.array([np.mean(Energy_1[equlibrium:i+1]) for i in range(equlibrium, len(Energy_1))])
E_expected_propag_2 = np.array([np.mean(Energy_2[equlibrium:i+1]) for i in range(equlibrium, len(Energy_2))])
E_expected_propag_3 = np.array([np.mean(Energy_3[equlibrium:i+1]) for i in range(equlibrium, len(Energy_3))])
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Propagation of mean energy ($\\left< E \\right>$) [J]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(E_expected_propag_1)+equlibrium)], E_expected_propag_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(E_expected_propag_2)+equlibrium)], E_expected_propag_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(E_expected_propag_3)+equlibrium)], E_expected_propag_3, color='purple')
axes[0].axhline(y=E_expected_propag_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E >$ = {0:.2f}'.format(E_expected_propag_1[-1]))
axes[1].axhline(y=E_expected_propag_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E >$ = {0:.2f}'.format(E_expected_propag_2[-1]))
axes[2].axhline(y=E_expected_propag_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E >$ = {0:.2f}'.format(E_expected_propag_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='upper right', fontsize=17)
fig.tight_layout()
plt.savefig(image_path_others +
'energy_propag_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
E2_expected_propag_1 = np.array([np.mean(np.square(Energy_1[equlibrium:i+1])) for i in range(equlibrium, len(Energy_1))])
E2_expected_propag_2 = np.array([np.mean(np.square(Energy_2[equlibrium:i+1])) for i in range(equlibrium, len(Energy_2))])
E2_expected_propag_3 = np.array([np.mean(np.square(Energy_3[equlibrium:i+1])) for i in range(equlibrium, len(Energy_3))])
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Propagation of mean energy squared ($\\left< E^{2} \\right>$) [J$^{2}$]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(Energy_1))], E2_expected_propag_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(Energy_2))], E2_expected_propag_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(Energy_3))], E2_expected_propag_3, color='purple')
axes[0].axhline(y=E2_expected_propag_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E^2 >$ = {0:.2f}'.format(E2_expected_propag_1[-1]))
axes[1].axhline(y=E2_expected_propag_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E^2 >$ = {0:.2f}'.format(E2_expected_propag_2[-1]))
axes[2].axhline(y=E2_expected_propag_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $< E^2 >$ = {0:.2f}'.format(E2_expected_propag_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='upper right', fontsize=17)
fig.tight_layout()
plt.savefig(image_path_others +
'energy2_propag_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
E_oscillation_1 = E2_expected_propag_1 - E_expected_propag_1**2
E_oscillation_2 = E2_expected_propag_2 - E_expected_propag_2**2
E_oscillation_3 = E2_expected_propag_3 - E_expected_propag_3**2
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Propagation of mean energy squared\n($\\left< E^{2} \\right> - \\left< E \\right>^{2}$) [J$^{2}$]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(Energy_1))], E_oscillation_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(Energy_2))], E_oscillation_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(Energy_3))], E_oscillation_3, color='purple')
axes[0].axhline(y=E_oscillation_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $\sigma^2$ = {0:.2f}'.format(E_oscillation_1[-1]))
axes[1].axhline(y=E_oscillation_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $\sigma^2$ = {0:.2f}'.format(E_oscillation_2[-1]))
axes[2].axhline(y=E_oscillation_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of $\sigma^2$ = {0:.2f}'.format(E_oscillation_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='lower right', fontsize=20)
fig.tight_layout()
plt.savefig(image_path_others +
'energy_oscill_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Heat capacity

Dimension:

$$[C_{V}]=\left[ \frac{1}{k_{B} T^{2}} \cdot \left( \left< E^{2} \right> - \left< E \right>^{2} \right) \right]=\frac{1}{\frac{J}{K} \cdot K^{2}} \cdot J^{2}=\frac{J}{K}$$

For argon gas it's

$$12.76\ \frac{J}{mol \cdot K}$$
###Code
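# C_V = (<E^2> - <E>^2) / (k_B * T^2), divided by Avogadro's number to express it as a
# molar heat capacity in J/(mol K), matching the formula in the markdown cell above.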
molar_heat_cap_1 = E_oscillation_1 * (1/(k_B * Temperature_1[equlibrium::] * Temperature_1[equlibrium::])) / N_A
molar_heat_cap_2 = E_oscillation_2 * (1/(k_B * Temperature_2[equlibrium::] * Temperature_2[equlibrium::])) / N_A
molar_heat_cap_3 = E_oscillation_3 * (1/(k_B * Temperature_3[equlibrium::] * Temperature_3[equlibrium::])) / N_A
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('$C_{v}$ [$\\frac{J}{mol \cdot K}$]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(Energy_1))],
molar_heat_cap_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(Energy_2))],
molar_heat_cap_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(Energy_3))],
molar_heat_cap_3, color='purple')
axes[0].axhline(y=molar_heat_cap_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of C$_V$ = {0:.2f}'.format(molar_heat_cap_1[-1]))
axes[1].axhline(y=molar_heat_cap_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of C$_V$ = {0:.2f}'.format(molar_heat_cap_2[-1]))
axes[2].axhline(y=molar_heat_cap_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of C$_V$ = {0:.2f}'.format(molar_heat_cap_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='lower right', fontsize=20)
fig.tight_layout()
plt.savefig(image_path_others +
'heat_capacity_propag_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Pressure
###Code
PV_exp_1 = np.array([np.mean(Virial_1[equlibrium:i+1]) for i in range(equlibrium, len(Virial_1))])
PV_exp_2 = np.array([np.mean(Virial_2[equlibrium:i+1]) for i in range(equlibrium, len(Virial_2))])
PV_exp_3 = np.array([np.mean(Virial_3[equlibrium:i+1]) for i in range(equlibrium, len(Virial_3))])
PV_1 = N * k_B * Temperature_1[equlibrium:] + 1/3 * PV_exp_1
PV_2 = N * k_B * Temperature_2[equlibrium:] + 1/3 * PV_exp_2
PV_3 = N * k_B * Temperature_3[equlibrium:] + 1/3 * PV_exp_3
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Pressure $\cdot$ Volume [Pa $\cdot$ m$^3$]', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(Energy_1))],
PV_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(Energy_2))],
PV_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(Energy_3))],
PV_3, color='purple')
axes[0].axhline(y=PV_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of PV = {0:.2f}'.format(PV_1[-1]))
axes[1].axhline(y=PV_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of PV = {0:.2f}'.format(PV_2[-1]))
axes[2].axhline(y=PV_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of PV = {0:.2f}'.format(PV_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='lower right', fontsize=20)
fig.tight_layout()
plt.savefig(image_path_others +
'PV_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Compressibility factor Dimension:$$[Z]=\left[ \frac{PV}{N k_{B} T} \right]=\frac{\frac{\text{kg}}{\text{m}\ \text{s}^{2}} \text{m}^{3}}{1 \frac{\text{J}}{\text{K}} \text{K}}=\frac{\frac{\text{kg}}{\text{m}\ \text{s}^{2}} \text{m}^{3}}{\frac{\text{kg}\ \text{m}^{2}}{\text{s}^{2}}}=\frac{\text{m}^{2}}{\text{m}^{2}}=1$$
###Code
Z_1 = PV_1 / (N * k_B * Temperature_1[equlibrium:] * N_A)
Z_2 = PV_2 / (N * k_B * Temperature_2[equlibrium:] * N_A)
Z_3 = PV_3 / (N * k_B * Temperature_3[equlibrium:] * N_A)
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Compressibility factor', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
axes[0].plot([i for i in range(equlibrium, len(Energy_1))],
Z_1, color='red')
axes[1].plot([i for i in range(equlibrium, len(Energy_2))],
Z_2, color='orange')
axes[2].plot([i for i in range(equlibrium, len(Energy_3))],
Z_3, color='purple')
axes[0].axhline(y=Z_1[-1],
linewidth=2, linestyle='--', color='green', label='Best value of Z = {0:.2f}'.format(Z_1[-1]))
axes[1].axhline(y=Z_2[-1],
linewidth=2, linestyle='--', color='green', label='Best value of Z = {0:.2f}'.format(Z_2[-1]))
axes[2].axhline(y=Z_3[-1],
linewidth=2, linestyle='--', color='green', label='Best value of Z = {0:.2f}'.format(Z_3[-1]))
for i in range(0,ncols):
axes[i].legend(loc='lower right', fontsize=20)
fig.tight_layout()
plt.savefig(image_path_others +
'Z_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
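###Markdown
For reference (an added sketch): an ideal gas has $Z = 1$, so the deviation of the final estimates from unity indicates how far the simulated system is from ideal-gas behaviour (assuming the `Z_*` arrays computed above).
###Code
# Added sketch: deviation of the final compressibility-factor estimates from the ideal-gas value Z = 1.
for label, Z in zip(('md1', 'md2', 'md3'), (Z_1[-1], Z_2[-1], Z_3[-1])):
    print('{0}: Z = {1:.3f}, deviation from ideal gas = {2:+.1f} %'.format(label, Z, 100.0 * (Z - 1.0)))
###Output
_____no_output_____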
###Markdown
Don't run
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*7,nrows*7))
titlesize = 23
axislabelsize = 20
for i in range(0,ncols):
axes[i].set_title('Simulation\'s index: md' + str(i+1), fontsize=titlesize)
axes[i].set_xlabel('Steps [n]', fontsize=axislabelsize)
axes[i].set_ylabel('Compressibility factor', fontsize=axislabelsize)
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize-3)
axes[i].yaxis.get_offset_text().set_size(15)
for i in range(0,10):
color = np.array([random.random(), random.random(), random.random()])
N = i * 8 + 64
os.system('..\Release\md1.exe' + ' ' + 'bounded' + ' ' + str(n) + ' ' + str(N) + ' ' + str(T))
data_set_1 = np.genfromtxt('md1.dat')
current_mode = mode_choose2(file='..\Release\md2.exe', mode='bounded', n=n, N=N, rho=0.95, T=1.0)
os.system(current_mode)
data_set_2 = np.genfromtxt('md2.dat')
current_mode = mode_choose3(file='..\Release\md3.exe', mode='bounded', n=n, N=N, rho=0.95, T=1.0, rCutOff=2.5, rMax=3.2, updateInterval=10)
os.system(current_mode)
data_set_3 = np.genfromtxt('md3.dat')
axes[0].plot(data_set_1[::steps,-2], color=color)
axes[1].plot(data_set_2[::steps,-2], color=color)
axes[2].plot(data_set_3[::steps,-2], color=color)
fig.tight_layout()
plt.savefig(image_path_others +
'Z_particles_' + str(int(n/1000)) + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Plot coordinates, velocities and accelerations MD1
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlim(0,10)
axes[0].set_ylim(0,10)
axes[0].set_zlim(0,10)
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Velocity (V_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Velocity (V_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Velocity (V_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_xlabel('Acceleration (A_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_ylabel('Acceleration (A_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_zlabel('Acceleration (A_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[2].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_1.shape[1]-1)//9):
axes[0].scatter(data_set_1[::steps,i*9], data_set_1[::steps,i*9+1], data_set_1[::steps,i*9+2])
axes[1].scatter(data_set_1[::steps,i*9+3], data_set_1[::steps,i*9+4], data_set_1[::steps,i*9+5])
axes[2].scatter(data_set_1[::steps,i*9+6], data_set_1[::steps,i*9+7], data_set_1[::steps,i*9+8])
axes[0].scatter(data_set_1[::steps,i*9][-1], data_set_1[::steps,i*9+1][-1], data_set_1[::steps,i*9+2][-1],
color='red', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md1_trajectories' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md1_trajectories' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_1.shape[1]-1)//9):
color = np.array([random.random(), random.random(), random.random()])
axes[0].plot(data_set_1[::steps,i*9], data_set_1[::steps,i*9+1], data_set_1[::steps,i*9+2], color=color, lw=3)
axes[1].scatter(data_set_1[::steps,i*9], data_set_1[::steps,i*9+1], data_set_1[::steps,i*9+2], color=color, s=20)
axes[0].scatter(data_set_1[::steps,i*9][-1], data_set_1[::steps,i*9+1][-1], data_set_1[::steps,i*9+2][-1],
color='red', s=200)
axes[1].scatter(data_set_1[::steps,i*9][-1], data_set_1[::steps,i*9+1][-1], data_set_1[::steps,i*9+2][-1],
color='red', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md1_trajectories_compare' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md1_trajectories_compare' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=4
ncols=5
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*10,nrows*10))
axislabelsize = 18
for i in range(0,nrows):
for j in range(0,ncols):
p = int((i*nrows + j) * (len(data_set_1) / (nrows*ncols)))
velocities = np.array([np.sqrt(data_set_1[::steps][p][k*9]**2 +
data_set_1[::steps][p][k*9+1]**2 +
data_set_1[::steps][p][k*9+2]**2) for k in range(0, (data_set_1.shape[1]-1)//9)])
sns.distplot(velocities, ax=axes[i][j])
axes[i][j].set_xlabel('Velocities', fontsize=axislabelsize+10)
axes[i][j].tick_params(axis='both', which='major', labelsize=axislabelsize)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
MD2
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Velocity (V_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Velocity (V_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Velocity (V_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_xlabel('Acceleration (A_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_ylabel('Acceleration (A_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_zlabel('Acceleration (A_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[2].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_2.shape[1]-1)//9):
axes[0].scatter(data_set_2[::steps,i*9], data_set_2[::steps,i*9+1], data_set_2[::steps,i*9+2])
axes[1].scatter(data_set_2[::steps,i*9+3], data_set_2[::steps,i*9+4], data_set_2[::steps,i*9+5])
axes[2].scatter(data_set_2[::steps,i*9+6], data_set_2[::steps,i*9+7], data_set_2[::steps,i*9+8])
axes[0].scatter(data_set_2[::steps,i*9][-1], data_set_2[::steps,i*9+1][-1], data_set_2[::steps,i*9+2][-1],
color='red', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md2_trajectories' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md2_trajectories' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_2.shape[1]-1)//9):
color = np.array([random.random(), random.random(), random.random()])
axes[0].plot(data_set_2[::steps,i*9], data_set_2[::steps,i*9+1], data_set_2[::steps,i*9+2], color=color, lw=3)
axes[1].scatter(data_set_2[::steps,i*9], data_set_2[::steps,i*9+1], data_set_2[::steps,i*9+2], color=color, s=20)
axes[0].scatter(data_set_2[::steps,i*9][-1], data_set_2[::steps,i*9+1][-1], data_set_2[::steps,i*9+2][-1],
color='red', s=200)
axes[1].scatter(data_set_2[::steps,i*9][-1], data_set_2[::steps,i*9+1][-1], data_set_2[::steps,i*9+2][-1],
color='red', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md2_trajectories_compare' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md2_trajectories_compare' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=4
ncols=5
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*10,nrows*10))
axislabelsize = 18
for i in range(0,nrows):
for j in range(0,ncols):
p = int((i*nrows + j) * (len(data_set_2) / (nrows*ncols)))
velocities = np.array([np.sqrt(data_set_2[::steps][p][k*9]**2 +
data_set_2[::steps][p][k*9+1]**2 +
data_set_2[::steps][p][k*9+2]**2) for k in range(0, (data_set_2.shape[1]-1)//9)])
sns.distplot(velocities, ax=axes[i][j])
axes[i][j].set_xlabel('Velocities', fontsize=axislabelsize+10)
axes[i][j].tick_params(axis='both', which='major', labelsize=axislabelsize)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
MD3
###Code
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Velocity (V_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Velocity (V_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Velocity (V_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_xlabel('Acceleration (A_X)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_ylabel('Acceleration (A_Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_zlabel('Acceleration (A_Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[2].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_3.shape[1]-1)//9):
color = np.array([random.random(), random.random(), random.random()])
axes[0].scatter(data_set_3[::steps,i*9], data_set_3[::steps,i*9+1], data_set_3[::steps,i*9+2], color=color)
axes[1].scatter(data_set_3[::steps,i*9+3], data_set_3[::steps,i*9+4], data_set_3[::steps,i*9+5], color=color)
axes[2].scatter(data_set_3[::steps,i*9+6], data_set_3[::steps,i*9+7], data_set_3[::steps,i*9+8], color=color)
for i in range(0, (data_set_3.shape[1]-1)//9):
axes[0].scatter(data_set_3[::steps,i*9][-1], data_set_3[::steps,i*9+1][-1], data_set_3[::steps,i*9+2][-1],
color='grey', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md3_trajectories' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md3_trajectories' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('Distance (X)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('Distance (Y)', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Distance (Z)', fontsize=axislabelsize, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for i in range(0, (data_set_3.shape[1]-1)//9):
color = np.array([random.random(), random.random(), random.random()])
axes[0].plot(data_set_3[::steps,i*9], data_set_3[::steps,i*9+1], data_set_3[::steps,i*9+2], color=color, lw=3)
axes[0].scatter(data_set_3[::steps,i*9][-1], data_set_3[::steps,i*9+1][-1], data_set_3[::steps,i*9+2][-1],
color='red', s=200)
axes[1].scatter(data_set_3[::steps,i*9], data_set_3[::steps,i*9+1], data_set_3[::steps,i*9+2], color=color, s=10)
axes[1].scatter(data_set_3[::steps,i*9][-1], data_set_3[::steps,i*9+1][-1], data_set_3[::steps,i*9+2][-1],
color='red', s=200)
fig.tight_layout()
plt.savefig(image_path_trajectories +
'md3_trajectories_compare' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.savefig(image_path_trajectories +
'md3_trajectories_compare' + '.' +
'png',
format='png',
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=4
ncols=5
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*10,nrows*10))
axislabelsize = 18
for i in range(0,nrows):
for j in range(0,ncols):
p = int((i*nrows + j) * (len(data_set_3) / (nrows*ncols)))
velocities = np.array([np.sqrt(data_set_3[::steps][p][k*9]**2 +
data_set_3[::steps][p][k*9+1]**2 +
data_set_3[::steps][p][k*9+2]**2) for k in range(0, (data_set_3.shape[1]-1)//9)])
sns.distplot(velocities, ax=axes[i][j])
axes[i][j].set_xlabel('Velocities', fontsize=axislabelsize+10)
axes[i][j].tick_params(axis='both', which='major', labelsize=axislabelsize)
fig.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Differences with various rCutOffs, rMax and updateIntervals
###Code
runtimes = np.genfromtxt('runtimes.dat')
nrows=1
ncols=3
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*20), subplot_kw={'projection': '3d'})
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('rMax', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_ylabel('updateIntervals [n]', fontsize=axislabelsize, labelpad=labelpad)
axes[0].set_zlabel('Runtime [s]', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_xlabel('rCutOff', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_ylabel('updateIntervals [n]', fontsize=axislabelsize, labelpad=labelpad)
axes[1].set_zlabel('Runtime [s]', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_xlabel('rCutOff', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_ylabel('rMax', fontsize=axislabelsize, labelpad=labelpad)
axes[2].set_zlabel('Runtime [s]', fontsize=axislabelsize, labelpad=labelpad)
for i in range(0, ncols):
axes[i].tick_params(axis='both', which='major', labelsize=axislabelsize)
for k in range(1, len(runtimes)):
axes[0].scatter(runtimes[k][1], runtimes[k][2], runtimes[k][3])
axes[1].scatter(runtimes[k][0], runtimes[k][2], runtimes[k][3])
axes[2].scatter(runtimes[k][0], runtimes[k][1], runtimes[k][3])
fig.tight_layout()
plt.savefig(image_path_others +
'runtime_full_md3' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*12))
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('rMax', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_xlabel('updateIntervals [n]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for k in range(1, len(runtimes)):
axes[0].scatter(runtimes[k][1], runtimes[k][3])
axes[1].scatter(runtimes[k][2], runtimes[k][3])
fig.tight_layout()
plt.savefig(image_path_others +
'runtime_rmax_update_md3' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*12))
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('rCutOff', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_xlabel('updateIntervals [n]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for k in range(1, len(runtimes)):
axes[0].scatter(runtimes[k][0], runtimes[k][3])
axes[1].scatter(runtimes[k][2], runtimes[k][3])
fig.tight_layout()
plt.savefig(image_path_others +
'runtime_rcutoff_update_md3' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
nrows=1
ncols=2
fig, axes = plt.subplots(nrows=nrows, ncols=ncols, figsize=(ncols*20,nrows*12))
axislabelsize = 30
labelpad = 20
axes[0].set_xlabel('rCutOff', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_xlabel('rMax', fontsize=axislabelsize+3, labelpad=labelpad)
axes[1].set_ylabel('Runtime [s]', fontsize=axislabelsize+3, labelpad=labelpad)
axes[0].tick_params(axis='both', which='major', labelsize=axislabelsize)
axes[1].tick_params(axis='both', which='major', labelsize=axislabelsize)
for k in range(1, len(runtimes)):
axes[0].scatter(runtimes[k][0], runtimes[k][3])
axes[1].scatter(runtimes[k][1], runtimes[k][3])
fig.tight_layout()
plt.savefig(image_path_others +
'runtime_rcutoff_rmax_md3' + '.' +
image_format,
format=image_format,
dpi=image_dpi,
bbox_inches='tight')
plt.show()
###Output
_____no_output_____ |
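###Markdown
Added sketch (assuming `runtimes.dat` has the columns rCutOff, rMax, updateInterval and runtime in that order, as used in the scatter plots above, with a first row that the plotting loops skip): print the parameter combination with the shortest runtime.
###Code
# Added sketch: report the fastest (rCutOff, rMax, updateInterval) combination found in runtimes.dat.
best = runtimes[1:][np.argmin(runtimes[1:, 3])]
print('Fastest run: rCutOff = {0}, rMax = {1}, updateInterval = {2}, runtime = {3} s'.format(*best))
###Output
_____no_output_____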
docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb | ###Markdown
Training a short text classifier of German business names [View on recogn.ai](https://https://recognai.github.io/biome-text/master/documentation/tutorials/1-Training_a_text_classifier.html)[Run in Google Colab](https://colab.research.google.com/github/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb)[View source on GitHub](https://github.com/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb) When running this tutorial in Google Colab, make sure to install *biome.text* first:
###Code
!pip install -U pip
!pip install -U git+https://github.com/recognai/biome-text.git
exit(0) # Force restart of the runtime
###Output
_____no_output_____
###Markdown
*If* you want to log your runs with [WandB](https://wandb.ai/home), don't forget to install its client and log in.
###Code
!pip install wandb
!wandb login
###Output
_____no_output_____
###Markdown
IntroductionIn this tutorial we will train a basic short-text classifier for predicting the sector of a business based only on its business name. For this we will use a training data set with business names and business categories in German. ImportsLet us first import all the stuff we need for this tutorial:
###Code
from biome.text import Pipeline, Dataset, Trainer
from biome.text.configuration import VocabularyConfiguration, WordFeatures, TrainerConfiguration
###Output
_____no_output_____
###Markdown
Explore the training dataLet's take a look at the data we will use for training. For this we will use the [`Dataset`](https://recognai.github.io/biome-text/master/api/biome/text/dataset.htmldataset) class that is a very thin wrapper around HuggingFace's awesome [datasets.Dataset](https://huggingface.co/docs/datasets/master/package_reference/main_classes.htmldatasets.Dataset).We will download the data first to create `Dataset` instances.Apart from the training data we will also download an optional validation data set to estimate the generalization error.
###Code
# Downloading the dataset first
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.train.csv
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.valid.csv
# Loading from local
train_ds = Dataset.from_csv("business.cat.train.csv")
valid_ds = Dataset.from_csv("business.cat.valid.csv")
###Output
_____no_output_____
###Markdown
Most of HuggingFace's `Dataset` API is exposed and you can check out their nice [documentation](https://huggingface.co/docs/datasets/master/processing.html) on how to work with data in a `Dataset`. For example, let's quickly check the size of our training data and print the first 10 examples as a pandas DataFrame:
###Code
len(train_ds)
train_ds.head()
###Output
_____no_output_____
###Markdown
As we can see we have two relevant columns *label* and *text*. Our classifier will be trained to predict the *label* given the *text*. ::: tip TipThe [TaskHead](https://recognai.github.io/biome-text/master/api/biome/text/modules/heads/task_head.htmltaskhead) of our model below will expect a *text* and a *label* column to be present in the `Dataset`. In our data set this is already the case, otherwise we would need to change or map the corresponding column names via `Dataset.rename_column_()` or `Dataset.map()`.::: We can also quickly check the distribution of our labels. Use `Dataset.head(None)` to return the complete data set as a pandas DataFrame:
###Code
train_ds.head(None)["label"].value_counts()
###Output
_____no_output_____
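###Markdown
As noted in the tip above, if the *text* and *label* columns had different names in the raw data, they could be mapped to the expected names before training. A purely illustrative sketch (the column names `business_name` and `category` are invented for this example):```python
train_ds.rename_column_("business_name", "text")
train_ds.rename_column_("category", "label")
```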
###Markdown
The `Dataset` class also provides access to Hugging Face's extensive NLP datasets collection via the `Dataset.load_dataset()` method. Have a look at their [quicktour](https://huggingface.co/docs/datasets/master/quicktour.html) for more details about their awesome library. Configure your *biome.text* Pipeline A typical [Pipeline](https://recognai.github.io/biome-text/master/api/biome/text/pipeline.htmlpipeline) consists of tokenizing the input, extracting features, applying a language encoding (optionally) and executing a task-specific head in the end.After training a pipeline, you can use it to make predictions.As a first step we must define a configuration for our pipeline. In this tutorial we will create a configuration dictionary and use the `Pipeline.from_config()` method to create our pipeline, but there are [other ways](https://recognai.github.io/biome-text/master/api/biome/text/pipeline.htmlpipeline).A *biome.text* pipeline has the following main components:```yamlname: a descriptive name of your pipelinetokenizer: how to tokenize the inputfeatures: input features of the modelencoder: the language encoderhead: your task configuration```See the [Configuration section](https://recognai.github.io/biome-text/master/documentation/user-guides/2-configuration.html) for a detailed description of how these main components can be configured.Our complete configuration for this tutorial will be the following:
###Code
pipeline_dict = {
"name": "german_business_names",
"tokenizer": {
"text_cleaning": {
"rules": ["strip_spaces"]
}
},
"features": {
"word": {
"embedding_dim": 64,
"lowercase_tokens": True,
},
"char": {
"embedding_dim": 32,
"lowercase_characters": True,
"encoder": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"dropout": 0.1,
},
},
"head": {
"type": "TextClassification",
"labels": train_ds.unique("label"),
"pooler": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"feedforward": {
"num_layers": 1,
"hidden_dims": [32],
"activations": ["relu"],
"dropout": [0.0],
},
},
}
###Output
_____no_output_____
###Markdown
With this dictionary we can now create a `Pipeline`:
###Code
pl = Pipeline.from_config(pipeline_dict)
###Output
_____no_output_____
###Markdown
Configure the vocabularyThe default behavior of *biome.text* is to add all tokens from the training data set to the pipeline's vocabulary. This is done automatically when training the pipeline for the first time.If you want to have more control over this step, you can define a `VocabularyConfiguration` and pass it to the [`Trainer`](https://recognai.github.io/biome-text/master/api/biome/text/trainer.html) later on.In our business name classifier we only want to include words with a general meaning to our word feature vocabulary (like "Computer" or "Autohaus", for example), and want to exclude specific names that will not help to generally classify the kind of business.This can be achieved by including only the most frequent words in our training set via the `min_count` argument. For a complete list of available arguments see the [VocabularyConfiguration API](https://recognai.github.io/biome-text/master/api/biome/text/configuration.htmlvocabularyconfiguration).
###Code
vocab_config = VocabularyConfiguration(min_count={WordFeatures.namespace: 20})
###Output
_____no_output_____
###Markdown
Configure the trainerAs a next step we have to configure the [`Trainer`](https://recognai.github.io/biome-text/master/api/biome/text/trainer.html), which essentially is a light wrapper around the amazing [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html).The default trainer has sensible defaults and should work alright for most of your cases.In this tutorial, however, we want to tune the learning rate a bit and limit the training time to three epochs only.We also want to modify the monitored validation metric (by default it is the `validation_loss`) that is used to rank the checkpoints, as well as for the early stopping mechanism and to load the best model weights at the end of the training.For a complete list of available arguments see the [TrainerConfiguration API](https://recognai.github.io/biome-text/master/api/biome/text/configuration.htmltrainerconfiguration).::: tip TipBy default we will use a CUDA device if one is available. If you prefer not to use it, just set `gpus=0` in the `TrainerConfiguration`.:::::: tip TipThe default [WandB](https://wandb.ai/site) logger will log the runs to the "biome" project. You can easily change this by setting the `WANDB_PROJECT` env variable:```pythonimport osos.environ["WANDB_PROJECT"] = "my_project"```:::
###Code
trainer_config = TrainerConfiguration(
optimizer={
"type": "adam",
"lr": 0.01,
},
max_epochs=3,
monitor="validation_accuracy",
monitor_mode="max"
)
###Output
_____no_output_____
###Markdown
Train your modelNow we have everything ready to start the training of our model:- training data set- pipeline- trainer configurationIn a first step we have to create a `Trainer` instance and pass in the pipeline, the training/validation data, the trainer configuration and our vocabulary configuration.This will load the data into memory (unless you specify `lazy=True`) and build the vocabulary.
###Code
trainer = Trainer(
pipeline=pl,
train_dataset=train_ds,
valid_dataset=valid_ds,
trainer_config=trainer_config,
vocab_config=vocab_config,
)
###Output
_____no_output_____
###Markdown
In a second step we simply have to call the `Trainer.fit()` method to start the training.By default, at the end of the training the trained pipeline and the training metrics will be saved in a folder called `output`.The trained pipeline is saved as a `model.tar.gz` file that contains the pipeline configuration, the model weights and the vocabulary.The metrics are saved to a `metrics.json` file.During the training the `Trainer` will also create a logging folder called `training_logs` by default.You can modify this path via the `default_root_dir` option in your `TrainerConfiguration`, that also supports remote addresses such as s3 or hdfs.This logging folder contains all your checkpoints and logged metrics, like the ones logged for [TensorBoard](https://www.tensorflow.org/tensorboard/) for example.
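For instance, the logging location could have been redirected when building the trainer configuration (an illustrative sketch only, not part of the original tutorial; the s3 path is made up):```python
trainer_config = TrainerConfiguration(
    default_root_dir="s3://my-bucket/biome-logs",  # made-up remote path, for illustration
    max_epochs=3,
    monitor="validation_accuracy",
    monitor_mode="max",
)
```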
###Code
trainer.fit()
###Output
_____no_output_____
###Markdown
After 3 epochs we achieve a validation accuracy of about 0.91.The validation loss seems to be decreasing further, though, so we could probably train the model for a few more epochs without overfitting the training data.For this we could simply reinitialize the `Trainer` and call `Trainer.fit(exist_ok=True)` again.::: tip TipIf for some reason the training gets interrupted, you can continue from the last saved checkpoint by setting the `resume_from_checkpoint` option in the `TrainerConfiguration`.:::::: tip TipIf you receive warnings about the data loader being a bottleneck, try to increase the `num_workers_for_dataloader` parameter in the `TrainerConfiguration` (up to the number of cpus on your machine).::: Make your first predictions Now that we trained our model we can go on to make our first predictions. We provide the input expected by our `TaskHead` of the model to the `Pipeline.predict()` method.In our case it is a `TextClassification` head that classifies a `text` input:
###Code
pl.predict(text="Autohaus biome.text")
###Output
_____no_output_____
###Markdown
The output of the `Pipeline.predict()` method is a dictionary with a `labels` and `probabilities` key containing a list of labels and their corresponding probabilities, ordered from most to least likely. ::: tip TipWhen configuring the pipeline in the first place, we recommend checking that it is correctly set up by using the `predict` method.Since the pipeline is still not trained at that moment, the predictions will be arbitrary.::: We can also load the trained pipeline from the training output. This is useful in case you trained the pipeline in some earlier session, and want to continue your work with the inference steps:
###Code
pl_trained = Pipeline.from_pretrained("output/model.tar.gz")
###Output
_____no_output_____
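###Markdown
As a final added sketch (based on the output format described above), the most likely label and its probability can be read directly from the prediction dictionary returned by the reloaded pipeline:
###Code
# Added sketch: read the top prediction from the reloaded pipeline.
prediction = pl_trained.predict(text="Autohaus biome.text")
best_label = prediction["labels"][0]  # labels are ordered from most to least likely
best_probability = prediction["probabilities"][0]
print(best_label, best_probability)
###Output
_____no_output_____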
###Markdown
Training a short text classifier of German business names [View on recogn.ai](https://https://recognai.github.io/biome-text/v3.2.1/documentation/tutorials/1-Training_a_text_classifier.html)[Run in Google Colab](https://colab.research.google.com/github/recognai/biome-text/blob/v3.2.1/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb)[View source on GitHub](https://github.com/recognai/biome-text/blob/v3.2.1/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb) When running this tutorial in Google Colab, make sure to install *biome.text* first:
###Code
!pip install -U pip
!pip install -U biome-text
exit(0) # Force restart of the runtime
###Output
_____no_output_____
###Markdown
*If* you want to log your runs with [WandB](https://wandb.ai/home), don't forget to install its client and log in.
###Code
!pip install wandb
!wandb login
###Output
_____no_output_____
###Markdown
IntroductionIn this tutorial we will train a basic short-text classifier for predicting the sector of a business based only on its business name. For this we will use a training data set with business names and business categories in German. ImportsLet us first import all the stuff we need for this tutorial:
###Code
from biome.text import Pipeline, Dataset, Trainer
from biome.text.configuration import VocabularyConfiguration, WordFeatures, TrainerConfiguration
###Output
_____no_output_____
###Markdown
Explore the training dataLet's take a look at the data we will use for training. For this we will use the [`Dataset`](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/dataset.htmldataset) class that is a very thin wrapper around HuggingFace's awesome [datasets.Dataset](https://huggingface.co/docs/datasets/master/package_reference/main_classes.htmldatasets.Dataset).We will download the data first to create `Dataset` instances.Apart from the training data we will also download an optional validation data set to estimate the generalization error.
###Code
# Downloading the dataset first
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.train.csv
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.valid.csv
# Loading from local
train_ds = Dataset.from_csv("business.cat.train.csv")
valid_ds = Dataset.from_csv("business.cat.valid.csv")
###Output
_____no_output_____
###Markdown
Most of HuggingFace's `Dataset` API is exposed and you can check out their nice [documentation](https://huggingface.co/docs/datasets/master/processing.html) on how to work with data in a `Dataset`. For example, let's quickly check the size of our training data and print the first 10 examples as a pandas DataFrame:
###Code
len(train_ds)
train_ds.head()
###Output
_____no_output_____
###Markdown
As we can see we have two relevant columns *label* and *text*. Our classifier will be trained to predict the *label* given the *text*. ::: tip TipThe [TaskHead](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/modules/heads/task_head.htmltaskhead) of our model below will expect a *text* and a *label* column to be present in the `Dataset`. In our data set this is already the case, otherwise we would need to change or map the corresponding column names via `Dataset.rename_column_()` or `Dataset.map()`.::: We can also quickly check the distribution of our labels. Use `Dataset.head(None)` to return the complete data set as a pandas DataFrame:
###Code
train_ds.head(None)["label"].value_counts()
###Output
_____no_output_____
###Markdown
The `Dataset` class also provides access to Hugging Face's extensive NLP datasets collection via the `Dataset.load_dataset()` method. Have a look at their [quicktour](https://huggingface.co/docs/datasets/master/quicktour.html) for more details about their awesome library. Configure your *biome.text* Pipeline A typical [Pipeline](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/pipeline.htmlpipeline) consists of tokenizing the input, extracting features, applying a language encoding (optionally) and executing a task-specific head in the end.After training a pipeline, you can use it to make predictions.As a first step we must define a configuration for our pipeline. In this tutorial we will create a configuration dictionary and use the `Pipeline.from_config()` method to create our pipeline, but there are [other ways](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/pipeline.htmlpipeline).A *biome.text* pipeline has the following main components:```yamlname: a descriptive name of your pipelinetokenizer: how to tokenize the inputfeatures: input features of the modelencoder: the language encoderhead: your task configuration```See the [Configuration section](https://recognai.github.io/biome-text/v3.2.1/documentation/user-guides/2-configuration.html) for a detailed description of how these main components can be configured.Our complete configuration for this tutorial will be the following:
###Code
pipeline_dict = {
"name": "german_business_names",
"tokenizer": {
"text_cleaning": {
"rules": ["strip_spaces"]
}
},
"features": {
"word": {
"embedding_dim": 64,
"lowercase_tokens": True,
},
"char": {
"embedding_dim": 32,
"lowercase_characters": True,
"encoder": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"dropout": 0.1,
},
},
"head": {
"type": "TextClassification",
"labels": train_ds.unique("label"),
"pooler": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"feedforward": {
"num_layers": 1,
"hidden_dims": [32],
"activations": ["relu"],
"dropout": [0.0],
},
},
}
###Output
_____no_output_____
###Markdown
With this dictionary we can now create a `Pipeline`:
###Code
pl = Pipeline.from_config(pipeline_dict)
###Output
_____no_output_____
###Markdown
Configure the vocabularyThe default behavior of *biome.text* is to add all tokens from the training data set to the pipeline's vocabulary. This is done automatically when training the pipeline for the first time.If you want to have more control over this step, you can define a `VocabularyConfiguration` and pass it to the [`Trainer`](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/trainer.html) later on.In our business name classifier we only want to include words with a general meaning to our word feature vocabulary (like "Computer" or "Autohaus", for example), and want to exclude specific names that will not help to generally classify the kind of business.This can be achieved by including only the most frequent words in our training set via the `min_count` argument. For a complete list of available arguments see the [VocabularyConfiguration API](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/configuration.htmlvocabularyconfiguration).
###Code
vocab_config = VocabularyConfiguration(min_count={WordFeatures.namespace: 20})
###Output
_____no_output_____
###Markdown
Configure the trainerAs a next step we have to configure the [`Trainer`](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/trainer.html), which essentially is a light wrapper around the amazing [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html).The default trainer has sensible defaults and should work alright for most of your cases.In this tutorial, however, we want to tune the learning rate a bit and limit the training time to three epochs only.We also want to modify the monitored validation metric (by default it is the `validation_loss`) that is used to rank the checkpoints, as well as for the early stopping mechanism and to load the best model weights at the end of the training.For a complete list of available arguments see the [TrainerConfiguration API](https://recognai.github.io/biome-text/v3.2.1/api/biome/text/configuration.htmltrainerconfiguration).::: tip TipBy default we will use a CUDA device if one is available. If you prefer not to use it, just set `gpus=0` in the `TrainerConfiguration`.:::::: tip TipThe default [WandB](https://wandb.ai/site) logger will log the runs to the "biome" project. You can easily change this by setting the `WANDB_PROJECT` env variable:```pythonimport osos.environ["WANDB_PROJECT"] = "my_project"```:::
###Code
trainer_config = TrainerConfiguration(
optimizer={
"type": "adam",
"lr": 0.01,
},
max_epochs=3,
monitor="validation_accuracy",
monitor_mode="max"
)
###Output
_____no_output_____
###Markdown
Train your modelNow we have everything ready to start the training of our model:- training data set- pipeline- trainer configurationIn a first step we have to create a `Trainer` instance and pass in the pipeline, the training/validation data, the trainer configuration and our vocabulary configuration.This will load the data into memory (unless you specify `lazy=True`) and build the vocabulary.
###Code
trainer = Trainer(
pipeline=pl,
train_dataset=train_ds,
valid_dataset=valid_ds,
trainer_config=trainer_config,
vocab_config=vocab_config,
)
###Output
_____no_output_____
###Markdown
In a second step we simply have to call the `Trainer.fit()` method to start the training.By default, at the end of the training the trained pipeline and the training metrics will be saved in a folder called `output`.The trained pipeline is saved as a `model.tar.gz` file that contains the pipeline configuration, the model weights and the vocabulary.The metrics are saved to a `metrics.json` file.During the training the `Trainer` will also create a logging folder called `training_logs` by default.You can modify this path via the `default_root_dir` option in your `TrainerConfiguration`, that also supports remote addresses such as s3 or hdfs.This logging folder contains all your checkpoints and logged metrics, like the ones logged for [TensorBoard](https://www.tensorflow.org/tensorboard/) for example.
###Code
trainer.fit()
###Output
_____no_output_____
###Markdown
After 3 epochs we achieve a validation accuracy of about 0.91.The validation loss seems to be decreasing further, though, so we could probably train the model for a few more epochs without overfitting the training data.For this we could simply reinitialize the `Trainer` and call `Trainer.fit(exist_ok=True)` again.::: tip TipIf for some reason the training gets interrupted, you can continue from the last saved checkpoint by setting the `resume_from_checkpoint` option in the `TrainerConfiguration`.:::::: tip TipIf you receive warnings about the data loader being a bottleneck, try to increase the `num_workers_for_dataloader` parameter in the `TrainerConfiguration` (up to the number of cpus on your machine).::: Make your first predictions Now that we trained our model we can go on to make our first predictions. We provide the input expected by our `TaskHead` of the model to the `Pipeline.predict()` method.In our case it is a `TextClassification` head that classifies a `text` input:
###Code
pl.predict(text="Autohaus biome.text")
###Output
_____no_output_____
###Markdown
The output of the `Pipeline.predict()` method is a dictionary with a `labels` and `probabilities` key containing a list of labels and their corresponding probabilities, ordered from most to least likely. ::: tip TipWhen configuring the pipeline in the first place, we recommend checking that it is correctly set up by using the `predict` method.Since the pipeline is still not trained at that moment, the predictions will be arbitrary.::: We can also load the trained pipeline from the training output. This is useful in case you trained the pipeline in some earlier session, and want to continue your work with the inference steps:
###Code
pl_trained = Pipeline.from_pretrained("output/model.tar.gz")
###Output
_____no_output_____
###Markdown
Training a short text classifier of German business names [View on recogn.ai](https://www.recogn.ai/biome-text/master/documentation/tutorials/1-Training_a_text_classifier.html)[Run in Google Colab](https://colab.research.google.com/github/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb)[View source on GitHub](https://github.com/recognai/biome-text/blob/master/docs/docs/documentation/tutorials/1-Training_a_text_classifier.ipynb) When running this tutorial in Google Colab, make sure to install *biome.text* first:
###Code
!pip install -U pip
!pip install -U git+https://github.com/recognai/biome-text.git
exit(0) # Force restart of the runtime
###Output
_____no_output_____
###Markdown
*If* you want to log your runs with [WandB](https://wandb.ai/home), don't forget to install its client and log in.
###Code
!pip install wandb
!wandb login
###Output
_____no_output_____
###Markdown
IntroductionIn this tutorial we will train a basic short-text classifier for predicting the sector of a business based only on its business name. For this we will use a training data set with business names and business categories in German. ImportsLet us first import all the stuff we need for this tutorial:
###Code
from biome.text import Pipeline, Dataset, Trainer
from biome.text.configuration import VocabularyConfiguration, WordFeatures, TrainerConfiguration
###Output
_____no_output_____
###Markdown
Explore the training dataLet's take a look at the data we will use for training. For this we will use the [`Dataset`](https://www.recogn.ai/biome-text/master/api/biome/text/dataset.htmldataset) class that is a very thin wrapper around HuggingFace's awesome [datasets.Dataset](https://huggingface.co/docs/datasets/master/package_reference/main_classes.htmldatasets.Dataset). We will download the data first to create `Dataset` instances.Apart from the training data we will also download an optional validation data set to estimate the generalization error.
###Code
# Downloading the dataset first
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.train.csv
!curl -O https://biome-tutorials-data.s3-eu-west-1.amazonaws.com/text_classifier/business.cat.valid.csv
# Loading from local
train_ds = Dataset.from_csv("business.cat.train.csv")
valid_ds = Dataset.from_csv("business.cat.valid.csv")
###Output
_____no_output_____
###Markdown
Most of HuggingFace's `Dataset` API is exposed and you can check out their nice [documentation](https://huggingface.co/docs/datasets/master/processing.html) on how to work with data in a `Dataset`. For example, let's quickly check the size of our training data and print the first 10 examples as a pandas DataFrame:
###Code
len(train_ds)
train_ds.head()
###Output
_____no_output_____
###Markdown
As we can see we have two relevant columns *label* and *text*. Our classifier will be trained to predict the *label* given the *text*. ::: tip TipThe [TaskHead](https://www.recogn.ai/biome-text/master/api/biome/text/modules/heads/task_head.htmltaskhead) of our model below will expect a *text* and a *label* column to be present in the `Dataset`. In our data set this is already the case, otherwise we would need to change or map the corresponding column names via `Dataset.rename_column_()` or `Dataset.map()`.::: We can also quickly check the distribution of our labels. Use `Dataset.head(None)` to return the complete data set as a pandas DataFrame:
###Code
train_ds.head(None)["label"].value_counts()
###Output
_____no_output_____
###Markdown
The `Dataset` class also provides access to Hugging Face's extensive NLP datasets collection via the `Dataset.load_dataset()` method. Have a look at their [quicktour](https://huggingface.co/docs/datasets/master/quicktour.html) for more details about their awesome library. Configure your *biome.text* Pipeline A typical [Pipeline](https://www.recogn.ai/biome-text/master/api/biome/text/pipeline.htmlpipeline) consists of tokenizing the input, extracting features, applying a language encoding (optionally) and executing a task-specific head in the end.After training a pipeline, you can use it to make predictions.As a first step we must define a configuration for our pipeline. In this tutorial we will create a configuration dictionary and use the `Pipeline.from_config()` method to create our pipeline, but there are [other ways](https://www.recogn.ai/biome-text/master/api/biome/text/pipeline.htmlpipeline).A *biome.text* pipeline has the following main components:```yamlname: a descriptive name of your pipelinetokenizer: how to tokenize the inputfeatures: input features of the modelencoder: the language encoderhead: your task configuration```See the [Configuration section](https://www.recogn.ai/biome-text/master/documentation/user-guides/2-configuration.html) for a detailed description of how these main components can be configured.Our complete configuration for this tutorial will be the following:
###Code
pipeline_dict = {
"name": "german_business_names",
"tokenizer": {
"text_cleaning": {
"rules": ["strip_spaces"]
}
},
"features": {
"word": {
"embedding_dim": 64,
"lowercase_tokens": True,
},
"char": {
"embedding_dim": 32,
"lowercase_characters": True,
"encoder": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"dropout": 0.1,
},
},
"head": {
"type": "TextClassification",
"labels": train_ds.unique("label"),
"pooler": {
"type": "gru",
"num_layers": 1,
"hidden_size": 32,
"bidirectional": True,
},
"feedforward": {
"num_layers": 1,
"hidden_dims": [32],
"activations": ["relu"],
"dropout": [0.0],
},
},
}
###Output
_____no_output_____
###Markdown
With this dictionary we can now create a `Pipeline`:
###Code
pl = Pipeline.from_config(pipeline_dict)
###Output
_____no_output_____
###Markdown
Configure the vocabularyThe default behavior of *biome.text* is to add all tokens from the training data set to the pipeline's vocabulary. This is done automatically when training the pipeline for the first time.If you want to have more control over this step, you can define a `VocabularyConfiguration` and pass it to the [`Trainer`](https://www.recogn.ai/biome-text/master/api/biome/text/trainer.html) later on.In our business name classifier we only want to include words with a general meaning to our word feature vocabulary (like "Computer" or "Autohaus", for example), and want to exclude specific names that will not help to generally classify the kind of business.This can be achieved by including only the most frequent words in our training set via the `min_count` argument. For a complete list of available arguments see the [VocabularyConfiguration API](https://www.recogn.ai/biome-text/master/api/biome/text/configuration.htmlvocabularyconfiguration).
###Code
vocab_config = VocabularyConfiguration(min_count={WordFeatures.namespace: 20})
###Output
_____no_output_____
###Markdown
Configure the trainerAs a next step we have to configure the [`Trainer`](https://www.recogn.ai/biome-text/master/api/biome/text/trainer.html), which essentially is a light wrapper around the amazing [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/latest/common/trainer.html).The default trainer has sensible defaults and should work alright for most of your cases.In this tutorial, however, we want to tune the learning rate a bit and limit the training time to three epochs only.We also want to modify the monitored validation metric (by default it is the `validation_loss`) that is used to rank the checkpoints, as well as for the early stopping mechanism and to load the best model weights at the end of the training.For a complete list of available arguments see the [TrainerConfiguration API](https://www.recogn.ai/biome-text/master/api/biome/text/configuration.htmltrainerconfiguration).::: tip TipBy default we will use a CUDA device if one is available. If you prefer not to use it, just set `gpus=0` in the `TrainerConfiguration`.:::::: tip TipThe default [WandB](https://wandb.ai/site) logger will log the runs to the "biome" project. You can easily change this by setting the `WANDB_PROJECT` env variable:```pythonimport osos.environ["WANDB_PROJECT"] = "my_project"```:::
###Code
trainer_config = TrainerConfiguration(
optimizer={
"type": "adam",
"lr": 0.01,
},
max_epochs=3,
monitor="validation_accuracy",
monitor_mode="max"
)
###Output
_____no_output_____
###Markdown
Train your modelNow we have everything ready to start the training of our model:- training data set- pipeline- trainer configurationIn a first step we have to create a `Trainer` instance and pass in the pipeline, the training/validation data, the trainer configuration and our vocabulary configuration.This will load the data into memory (unless you specify `lazy=True`) and build the vocabulary.
###Code
trainer = Trainer(
pipeline=pl,
train_dataset=train_ds,
valid_dataset=valid_ds,
trainer_config=trainer_config,
vocab_config=vocab_config,
)
###Output
_____no_output_____
###Markdown
In a second step we simply have to call the `Trainer.fit()` method to start the training.By default, at the end of the training the trained pipeline and the training metrics will be saved in a folder called `output`.The trained pipeline is saved as a `model.tar.gz` file that contains the pipeline configuration, the model weights and the vocabulary.The metrics are saved to a `metrics.json` file.During the training the `Trainer` will also create a logging folder called `training_logs` by default.You can modify this path via the `default_root_dir` option in your `TrainerConfiguration`, that also supports remote addresses such as s3 or hdfs.This logging folder contains all your checkpoints and logged metrics, like the ones logged for [TensorBoard](https://www.tensorflow.org/tensorboard/) for example.
###Code
trainer.fit()
###Output
_____no_output_____
###Markdown
After 3 epochs we achieve a validation accuracy of about 0.91.The validation loss seems to be decreasing further, though, so we could probably train the model for a few more epochs without overfitting the training data.For this we could simply reinitialize the `Trainer` and call `Trainer.fit(exist_ok=True)` again.::: tip TipIf for some reason the training gets interrupted, you can continue from the last saved checkpoint by setting the `resume_from_checkpoint` option in the `TrainerConfiguration`.:::::: tip TipIf you receive warnings about the data loader being a bottleneck, try to increase the `num_workers_for_dataloader` parameter in the `TrainerConfiguration` (up to the number of cpus on your machine).::: Make your first predictions Now that we trained our model we can go on to make our first predictions. We provide the input expected by our `TaskHead` of the model to the `Pipeline.predict()` method.In our case it is a `TextClassification` head that classifies a `text` input:
###Code
pl.predict(text="Autohaus biome.text")
###Output
_____no_output_____
###Markdown
The output of the `Pipeline.predict()` method is a dictionary with a `labels` and `probabilities` key containing a list of labels and their corresponding probabilities, ordered from most to least likely. ::: tip TipWhen configuring the pipeline in the first place, we recommend checking that it is correctly set up by using the `predict` method.Since the pipeline is still not trained at that moment, the predictions will be arbitrary.::: We can also load the trained pipeline from the training output. This is useful in case you trained the pipeline in some earlier session, and want to continue your work with the inference steps:
###Code
pl_trained = Pipeline.from_pretrained("output/model.tar.gz")
###Output
_____no_output_____ |
src/[1]_BuildingDamage_STA221_feature_ML.ipynb | ###Markdown
Comparative Assessment of Building Damage from High-Resolution Satellite-Based Images Erica Scaduto ([email protected]), Yuhan Huang ([email protected]) The automatic extraction and evaluation of building damage by natural hazards can aid in assessing risk management and mapping distributions of urban vulnerability. Although wildfire is a common and critical natural disaster posing significant threats, most previous studies, constrained by methods and data quality, focused only on large-scale disasters. Deriving a reliable and efficient building extraction and damage classification method has presented challenges due to regional differences in development type (e.g. rural vs metropolitan), as well as the sheer variety in building characteristics (i.e. color, shape, materials). In this project, we intend to compare different machine learning algorithms for image classification to develop a general framework for fire-induced building damage evaluation from high-resolution remotely sensed images.Our results show machine learning models, based on spectral, texture, and convolutional features, have promising utility for post-fire building-damage monitoring. For the binary classification scheme, the Random Forest (RF) classifier performed the best, with an overall accuracy of 93% and a kappa of 0.73. For the multiclass scheme (i.e. without vegetation mask), XGBoost performed better than the 5-layer neural networks. It is able to detect building areas but is less accurate in predicting damage when compared with the binary case. Feature engineering also proved to be an essential step in model building. In particular, the addition of SNIC segmentation greatly improved overall model performance for both the RF and XGBoost classifiers. Main Pipeline:**Data Preprocessing**: As both XView building annotations and NAIP images include geographic information, XView data was converted to shapefiles to filter out corresponding NAIP images in the same location through the GEE API. Both pre-fire and post-fire images were used. **Feature Engineering:** Based on the four channels of NAIP images, features were further calculated through band math, convolutional filters and unsupervised methods. **Classification Model:** Decision tree, SVM, random forest, XGBoost, and simple neural networks were built to test their predictability of building damage types. The decision tree, SVM, and random forest were run directly on the server provided by the GEE API. XGBoost and neural net were run locally on images extracted through GEE from Google Colab. **Model Evaluation** Table of Contents(1) __Preprocessing XView Data__ (See file: 0_BuildingDamage_STA221_preprocessing.ipynb): extract geographic coordinates and fire-related annotations as geojson and shapefiles from XView json files(2) __Set up__: Load API and packages. Connect to Google Drive, GEE API, and load annotations(3) __Visualize Dataset__: use GEE API and leaflet to visualize ground truth labels and acquire NAIP images based on the locations of these labels(4) __Vegetation Indices and Texture Features__: calculate several useful features for classification, including remote sensing indices, texture metrics, and some layers filtered by convolutional filters(5) __Unsupervised Clustering Features__: use unsupervised methods to get clusters, superpixels, or segmentations as features(6) __Image Extraction__: using the extent of buildings with the same image id as the boundary to extract NAIP images from the API.
(7) __Supervised Classification (server-end)__: GEE provide some fuctions for machine learning method (*Decision Tree, SVM, and Random Forest*) to do the classification directly on its server. Test these methods on the NAIP images and summarize their results(8) __Supervised Classificatoin (client-end, XGBoost)__: See notebook *BuildingProj_XGBoost*. (9) __Supervised Classification (client-end, Simple Neural Network)__: See notebook *BuildingProj_NN*. Dataset: - Building Annotations from XView- NAIP (National Agriculture Imagery Program) aerial photos (resolution: 0.6 meter) XView Annotations- Labels for two fire events in 2017: Santa Rosa & South CA fires (geographic coordinates only) NAIP images (acquired from GEE API)- RGB, Near Infrared- both pre- and post- fire images Setting UpThe following steps below will (1) mount the colab to the appropriate drive directory (2) Load the training and tes datasets (3) connect to GEE API and load the data as GEE Assets Connect to Drive & Set Directory
###Code
from google.colab import drive # import drive from google colab
ROOT = "/content/drive" # default location for the drive
print(ROOT) # print content of ROOT (Optional)
drive.mount(ROOT) # we mount the google drive at /content/drive
%cd "/content/drive/My Drive/STA221_FinalProj"
%ls "./Data/FireDataset/train"
import os
rootPath = '/content/drive/My Drive/STA221_FinalProj'
os.chdir(rootPath)
###Output
/content/drive/.shortcut-targets-by-id/1xQURupjEB6eidd-IqFW8qhj4FjXXQz8r/STA221_FinalProj
[0m[01;34msanta-rosa-wildfire[0m/ [01;34msocal-fire[0m/
###Markdown
Load Data
###Code
# install dependencies and packages
! pip install geopandas
import numpy as np
import pandas as pd
import geopandas as gpd
def lstFiles(rootPath, ext):
'''
retrieve file path + names based on extension
'''
file_list = []
root = rootPath
for path, subdirs, files in os.walk(root):
for names in files:
if names.endswith(ext) and not names.startswith("._"):
file_list.append(path +"/" + names)
return(file_list)
def createFolder(rootPath, folderName):
'''
Create new folder in root path
'''
folderPath = os.path.join(rootPath, folderName)
if not os.path.exists(folderPath):
os.makedirs(folderPath)
return folderPath + "/"
merged_path = "./Data/FireDataset/merged_shp"
merged_files = lstFiles(merged_path, '.shp')
train_shp = merged_files[0]
test_shp = merged_files[1]
from pyproj import CRS
def subsetData(dataPath, path, outfolder):
'''(1) Subset training dataset w/ Santa Rosa (post fire) & Socal (pre fire) ** since NAIP doesn't reflect post-fire damage
(2) Combine the two subsets
(3) Remove any unclassified classes
(4) Convert the 'damage' factor classes to a binary int label (no-damage: 0, any damage: 1)
(5) Output to a folder in the root path
'''
gdf = gpd.read_file(dataPath)
santaRosa = gdf[(gdf['location_n'] == 'santa-rosa-wildfire') & (gdf['pre_post_d'] == 'post')]
socal = gdf[(gdf['location_n'] == 'socal-fire') & (gdf['pre_post_d'] == 'pre')]
joined_all = gpd.GeoDataFrame( pd.concat( [santaRosa, socal], ignore_index=True) )
joined_all = joined_all[(joined_all['damage'] != 'unclassified') & (joined_all['damage'] != 'un-classified')]
joined_all['class'] = np.where(joined_all['damage']== 'no-damage', 0,1)
joined_all.crs = "EPSG:4326"
outPath = createFolder(path, outfolder)
joined_all.to_file(os.path.join(outPath, outfolder + '.shp'))
return joined_all
path = "Data/FireDataset/merged_shp"
trainDF = subsetData(train_shp, path, 'train_filt')
testDF = subsetData(test_shp, path, 'test_filt')
print(len(trainDF[trainDF['class'] == 0]), len(trainDF[trainDF['class'] == 1]),len(trainDF))
print(len(testDF[testDF['class'] == 0]), len(testDF[testDF['class'] == 1]), len(testDF))
###Output
19760 3612 23372
7494 992 8486
###Markdown
Connect to GEE API & Load Asset
###Code
# initialize and connect to GEE
from google.colab import auth
auth.authenticate_user()
!earthengine authenticate
import ee
ee.Initialize()
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Connect to google cloud
! gcloud auth login
# export files hosted in cloud bucket as assets to GEE
# needed to set up a bucket in google cloud: gs://esca_bucket
! earthengine upload table --asset_id=users/escaduto/XVIEW_newtraining gs://esca_bucket/train_filt/train_filt.shp
! earthengine upload table --asset_id=users/escaduto/XVIEW_newtesting gs://esca_bucket/test_filt/test_filt.shp
# import feature collection asset
train_data = ee.FeatureCollection('users/escaduto/XVIEW_newtraining')
test_data = ee.FeatureCollection('users/escaduto/XVIEW_newtesting')
###Output
_____no_output_____
###Markdown
Interactively Visualize Dataset w/ GEE API & leaflet
The annotated test/train xView datasets were read as feature collections via the GEE API. Now we want to visualize the building footprints and identify which ones are classified as damaged vs. non-damaged. To do this, we will use an interactive mapping tool to load the data directly into the notebook.

Building Geometry w/ Attributes
###Code
def visualizeByAttribute(fc, className):
'''
visualize building polygon based on damage type 'class' (0,1)
'''
empty = ee.Image().byte()
feature = empty.paint(**{
'featureCollection': fc,
'color': className,
'width': 1
})
return feature
train_palette = ['green', # no-damage (0)
'red' # destroyed (1)
]
test_palette = ['yellow', # no-damage(0)
'blue' # destroyed (1)
]
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=16)
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': test_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Load NAIP Imagery
Another crucial step before we can get started is to load the NAIP imagery from GEE. To do this, we must first identify our area of interest, which is the bounding box of the data extent. We will obtain pre-/post-fire imagery and overlay the annotated building footprints on top.
###Code
def extract_coords(geom):
'''
takes one polygon from geopandas and converts it to the GEE geometry format
input: geom from each row of the 'geometry' column in the gpd dataframe
output: coordinate list of the GEE geometry
'''
try:
coords=geom.__geo_interface__['coordinates']
geom_extr=[list(map(list,coord)) for coord in coords]
return geom_extr
except:
pass
def get_bounds(gdf):
'''
takes a geo data frame get convert its bounding extent to a GEE format rectangle
'''
bounds=gdf.total_bounds
geom_bound=[[ [bounds[0],bounds[1]], [bounds[2],bounds[1]], [bounds[2],bounds[3]], [bounds[0],bounds[3]]]]
return geom_bound
gdf = gpd.read_file(train_shp)
santaRosa = gdf[(gdf['location_n'] == 'santa-rosa-wildfire') & (gdf['pre_post_d'] == 'post')]
socal = gdf[(gdf['location_n'] == 'socal-fire') & (gdf['pre_post_d'] == 'pre')]
SR_bounds=get_bounds(santaRosa)
SC_bounds=get_bounds(socal)
sr_geom=[extract_coords(pol) for pol in santaRosa['geometry']]
SR_ROI = ee.Geometry.MultiPolygon(sr_geom)
SR_Bound_Box=ee.Geometry.Polygon(SR_bounds)
sc_geom=[extract_coords(pol) for pol in socal['geometry']]
SC_ROI = ee.Geometry.MultiPolygon(sc_geom)
SC_Bound_Box=ee.Geometry.Polygon(SC_bounds)
# combine the bounding boxes from above into feature collection
features = [
ee.Feature(SC_Bound_Box),
ee.Feature(SR_Bound_Box)
]
finalBounds = ee.FeatureCollection(features);
preFire = ee.Image(ee.ImageCollection('USDA/NAIP/DOQQ')
.filter(ee.Filter.date('2014-01-01', '2015-12-31'))
.select(['R', 'G', 'B', 'N'])
.filterBounds(finalBounds)
.mosaic());
postFire = ee.Image(ee.ImageCollection('USDA/NAIP/DOQQ')
.filter(ee.Filter.date('2017-01-01', '2019-12-31'))
.select(['R', 'G', 'B', 'N'])
.filterBounds(finalBounds)
.mosaic());
preFire = preFire.clip(finalBounds)
postFire = postFire.clip(finalBounds)
trueColorVis = {
'min': 0.0,
'max': 255.0,
}
# visualize santa rosa building dataset overlaid on NAIP
Map = emap.Map(center=[38.4815,-122.7084], zoom=11)
Map.add_basemap('TERRAIN')
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'PreFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'Postfire')
Map.addLayer(finalBounds, {'color': 'white'}, 'bound', True, opacity=0.4)
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': test_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
# visualize socal building dataset overlaid on NAIP
Map = emap.Map(center=[34.0922,-118.8058], zoom=11)
Map.add_basemap('TERRAIN')
Map.addLayer(preFire, trueColorVis, 'PreFire');
Map.addLayer(postFire, trueColorVis, 'Postfire');
Map.addLayer(finalBounds, {'color': 'white'}, 'Postfire', True, opacity = 0.8);
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': test_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Feature Calculation
(1) NDVI: (NIR-R)/(NIR+R)
(2) Canny edge detection to extract structural information from different vision objects and dramatically reduce the amount of data to be processed.
(3) Bare Soil Index: (R+B-G)/(R+G+B)
(4) Shadow Index: $\sqrt{(256-B)*(256-G)}$
(5) Texture Information: GLCM & spatial association of neighborhood
(6) Convolutional filters

NDVI
###Code
def getNDVI(image):
'''
Add Normalized Differenced Vegetation Index using NIR and Red bands
'''
nir = image.select('N')
red = image.select('R')
ndvi = nir.subtract(red).divide(nir.add(red)).rename('NDVI')
new_image = image.addBands(ndvi)
return new_image
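
# An equivalent, more concise variant is sketched below for clarity only
# (it assumes the standard ee.Image.normalizedDifference API, which computes
# (b1 - b2) / (b1 + b2) for the two named bands); it is not used further on.
def getNDVI_nd(image):
    '''
    Illustrative alternative to getNDVI(); adds the same 'NDVI' band.
    '''
    ndvi = image.normalizedDifference(['N', 'R']).rename('NDVI')
    return image.addBands(ndvi)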
preFire = getNDVI(preFire)
postFire = getNDVI(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI']
###Markdown
Edge Detection
###Code
def edgeDetection(image, band):
'''
Perform Canny edge detection and add to image.
'''
canny = ee.Algorithms.CannyEdgeDetector(**{
'image': image.select(band), 'threshold': 50, 'sigma': 1
})
new_image = image.addBands(canny.rename('edge'))
return new_image
preFire = edgeDetection(preFire, 'R')
postFire = edgeDetection(postFire, 'R')
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge']
###Markdown
Bare Soil Index (BSI)
###Code
def bareSoil(image):
'''
Add Bare Soil Index Index using the Red, Blue, and Green bands
'''
red = image.select('R')
blue = image.select('B')
green = image.select('G')
BSI = red.add(blue).subtract(green).divide(red.add(blue).add(green)).rename('BSI')
new_image = image.addBands(BSI)
return new_image
preFire = bareSoil(preFire)
postFire = bareSoil(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI']
###Markdown
Shadow Index
###Code
def shadowIndex(image):
'''
Add Shadow Index using Blue and Green bands
'''
SI = image.expression(
'sqrt((256 - B) * (256 - G))', {
'B': image.select('B'),
'G': image.select('G')
}).rename('SI');
new_image = image.addBands(SI)
return new_image
preFire = shadowIndex(preFire)
postFire = shadowIndex(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI']
###Markdown
Texture
Get texture values from the NIR band: (1) compute entropy within a defined neighborhood, (2) compute the gray-level co-occurrence matrix (GLCM) to get contrast, (3) compute local Geary's C, a measure of spatial association. [Source code](https://github.com/giswqs/earthengine-py-notebooks/blob/master/Image/texture.ipynb)
###Code
import math
def texture(image):
'''
Get texture values with NIR band.
(1) compute entropy w. defined neighborhood,
(2) gray-level co-occurence matrix (GLCM) to get contrast,
(3) local Geary's C, measure of spatial association
'''
# Get the NIR band.
nir = image.select('N')
# Define a neighborhood with a kernel.
square = ee.Kernel.square(**{'radius': 4})
# Compute entropy and display.
entropy = nir.entropy(square)
# Compute the gray-level co-occurrence matrix (GLCM), get contrast.
glcm = nir.glcmTexture(**{'size': 4})
contrast = glcm.select('N_contrast')
# Create a list of weights for a 9x9 kernel.
list = [1, 1, 1, 1, 1, 1, 1, 1, 1]
# The center of the kernel is zero.
centerList = [1, 1, 1, 1, 0, 1, 1, 1, 1]
# Assemble a list of lists: the 9x9 kernel weights as a 2-D matrix.
lists = [list, list, list, list, centerList, list, list, list, list]
# Create the kernel from the weights.
# Non-zero weights represent the spatial neighborhood.
kernel = ee.Kernel.fixed(9, 9, lists, -4, -4, False)
# Convert the neighborhood into multiple bands.
neighs = nir.neighborhoodToBands(kernel)
# Compute local Geary's C, a measure of spatial association.
gearys = nir.subtract(neighs).pow(2).reduce(ee.Reducer.sum()) \
.divide(math.pow(9, 2)).rename('texture');
new_image = image.addBands(gearys)
return new_image
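
# A minimal client-side sketch of the same Geary's-C-style statistic, using
# NumPy/SciPy on a small random array purely for illustration (the GEE code
# above does this server-side on the NIR band with a 9x9 neighborhood);
# 'demo', 'local_geary', and 'texture_demo' are illustrative names only.
import numpy as np
from scipy.ndimage import generic_filter
def local_geary(window):
    # window is the flattened 9x9 neighborhood; the middle element is the center pixel
    center = window[len(window) // 2]
    return np.sum((center - window) ** 2) / 81.0
demo = np.random.rand(20, 20)  # stand-in for a small NIR patch
texture_demo = generic_filter(demo, local_geary, size=9, mode='reflect')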
preFire = texture(preFire)
postFire = texture(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI', 'texture']
###Markdown
GLCM Texture
GLCM texture list (selection in bold):
- Angular Second Moment: measure of repeated pairs
- **Contrast**: local contrast
- **Correlation**: correlation between pairs of pixels
- **Variance**: spread-out of the grayscale
- **Inverse Difference Moment**: homogeneity
- sum average
- sum variance
- sum entropy
- entropy: randomness of the grayscale
- difference variance
- difference entropy
- information measure of correlation 1, 2, and Max Corr. Coefficient
- **dissimilarity**
- inertia
- **cluster shade**
- cluster prominence
###Code
def glcm_texture(image):
'''
add some texture calculations for each spectral band (contrast and variance only for NIR and Red band)
'''
#average the directional bands
#consider a neighborhood of 4 pixels
texture_img=image.select(['R','G','B','N']).glcmTexture(size=4,average=True)
#select some useful textures :
selection=['N_corr','N_var', 'B_shade','N_shade']
new_image = image.addBands(texture_img.select(selection))
return new_image
preFire = glcm_texture(preFire)
postFire = glcm_texture(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI', 'texture', 'N_corr', 'N_var', 'B_shade', 'N_shade']
###Markdown
Convolution Layers (tuned with best visual performance)
- low-pass convolutional filter (Gaussian)
- high-pass filter and gradient (has been represented by canny edge detection above)
- shape-sensitive filter (rectangle, octagon)
- manhattan kernel based on rectilinear (city-block) distance
###Code
def conv_filter(image):
'''
apply gaussian, octagon, and manhattan convolutional filters to the image
'''
#define filters
#Gaussian
gauss=ee.Kernel.gaussian(radius=7, sigma=2, units='pixels', normalize=True)
# #define a 19 by 11 rectangle low pass filter
# low_pass_rect1 = ee.Kernel.rectangle(xRadius=9,yRadius=5, units='pixels', normalize=True);
# #the opposite way
# low_pass_rect2 = ee.Kernel.rectangle(xRadius=5,yRadius=9, units='pixels', normalize=True);
#octagon
low_oct = ee.Kernel.octagon(radius=5, units='pixels', normalize=True);
#manhattan
manha=ee.Kernel.manhattan(radius=4, units='pixels', normalize=True)
new_image=image
filt_dict={'gauss':gauss,'low_oct':low_oct,'manha':manha}
for name,filt in filt_dict.items():
smooth=image.select(['R','G','B','N']).convolve(filt).rename(['R_'+name,'G_'+name,'B_'+name,'N_'+name])
new_image = new_image.addBands(smooth)
return new_image
preFire = conv_filter(preFire)
postFire = conv_filter(postFire)
print(preFire.bandNames().getInfo())
###Output
['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI', 'texture', 'N_corr', 'N_var', 'B_shade', 'N_shade', 'R_gauss', 'G_gauss', 'B_gauss', 'N_gauss', 'R_low_oct', 'G_low_oct', 'B_low_oct', 'N_low_oct', 'R_manha', 'G_manha', 'B_manha', 'N_manha']
###Markdown
Visualize Indices
###Code
siViz = {'min': 0, 'max': 100, 'palette': ['ffff00', '330033']}
bsiViz = {'min': 0.0, 'max': 0.3, 'palette': ['7fffd4', 'b99879']}
ndviViz = {'min': -0.5, 'max': 0.5, 'palette': ['cc8e7f', '268b07']}
texViz = {'min': 0, 'max': 4000, 'palette': ['fe6b73', '7fffd4']}
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=16)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(preFire.select(['NDVI']),ndviViz, 'NDVI')
Map.addLayer(preFire.select(['SI']),siViz, 'SI')
Map.addLayer(preFire.select(['edge']),'', 'Canny')
Map.addLayer(preFire.select(['BSI']),bsiViz, 'BSI')
Map.addLayer(preFire.select(['texture']),texViz, 'texture')
Map.addLayer(train_data, {'color': 'yellow'}, 'training')
Map.addLayer(test_data, {'color': 'blue'}, 'testing')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Unsupervised Methods
(1) KMeans Clustering
(2) Learning Vector Quantization Clustering (LVQ)
(3) KMeans Segmentation
(4) Simple Non-Iterative Clustering Segmentation (SNIC)

KMeans Clustering
###Code
bands = ['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI', 'texture']
# Make the training dataset.
training = postFire.sample(**{
'region': finalBounds,
'scale': 10,
'numPixels': 5000
})
# Instantiate the clusterer and train it.
clusterer = ee.Clusterer.wekaKMeans(15).train(training)
# Cluster the input using the trained clusterer.
preFire_result = preFire.cluster(clusterer).rename('KMeans')
postFire_result = postFire.cluster(clusterer).rename('KMeans')
# add KMeans clustering
postFire = postFire.addBands(postFire_result)
preFire = preFire.addBands(preFire_result)
print(postFire.bandNames().getInfo())
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=19)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(postFire_result.randomVisualizer(),'', 'postFire_Kmeans',opacity=0.6)
Map.addLayer(preFire_result.randomVisualizer(),'', 'preFire_Kmeans', opacity=0.6)
Map.addLayer(train_data, {'color': 'yellow'}, 'training',opacity=0.4)
Map.addLayer(test_data, {'color': 'blue'}, 'testing',opacity=0.4)
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Learning Vector Quantization (LVQ) ClusteringT. Kohonen, "Learning Vector Quantization", The Handbook of Brain Theory and Neural Networks, 2nd Edition, MIT Press, 2003, pp. 631-634.**ee.Clusterer.wekaLVQ(numClusters, learningRate, epochs, normalizeInput)**
###Code
bands = ['R', 'G', 'B', 'N', 'NDVI', 'edge', 'BSI', 'SI', 'texture']
# Make the training dataset.
training = postFire.select(bands).sample(**{
'region': finalBounds,
'scale': 10,
'numPixels': 5000
})
# Instantiate the clusterer and train it.
clusterer = ee.Clusterer.wekaLVQ(15).train(training)
# Cluster the input using the trained clusterer.
preFire_result = preFire.select(bands).cluster(clusterer).rename('LVQ')
postFire_result = postFire.select(bands).cluster(clusterer).rename('LVQ')
# add layer
postFire = postFire.addBands(postFire_result)
preFire = preFire.addBands(preFire_result)
print(postFire.bandNames().getInfo())
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=18)
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(postFire_result.randomVisualizer(),'', 'postFire_LVQ',opacity=0.6)
Map.addLayer(preFire_result.randomVisualizer(),'', 'preFire_LVQ', opacity=0.6)
Map.addLayer(train_data, {'color': 'yellow'}, 'training',opacity=0.4)
Map.addLayer(test_data, {'color': 'blue'}, 'testing',opacity=0.4)
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
KMeans Segmentation
**Description:** Performs K-Means clustering on the input image. Outputs a 1-band image containing the ID of the cluster that each pixel belongs to.

**ee.Algorithms.Image.Segmentation.KMeans(image, numClusters, numIterations, neighborhoodSize, gridSize, forceConvergence, uniqueLabels)**

* **numClusters:** Number of clusters.
* **numIterations:** Number of iterations.
* **neighborhoodSize:** The amount to extend each tile (overlap) when computing the clusters. This option is mutually exclusive with gridSize.
* **gridSize:** If greater than 0, kMeans will be run independently on cells of this size. This has the effect of limiting the size of any cluster to be gridSize or smaller. This option is mutually exclusive with neighborhoodSize.
* **forceConvergence:** If true, an error is thrown if convergence is not achieved before numIterations.
* **uniqueLabels:** If true, clusters are assigned unique IDs. Otherwise, they repeat per tile or grid cell.
###Code
pre_kmeans = ee.Algorithms.Image.Segmentation.KMeans(preFire, 15, 1000, 20, 0, False, False)
pre_clusters = pre_kmeans.select('clusters').rename('KMeans_Seg')
post_kmeans = ee.Algorithms.Image.Segmentation.KMeans(postFire, 15, 1000, 20,0, False, False)
post_clusters = post_kmeans.select('clusters').rename('KMeans_Seg')
# add layer
postFire = postFire.addBands(post_clusters)
preFire = preFire.addBands(pre_clusters)
print(preFire.bandNames().getInfo())
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=18)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(pre_clusters.randomVisualizer(),'', 'pre_clusters', opacity=0.6)
Map.addLayer(post_clusters.randomVisualizer(),'', 'post_clusters', opacity=0.6)
Map.addLayer(train_data, {'color': 'yellow'}, 'training',opacity=0.4)
Map.addLayer(test_data, {'color': 'blue'}, 'testing',opacity=0.4)
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Simple Non-Iterative Clustering (SNIC) Segmentation
**Description:** An improved version of non-parametric SLIC. Superpixel clustering based on SNIC (Simple Non-Iterative Clustering). Outputs a band of cluster IDs and the per-cluster averages for each of the input bands.

**ee.Algorithms.Image.Segmentation.SNIC(image, size, compactness, connectivity, neighborhoodSize, seeds)**

* **size:** The superpixel seed location spacing, in pixels. If a 'seeds' image is provided, no grid is produced.
* **compactness:** Compactness factor. Larger values cause clusters to be more compact (square). Setting this to 0 disables spatial distance weighting.
* **connectivity:** Connectivity. Either 4 or 8.
* **neighborhoodSize:** Tile neighborhood size (to avoid tile boundary artifacts). Defaults to 2 * size.
* **seeds:** If provided, any non-zero valued pixels are used as seed locations. Pixels that touch (as specified by 'connectivity') are considered to belong to the same cluster.
###Code
def expandSeeds(seeds):
seeds = seeds.unmask(0).focal_max()
return seeds.updateMask(seeds)
seeds = ee.Algorithms.Image.Segmentation.seedGrid(30)
pre_snic = ee.Algorithms.Image.Segmentation.SNIC(preFire, 30, 15, 8, 200, seeds).select(["R_mean", "G_mean", "B_mean", "N_mean", "clusters"], ["R", "G", "B", "N", "clusters"])
pre_clusters = pre_snic.select('clusters').rename('SNIC')
post_snic = ee.Algorithms.Image.Segmentation.SNIC(postFire, 30, 15, 8)
post_clusters = post_snic.select('clusters').rename('SNIC')
# add layer
postFire = postFire.addBands(post_clusters)
preFire = preFire.addBands(pre_clusters)
print(preFire.bandNames().getInfo())
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=18)
Map.addLayer(pre_clusters.randomVisualizer(),'' , "clusters")
Map.addLayer(post_clusters.randomVisualizer(),'', 'postFire_SNIC', opacity=0.6)
Map.addLayer(expandSeeds(seeds), {}, 'seeds')
Map.addLayer(train_data, {'color': 'yellow'}, 'training',opacity=0.4)
Map.addLayer(test_data, {'color': 'blue'}, 'testing',opacity=0.4)
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Filter Imagery
Exclude areas with vegetation, i.e. keep only imagery areas with urban land and bare soil.
###Code
hansenImage = ee.Image('UMD/hansen/global_forest_change_2015')
def applyMask(imageryA, imageryB, hansenImage):
'''
Mask out all vegetation and water from imagery from pre disaster values.
'''
NDVIMaskB = imageryB.select('NDVI').lt(0.005)
dataMask = hansenImage.select('datamask')
waterMask = dataMask.eq(1)
imageryA = imageryA.updateMask(NDVIMaskB)
new_imagery = imageryA.updateMask(waterMask)
return new_imagery
postFire_filt = applyMask(postFire, preFire, hansenImage)
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=11)
Map.addLayer(postFire_filt.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Supervised Classification
(1) Train Data
(2) CART
(3) SVM
(4) Random Forest

Train Data
###Code
# get layer names
print(postFire_filt.bandNames().getInfo())
bands = ['R', 'G', 'B', 'N', 'NDVI', 'BSI', # Bands & Indices
'SNIC', # Clustering, Segmentation
'N_corr', 'B_shade', 'B_gauss', # GLCM Texture
'R_manha','R_low_oct'] # Convolution
training = postFire_filt.select(bands).sampleRegions(**{
'collection': train_data,
'properties': ['class'],
'scale': 10
});
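
# Quick sanity check (a sketch): how many labelled samples were drawn and which
# properties each training feature carries (the selected bands plus 'class').
print(training.size().getInfo())
print(training.first().propertyNames().getInfo())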
###Output
_____no_output_____
###Markdown
CART Classifier
###Code
# Train a CART classifier with default parameters.
classifier = ee.Classifier.smileCart().train(training, 'class', bands);
# Classify the image with the same bands used for training.
postFire_classified = postFire_filt.select(bands).classify(classifier);
#preFire_classified = preFire_filt.select(bands).classify(trained);
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
class_explain = classifier.explain()
variable_importance = ee.Feature(None, ee.Dictionary(class_explain).get('importance'))
variable_importance.getInfo()
import json
import matplotlib.pylab as plt
import seaborn as sns
import pandas.util.testing as tm
sns.set(style="whitegrid")
sns.set_color_codes("pastel")
var_dict = variable_importance.getInfo()
lists = sorted(var_dict['properties'].items(), key = lambda kv:(kv[1], kv[0]), reverse=True)
var = [i[0] for i in lists]
values = [i[1] for i in lists]
d = pd.DataFrame({'Variables':var,'Values':values})
sns.barplot('Values', 'Variables', data = d, label="Variables", color="b")
plt.tight_layout()
plt.savefig("Figures/CART_feature_imp.png", dpi=250)
###Output
_____no_output_____
###Markdown
Validation
###Code
validation = postFire_classified.sampleRegions(**{
'collection': test_data,
'properties': ['class'],
'scale': 10,
})
testAccuracy = validation.errorMatrix('class', 'classification');
print("Test Accuracy: ", testAccuracy.accuracy().getInfo())
print("Kappa Accuracy: ", testAccuracy.kappa().getInfo())
print("Producer Accuracy: ", testAccuracy.producersAccuracy().getInfo())
print("Consumers Accuracy(): ", testAccuracy.consumersAccuracy().getInfo())
###Output
Test Accuracy: 0.8306339904102291
Kappa Accuracy: 0.4481339614555388
Producer Accuracy: [[0.857304146020197], [0.6802263883975946]]
Consumers Accuracy(): [[0.9379632171287401, 0.45807527393997144]]
###Markdown
Classification Visual
###Code
class_palette = ['bff7ff','ff9900']
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=11)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(postFire_classified,
{'palette': class_palette, 'min': 0, 'max':1},
'postFire_classification')
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': train_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Support Vector Machine Classifier
###Code
# Create an SVM classifier with custom parameters.
classifier = ee.Classifier.libsvm(**{
'kernelType': 'RBF'
}).train(training, 'class', bands)
# Classify the image.
postFire_classified = postFire_filt.select(bands).classify(classifier)
###Output
_____no_output_____
###Markdown
Validation
###Code
validation = postFire_classified.sampleRegions(**{
'collection': test_data,
'properties': ['class'],
'scale': 10,
})
testAccuracy = validation.errorMatrix('class', 'classification');
print("Test Accuracy: ", testAccuracy.accuracy().getInfo())
print("Kappa Accuracy: ", testAccuracy.kappa().getInfo())
print("Producer Accuracy: ", testAccuracy.producersAccuracy().getInfo())
print("Consumers Accuracy(): ", testAccuracy.consumersAccuracy().getInfo())
###Output
Test Accuracy: 0.9042834479111581
Kappa Accuracy: 0.5850923650769214
Producer Accuracy: [[0.9904336734693877], [0.48606811145510836]]
Consumers Accuracy(): [[0.9034322280395579, 0.9127906976744186]]
###Markdown
Classification Visual
###Code
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=11)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(postFire_classified,
{'palette': class_palette, 'min': 0, 'max':1},
'postFire_classification')
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': train_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Random Forest
ee.Classifier.randomForest(numberOfTrees, variablesPerSplit, minLeafPopulation, bagFraction, outOfBagMode, seed) (the code below uses the equivalent SMILE implementation, ee.Classifier.smileRandomForest)

Classifier
###Code
# Create a Random Forest classifier with custom parameters.
classifier = ee.Classifier.smileRandomForest(**{
'numberOfTrees': 100
}).train(training, 'class', bands)
postFire_classified = postFire_filt.select(bands).classify(classifier)
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
class_explain = classifier.explain()
variable_importance = ee.Feature(None, ee.Dictionary(class_explain).get('importance'))
variable_importance.getInfo()
sns.set(style="whitegrid")
sns.set_color_codes("pastel")
var_dict = variable_importance.getInfo()
lists = sorted(var_dict['properties'].items(), key = lambda kv:(kv[1], kv[0]), reverse=True)
var = [i[0] for i in lists]
values = [i[1] for i in lists]
d = pd.DataFrame({'Variables':var,'Values':values})
sns.barplot('Values', 'Variables', data = d, label="Variables", color="b")
plt.tight_layout()
plt.savefig("Figures/RF_feature_imp.png", dpi=250)
###Output
_____no_output_____
###Markdown
Validation
###Code
validation = postFire_classified.sampleRegions(**{
'collection': test_data,
'properties': ['class'],
'scale': 30,
})
testAccuracy = validation.errorMatrix('class', 'classification');
testAccuracy.array().getInfo()
print("Test Accuracy: ", testAccuracy.accuracy().getInfo())
print("Kappa Accuracy: ", testAccuracy.kappa().getInfo())
print("Producer Accuracy: ", testAccuracy.producersAccuracy().getInfo())
print("Consumers Accuracy(): ", testAccuracy.consumersAccuracy().getInfo())
accuracy = []
kappa = []
producer = []
consumer = []
scale = []
for i in range (10, 120+1, 10):
print(i)
validation = postFire_classified.sampleRegions(**{
'collection': test_data,
'properties': ['class'],
'scale': i,
})
testAccuracy = validation.errorMatrix('class', 'classification')
accuracy.append(testAccuracy.accuracy().getInfo())
kappa.append(testAccuracy.kappa().getInfo())
producer.append(testAccuracy.producersAccuracy().getInfo())
consumer.append(testAccuracy.consumersAccuracy().getInfo())
scale.append(i)
prod_0 = [i[0] for i in producer]
prod_0 = [i[0] for i in prod_0]
prod_1 = [i[1] for i in producer]
prod_1 = [i[0] for i in prod_1]
user_0 = [i[0] for i in consumer]
user_0 = [i[0] for i in user_0]
user_1 = [i[0] for i in consumer]
user_1 = [i[1] for i in user_1]
# plot effects of block scales on accuracy assessment
data = {'Scale (pixels)': scale,
'Accuracy (%)': accuracy,
'Kappa (%)': kappa,
'Producer (no-damage)': prod_0,
'Producer (damaged)': prod_1,
'User (no-damage)': user_0,
'User (damaged)': user_1,
}
scale_scores = pd.DataFrame.from_dict(data)
#scale_scores.to_csv('scale_scores.csv', index=False)
scale_scores = pd.read_csv(r'scale_scores.csv', index_col=None)
scale_scores = scale_scores[1:]
scale_scores
melted = pd.melt(scale_scores, id_vars=['Scale (pixels)'],
value_vars=['Accuracy (%)', 'Kappa (%)',
'Producer (damaged)', 'User (damaged)'],
var_name='Assessment', value_name='Accuracy (%)')
melted['Accuracy (%)'] = melted['Accuracy (%)'] * 100
melted
import seaborn as sns
import matplotlib.pylab as plt
sns.set(style="ticks")
colors = ["#f684a0", "#df6748", "#84a0f6", "#534c5c"]
sns.set_color_codes("pastel")
sns.scatterplot(x="Scale (pixels)", y="Accuracy (%)", hue = 'Assessment',
style = 'Assessment',data=melted,
palette = sns.color_palette(colors))
sns.lineplot(x="Scale (pixels)", y="Accuracy (%)", hue = 'Assessment',
style = 'Assessment',data=melted,
palette = sns.color_palette(colors), legend=False)
plt.savefig("Figures/RF_scale_accuracy.png", dpi=250)
###Output
_____no_output_____
###Markdown
Classification Visual
###Code
class_palette = ['bff7ff','ff9900']
Map = emap.Map(center=[38.50178453635526,-122.74843617724784], zoom=11)
Map.addLayer(preFire.select(['R', 'G', 'B']), trueColorVis, 'preFire')
Map.addLayer(postFire.select(['R', 'G', 'B']), trueColorVis, 'postFire')
Map.addLayer(postFire_classified,
{'palette': class_palette, 'min': 0, 'max':1},
'postFire_classification')
Map.addLayer(visualizeByAttribute(train_data, 'class'), {'palette': train_palette, 'min': 0, 'max':1}, 'train')
Map.addLayer(visualizeByAttribute(test_data, 'class'), {'palette': train_palette,'min': 0, 'max':1}, 'test')
Map.addLayerControl()
Map
###Output
_____no_output_____
###Markdown
Image Extraction
To run XGBoost and the neural networks locally, extract images through the GEE API using the building polygons. Extracted images (with all features) will be saved in Google Drive. Reading these extracted images, as well as training XGBoost and the neural net, will be done in separate Jupyter notebooks.

__Output images will have all 43 features at a resolution of 1 meter.__

To reduce image size (due to the export limit of the GEE API): each polygon converted from the XView annotations has recorded its original image ID. Based on these, for each id, clip the pre- and post-fire images respectively by the polygons with the same id.

**The output image would roughly have the same extent as its original image.**
###Code
#make sure all features (each band of the image has the same data type)
preFire=ee.Image.float(preFire)
postFire=ee.Image.float(postFire)
#The whole pre- and post-fire images cover two separate areas; first use the geo dataframe to subset Santa Rosa
#Santa Rosa
train_data_SR=train_data.filter(ee.Filter.eq('location_n', 'santa-rosa-wildfire'))
santaRosa = gdf.query("location_n == 'santa-rosa-wildfire'")
ID_list=santaRosa.ID.unique()
#convert ground truth polygons to image
SR_true_pre = ee.Image.byte(train_data_SR.filter(ee.Filter.eq('pre_post_d','pre')).reduceToImage(properties=['dmg_code'],reducer=ee.Reducer.first()))
SR_true_post = ee.Image.byte(train_data_SR.filter(ee.Filter.eq('pre_post_d','post')).reduceToImage(properties=['dmg_code'],reducer=ee.Reducer.first()))
#use the export function from GEE API to save both pre- and post-fire features and ground truth images
for index in ID_list:
try:
image_ROI=train_data_SR.filter(ee.Filter.eq('ID', index))
#post
ROI_shp=santaRosa[(santaRosa['ID']==index)&(santaRosa['pre_post_d']=='post')]
image_bound=ee.Geometry.Polygon(get_bounds(ROI_shp))
task1=ee.batch.Export.image.toDrive(image=postFire, description='post_'+index, folder='NAIP_img_new', region=image_bound, scale=1)
task2=ee.batch.Export.image.toDrive(image=SR_true_post, description='post_'+index+'gt', folder='NAIP_img_new', region=image_bound, scale=1)
task1.start()
task2.start()
#print(task2.status())
#pre
ROI_shp=santaRosa[(santaRosa['ID']==index)&(santaRosa['pre_post_d']=='pre')]
image_bound=ee.Geometry.Polygon(get_bounds(ROI_shp))
task3=ee.batch.Export.image.toDrive(image=preFire, description='pre_'+index, folder='NAIP_img_new', region=image_bound, scale=1)
task4=ee.batch.Export.image.toDrive(image=SR_true_pre, description='pre_'+index+'gt', folder='NAIP_img_new', region=image_bound, scale=1)
task3.start()
task4.start()
#print(task3.status())
#if index=='00000376':
# print('done!')
except:
continue
###Output
_____no_output_____ |
MandMs/02_Facies_classification-MandMs_plurality_voting_classifier.ipynb | ###Markdown
Facies classification using plurality voting (e.g. multiclass majority voting)

Contest entry by: Matteo Niccoli and Mark Dahl

[Original contest notebook](../Facies_classification.ipynb) by Brendon Hall, [Enthought](https://www.enthought.com/)

The code and ideas in this notebook, by Matteo Niccoli and Mark Dahl, are licensed under a Creative Commons Attribution 4.0 International License.

In this notebook we will attempt to predict facies from well log data using machine learning classifiers. The dataset comes from a class exercise from The University of Kansas on [Neural Networks and Fuzzy Systems](http://www.people.ku.edu/~gbohling/EECS833/). This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see [Bohling and Dubois (2003)](http://www.kgs.ku.edu/PRS/publication/2003/ofr2003-50.pdf) and [Dubois et al. (2007)](http://dx.doi.org/10.1016/j.cageo.2006.08.011). The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train our classifiers.

The plan
We will create three classifiers with pretuned parameters:
- the best SVM in the competition (our team's SVM submission)
- the best Random Forest in the competition (from the leading submission, by gccrowther)
- a multilayer perceptron (from previous notebooks, not submitted)

We will then try to predict the facies using a plurality voting approach (plurality voting = multi-class majority voting). From the [scikit-learn website](http://scikit-learn.org/stable/modules/ensemble.html#voting-classifier): "The idea behind the voting classifier implementation is to combine conceptually different machine learning classifiers and use a majority vote or the average predicted probabilities (soft vote) to predict the class labels. Such a classifier can be useful for a set of equally well performing model in order to balance out their individual weaknesses".

Exploring the dataset
First, we will examine the data set we will use to train the classifier.
###Code
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
import pandas as pd
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
from sklearn import preprocessing
from sklearn.metrics import f1_score, accuracy_score, make_scorer
from sklearn.model_selection import LeaveOneGroupOut
filename = 'facies_vectors.csv'
training_data = pd.read_csv(filename)
training_data
###Output
_____no_output_____
###Markdown
This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate.

The seven predictor variables are:
* Five wire line log curves include [gamma ray](http://petrowiki.org/Gamma_ray_logs) (GR), [resistivity logging](http://petrowiki.org/Resistivity_and_spontaneous_%28SP%29_logging) (ILD_log10), [photoelectric effect](http://www.glossary.oilfield.slb.com/en/Terms/p/photoelectric_effect.aspx) (PE), [neutron-density porosity difference and average neutron-density porosity](http://petrowiki.org/Neutron_porosity_logs) (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)

The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)

These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.

Facies | Label | Adjacent Facies
:---: | :---: | :--:
1 | SS | 2
2 | CSiS | 1,3
3 | FSiS | 2
4 | SiSh | 5
5 | MS | 4,6
6 | WS | 5,7
7 | D | 6,8
8 | PS | 6,7,9
9 | BS | 7,8

Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type.
###Code
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
###Output
_____no_output_____
###Markdown
These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the `facies_vectors` dataframe.
###Code
# 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale
#5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD',
'#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
training_data.describe()
###Output
_____no_output_____
###Markdown
This is a quick view of the statistical distribution of the input variables. Looking at the `count` values, most values have 4149 valid values except for `PE`, which has 3232. We will drop the feature vectors that don't have a valid `PE` entry.
###Code
PE_mask = training_data['PE'].notnull().values
training_data = training_data[PE_mask]
training_data.describe()
###Output
_____no_output_____
###Markdown
Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, **and we are also using depth**. We also get a vector of the facies labels that correspond to each feature vector.
###Code
y = training_data['Facies'].values
print y[25:40]
print np.shape(y)
X = training_data.drop(['Formation', 'Well Name','Facies','FaciesLabels'], axis=1)
print np.shape(X)
X.describe(percentiles=[.05, .25, .50, .75, .95])
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
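# Sanity check (a sketch): after standardization each feature column should
# have mean ~0 and standard deviation ~1 over the training data.
print(X.mean(axis=0).round(3))
print(X.std(axis=0).round(3))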
###Output
_____no_output_____
###Markdown
Make performance scorersUsed to evaluate performance.
###Code
Fscorer = make_scorer(f1_score, average = 'micro')
###Output
_____no_output_____
###Markdown
Pre-tuned SVM classifier and leave-one-well-out average F1 score
This is the Support Vector Machine classifier from our [first submission](https://github.com/mycarta/2016-ml-contest/blob/master/MandMs/Facies_classification-M%26Ms_SVM_rbf_kernel_optimal.ipynb).
###Code
from sklearn import svm
SVC_classifier = svm.SVC(C = 100, cache_size=2400, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf',
max_iter=-1, probability=True, random_state=49, shrinking=True,
tol=0.001, verbose=False)
f1_svc = []
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
SVC_classifier.fit(X[train], y[train])
pred_svc = SVC_classifier.predict(X[test])
sc = f1_score(y[test], pred_svc, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_svc.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_svc)/(1.0*(len(f1_svc))))
###Output
CHURCHMAN BIBLE 0.542
CROSS H CATTLE 0.347
LUKE G U 0.440
NEWBY 0.400
NOLAN 0.494
Recruit F9 0.721
SHANKLE 0.483
SHRIMPLIN 0.590
-Average leave-one-well-out F1 Score: 0.502174
###Markdown
Pre-tuned multi-layer perceptron classifier and average F1 score
###Code
from sklearn.neural_network import MLPClassifier
mlp_classifier = MLPClassifier(activation='logistic', alpha=0.01, batch_size='auto',
beta_1=0.9, beta_2=0.999, early_stopping=False, epsilon=1e-08,
hidden_layer_sizes=(100,), learning_rate='adaptive',
learning_rate_init=0.001, max_iter=1000, momentum=0.9,
nesterovs_momentum=True, power_t=0.5, random_state=49, shuffle=True,
solver='adam', tol=0.0001, validation_fraction=0.1, verbose=False,
warm_start=False)
f1_mlp = []
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
mlp_classifier.fit(X[train], y[train])
pred_mlp = mlp_classifier.predict(X[test])
sc = f1_score(y[test], pred_mlp, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_mlp.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_mlp)/(1.0*(len(f1_mlp))))
###Output
CHURCHMAN BIBLE 0.525
CROSS H CATTLE 0.341
LUKE G U 0.419
NEWBY 0.415
NOLAN 0.482
Recruit F9 0.779
SHANKLE 0.541
SHRIMPLIN 0.575
-Average leave-one-well-out F1 Score: 0.509666
###Markdown
Pre-tuned extra trees
This is the tree-ensemble classifier with the parameters tuned in the leading submission, by George Crowther, but without his engineered features.
###Code
from sklearn.pipeline import make_pipeline
from sklearn.feature_selection import VarianceThreshold
from sklearn.ensemble import ExtraTreesClassifier
ET_classifier = make_pipeline(
VarianceThreshold(threshold=0.49),
ExtraTreesClassifier(criterion="entropy", max_features=0.71,
n_estimators=500, random_state=49))
f1_ET = []
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
ET_classifier.fit(X[train], y[train])
pred_cv = ET_classifier.predict(X[test])
sc = f1_score(y[test], pred_cv, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_ET.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_ET)/(1.0*(len(f1_ET))))
###Output
CHURCHMAN BIBLE 0.498
CROSS H CATTLE 0.337
LUKE G U 0.434
NEWBY 0.408
NOLAN 0.494
Recruit F9 0.912
SHANKLE 0.486
SHRIMPLIN 0.614
-Average leave-one-well-out F1 Score: 0.522719
###Markdown
Plurality voting classifier (multi-class majority voting)We will use a weighted approach, where the weights are somewhat arbitrary, but their proportion is based on the average f1 score of the individual classifiers.
###Code
from sklearn.ensemble import VotingClassifier
eclf_cv = VotingClassifier(estimators=[
('SVC', SVC_classifier), ('MLP', mlp_classifier), ('ET', ET_classifier)],
voting='soft', weights=[0.3,0.33,0.37])
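
# For reference, a sketch of how weights proportional to the leave-one-well-out
# average F1 scores computed above could be derived; the 0.3/0.33/0.37 used
# here were picked by hand, but follow roughly the same proportions.
avg_f1 = np.array([sum(f1_svc) / len(f1_svc),
                   sum(f1_mlp) / len(f1_mlp),
                   sum(f1_ET) / len(f1_ET)])
print(avg_f1 / avg_f1.sum())  # ~[0.327, 0.332, 0.341] given the averages printed above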
###Output
_____no_output_____
###Markdown
Leave one-well-out F1 scores
###Code
f1_ens = []
wells = training_data["Well Name"].values
logo = LeaveOneGroupOut()
for train, test in logo.split(X, y, groups=wells):
well_name = wells[test[0]]
eclf_cv.fit(X[train], y[train])
pred_cv = eclf_cv.predict(X[test])
sc = f1_score(y[test], pred_cv, labels = np.arange(10), average = 'micro')
print("{:>20s} {:.3f}".format(well_name, sc))
f1_ens.append(sc)
print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_ens)/(1.0*(len(f1_ens))))
###Output
CHURCHMAN BIBLE 0.554
CROSS H CATTLE 0.351
LUKE G U 0.451
NEWBY 0.400
NOLAN 0.501
Recruit F9 0.912
SHANKLE 0.519
SHRIMPLIN 0.603
-Average leave-one-well-out F1 Score: 0.536423
###Markdown
Comments
Using the average F1 score from the leave-one-well-out cross validation as a metric, the majority voting is superior to the individual classifiers, including the pre-tuned Random Forest from the leading submission. However, the Random Forest in the official leading submission was trained using additional new features engineered by George, and outperforms our majority voting classifier, with an F1 score of 0.580 against our 0.579. A clear indication, in our view, that feature engineering is a key element in achieving the best possible prediction.

Predicting, displaying, and saving facies for blind wells
###Code
blind = pd.read_csv('validation_data_nofacies.csv')
X_blind = np.array(blind.drop(['Formation', 'Well Name'], axis=1))
X_blind = scaler.transform(X_blind)
y_pred = eclf_cv.fit(X, y).predict(X_blind)
blind['Facies'] = y_pred
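
# Quick look (a sketch) at how the predicted facies are distributed between the
# two blind wells before plotting them:
print(blind.groupby('Well Name')['Facies'].value_counts())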
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(blind[blind['Well Name'] == 'STUART'], facies_colors)
make_facies_log_plot(blind[blind['Well Name'] == 'CRAWFORD'], facies_colors)
np.save('ypred.npy', y_pred)
###Output
_____no_output_____
###Markdown
Displaying predicted versus original facies in the training data
This is a nice display to finish up with, as it gives us a visual idea of the predicted facies where we have facies from the core observations. For the plot we will use a function from the original notebook. Let's look at the well with the lowest F1 from the previous code block, CROSS H CATTLE, and the one with the highest F1 (excluding Recruit F9), which is SHRIMPLIN.
###Code
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
eclf_cv.fit(X,y)
pred = eclf_cv.predict(X)
X = training_data
X['Prediction'] = pred
compare_facies_plot(X[X['Well Name'] == 'CROSS H CATTLE'], 'Prediction', facies_colors)
compare_facies_plot(X[X['Well Name'] == 'SHRIMPLIN'], 'Prediction', facies_colors)
###Output
_____no_output_____ |
src/pubmed_disc_conc/TopicModelTest.ipynb | ###Markdown
Topic Model Test
This is a notebook for trying to use topic models to classify sets of text that are more syntactically similar than topically similar. This notebook attempts to distinguish between the discussion and conclusion sections of scientific papers. Below we load the dataset for use.
###Code
from __future__ import print_function
from time import time
import os
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import NMF, LatentDirichletAllocation
from sklearn.cross_validation import train_test_split
import numpy as np
import pickle
validDocsDict = dict()
fileList = os.listdir("BioMedProcessed")
for f in fileList:
validDocsDict.update(pickle.load(open("BioMedProcessed/" + f, "rb")))
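
# Quick check (a sketch): how many sections were loaded and what the keys look
# like; the keys are expected to be prefixed with 'conclusion'/'discussion',
# which is what the preprocessing further below relies on.
print(len(validDocsDict))
print(list(validDocsDict.keys())[:3])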
###Output
_____no_output_____
###Markdown
Here we are setting some variables to be used below and defining a function for printing the top words in a topic for the topic modeling.
###Code
n_samples = len(validDocsDict.keys())
n_features = 1000
n_topics = 2
n_top_words = 30
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print(" ".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
###Output
_____no_output_____
###Markdown
Pre-process data
Here we are preprocessing the data for use later. This code only grabs the discussion and conclusion sections of the data. We also create the appropriate labels for the data and split the documents into train and test sets.
###Code
print("Loading dataset...")
t0 = time()
documents = []
labels = []
concLengthTotal = 0
discLengthTotal = 0
concCount = 0
discCount = 0
for k in validDocsDict.keys():
if k.startswith("conclusion"):
labels.append("conclusion")
documents.append(validDocsDict[k])
concCount += 1
concLengthTotal += len(validDocsDict[k].split(' '))
elif k.startswith("discussion"):
labels.append("discussion")
documents.append(validDocsDict[k])
discCount += 1
discLengthTotal += len(validDocsDict[k].split(' '))
print(len(documents))
print(concLengthTotal * 1.0/ concCount)
print(discLengthTotal * 1.0/ discCount)
train, test, labelsTrain, labelsTest = train_test_split(documents, labels, test_size = 0.1)
###Output
Loading dataset...
53034
621.583361617
1197.39683976
###Markdown
Here we are splitting the data up some more to train different models. Discussion and conclusion sections are put into their own training sets. A TFIDF vectorizer is trained on the whole dataset of conclusion AND discussion sections. The different training sets are then transformed using this vectorizer to get vector encodings of the text, normalized to sum to 1, which accounts for the differing lengths of conclusion and discussion sections.
###Code
trainSetOne = []
trainSetTwo = []
for x in range(len(train)):
if labelsTrain[x] == "conclusion":
trainSetOne.append(train[x])
else:
trainSetTwo.append(train[x])
# Use tf (raw term count) features for LDA.
print("Extracting tf features for LDA...")
tf_vectorizer = TfidfVectorizer(max_df=0.95, norm = 'l1', min_df=2, max_features=n_features, stop_words='english')
t0 = time()
tf = tf_vectorizer.fit_transform(train)
tfSetOne = tf_vectorizer.transform(trainSetOne)
tfSetTwo = tf_vectorizer.transform(trainSetTwo)
tfTest = tf_vectorizer.transform(test)
test = tfTest
train = tf
trainSetOne = tfSetOne
trainSetTwo = tfSetTwo
print("done in %0.3fs." % (time() - t0))
###Output
Extracting tf features for LDA...
done in 74.115s.
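###Markdown
A quick sanity check on the normalization described above (a minimal sketch, assuming the previous cell has run so that `tf` holds the L1-normalized TF-IDF matrix): with `norm='l1'`, each non-empty document row should sum to approximately 1, regardless of its original length.
###Code
# Hedged sketch: verify the L1 row normalization of the TF-IDF matrix `tf` built above.
row_sums = tf.sum(axis=1) # per-document sums of the sparse matrix
print(row_sums[:5]) # each entry should be close to 1.0 for non-empty documents
###Output
_____no_output_____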
###Markdown
LDA With Two TopicsDefine an LDA topic model on the whole data set with two topics. This is trying to see if the topic model can define the difference between the two groups automatically and prints the top words per topic.
###Code
print("Fitting LDA models with tf features, n_samples=%d and n_features=%d..."
% (n_samples, n_features))
lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=100,
learning_method='online', learning_offset=50.,
random_state=0)
t0 = time()
lda.fit(tf)
print("done in %0.3fs." % (time() - t0))
print("\nTopics in LDA model:")
tf_feature_names = tf_vectorizer.get_feature_names()
print_top_words(lda, tf_feature_names, n_top_words)
###Output
Fitting LDA models with tf features, n_samples=157526 and n_features=1000...
done in 354.955s.
Topics in LDA model:
Topic #0:
patients health study care 1016 authors risk manuscript treatment clinical data disease use research women patient hiv medical children competing history pre interests analysis publication population design quality pain age
Topic #1:
background expression gene cells genes cell protein cancer results different human studies activity used species levels model specific proteins present genetic method using dna genome role number data function observed
###Markdown
Transform the unknown data through the topic model and calculate which topic it is more associated with according to the ratios. Calculate how many of each type (conclusion and discussion) go into each topic (1 or 2).
###Code
results = lda.transform(test)
totalConTop1 = 0
totalConTop2 = 0
totalDisTop1 = 0
totalDisTop2 = 0
for x in range(len(results)):
val1 = results[x][0]
val2 = results[x][1]
total = val1 + val2
print(str(labelsTest[x]) + " " + str(val1/total) + " " + str(val2/total))
if val1 > val2:
if labelsTest[x] == "conclusion":
totalConTop1 += 1
else:
totalDisTop1 += 1
else:
if labelsTest[x] == "conclusion":
totalConTop2 += 1
else:
totalDisTop2 += 1
###Output
_____no_output_____
###Markdown
Print out the results from the topic transforms.
###Code
print("Total Conclusion Topic One: " + str(totalConTop1))
print("Total Conclusion Topic Two: " + str(totalConTop2))
print("Total Discussion Topic One: " + str(totalDisTop1))
print("Total Discussion Topic Two: " + str(totalDisTop2))
###Output
_____no_output_____
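###Markdown
A short follow-up (a hedged sketch, not part of the original analysis): the four counts printed above can be turned into a rough accuracy figure. The topic-to-section mapping is an assumption here; whichever assignment (topic 0 = conclusion or topic 0 = discussion) explains more of the test documents is taken as the best case.
###Code
# Hedged sketch: best-case unsupervised accuracy from the topic/label counts above.
total_docs = totalConTop1 + totalConTop2 + totalDisTop1 + totalDisTop2
mapping_a = totalConTop1 + totalDisTop2 # assume topic 0 -> conclusion, topic 1 -> discussion
mapping_b = totalConTop2 + totalDisTop1 # assume topic 0 -> discussion, topic 1 -> conclusion
if total_docs > 0:
    print("Best-case accuracy: " + str(max(mapping_a, mapping_b) * 1.0 / total_docs))
###Output
_____no_output_____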
###Markdown
Get the parameters for the LDA.
###Code
lda.get_params()
###Output
_____no_output_____
###Markdown
Basic ClassifiersTrain three basic classifiers to solve the problem. Try Gaussian, Bernoulli and K Nearest Neighbors classifiers and calculate how accurate they are.
###Code
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(train.toarray(), labelsTrain)
classResults = classifier.predict(test.toarray())
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(train.toarray(), labelsTrain)
classResults = classifier.predict(test.toarray())
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.naive_bayes import BernoulliNB
classifier = BernoulliNB()
classifier.fit(train.toarray(), labelsTrain)
classResults = classifier.predict(test.toarray())
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(train, labelsTrain)
classResults = classifier.predict(test)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
###Output
0.743212669683
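###Markdown
A side note (a hedged sketch, not a change to the workflow above): the manual counting loop used to score each classifier computes the same fraction of correct predictions as `sklearn.metrics.accuracy_score`, which can replace it.
###Code
# Hedged sketch: equivalent accuracy computation with scikit-learn's helper,
# reusing the labelsTest and classResults variables from the cell above.
from sklearn.metrics import accuracy_score
print(accuracy_score(labelsTest, classResults))
###Output
_____no_output_____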
###Markdown
Decision TreesDecision trees work well for binary classification and require little data prep
###Code
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(train.toarray(), labelsTrain)
classResults = classifier.predict(test.toarray())
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
###Output
0.942873303167
###Markdown
SVM
###Code
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier()
classifier.fit(train, labelsTrain)
classResults = classifier.predict(test)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
###Output
0.863122171946
###Markdown
Two Topic ModelsDefine two topic models with 20 topics each, one on discussion sections and one on conclusion sections. Then transform both the train and test sets using both topic models to get 40 features for each sample based on the probability distribution for each topic in each LDA.
###Code
ldaSet1 = LatentDirichletAllocation(n_topics=20, max_iter=100,
learning_method='online', learning_offset=50.,
random_state=0)
ldaSet2 = LatentDirichletAllocation(n_topics=20, max_iter=100,
learning_method='online', learning_offset=50.,
random_state=0)
ldaSet1.fit(trainSetOne)
print_top_words(ldaSet1, tf_feature_names, n_top_words)
ldaSet2.fit(trainSetTwo)
print_top_words(ldaSet2, tf_feature_names, n_top_words)
results1 = ldaSet1.transform(train)
results2 = ldaSet2.transform(train)
resultsTest1 = ldaSet1.transform(test)
resultsTest2 = ldaSet2.transform(test)
results = np.hstack((results1, results2))
resultsTest = np.hstack((resultsTest1, resultsTest2))
###Output
_____no_output_____
###Markdown
Define two classifiers using the transformed train and test sets from the topic models. Print out the accuracy of each one.
###Code
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
###Output
0.588989441931
###Markdown
Normalize the results of each sample of 40 features so they sum to 1. Then train two more classifiers using the data and print out the accuracy of each.
###Code
for x in range(len(results)):
total = 0
for y in range(len(results[x])):
total += results[x][y]
for y in range(len(results[x])):
results[x][y] = results[x][y]/total
for x in range(len(resultsTest)):
total = 0
for y in range(len(resultsTest[x])):
total += resultsTest[x][y]
for y in range(len(resultsTest[x])):
resultsTest[x][y] = resultsTest[x][y]/total
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier()
classifier.fit(results, labelsTrain)
classResults = classifier.predict(resultsTest)
numRight = 0
for item in range(len(classResults)):
if classResults[item] == labelsTest[item]:
numRight += 1
print(str(numRight * 1.0 / len(classResults) * 1.0))
###Output
0.588989441931
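###Markdown
A brief aside (a hedged sketch, assuming `results` and `resultsTest` are 2-D NumPy arrays, which is what `np.hstack` returns): the row-wise normalization done with nested loops above can be written as a single vectorized step.
###Code
# Hedged sketch: vectorized row normalization equivalent to the loops above.
import numpy as np
results_norm = results / results.sum(axis=1, keepdims=True)
resultsTest_norm = resultsTest / resultsTest.sum(axis=1, keepdims=True)
print(results_norm.sum(axis=1)[:5]) # each row should now sum to 1.0
###Output
_____no_output_____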
|
keras_tiny_yolo3_train_20200923.ipynb | ###Markdown
Download keras-yolo3 projectGitHub (upstream): https://github.com/sleepless-se/keras-yolo3 My fork: https://github.com/tsuna-can/yolo-test.git Clone sleepless-se/keras-yolo3.git
###Code
# my fork, already switched to the tiny-YOLO version
!git clone https://github.com/tsuna-can/yolo-test.git
%cd yolo-test
###Output
_____no_output_____
###Markdown
Install requirementsDon't forget to restart the runtime after the installation finishes.
###Code
!pip install -r requirements.txt
###Output
_____no_output_____
###Markdown
Upload VoTT export file and directory (.zip)Please upload your Archive.zip
###Code
%cd VOCDevkit/VOC2007
%cd /content/yolo-test/VOCDevkit/VOC2007
from google.colab import files
uploaded = files.upload()
!ls
###Output
_____no_output_____
###Markdown
Unzip and make train *files*
###Code
!unzip Archive
%cd /content/yolo-test/
!python make_train_files.py
###Output
_____no_output_____
###Markdown
Convert annotations for YOLOPlease set your *classes*. The class names are passed as command-line arguments.
###Code
!python voc_annotation.py tree tree_white
###Output
_____no_output_____
###Markdown
Train model
###Code
!python train.py
###Output
_____no_output_____
###Markdown
Download trained weights and classes fileNote that the download may sometimes be blocked.
###Code
#weight
trained = 'logs/000/trained_weights_final.h5'
files.download(trained)
# class names
classes = "model_data/voc_classes.txt"
files.download(classes)
#train
train_imgs = "model_data/2007_train.txt"
files.download(train_imgs)
#val
val_imgs = "model_data/2007_val.txt"
files.download(val_imgs)
#test
test_imgs = "model_data/2007_test.txt"
files.download(test_imgs)
###Output
_____no_output_____
###Markdown
Predict by new modelThe result is saved as result.jpg. If you restart the kernel, change into the project directory, edit voc_classes.txt, and upload the weight file to logs/000/. Since the weight file can become corrupted, always upload it via the upload button.
###Code
!python tiny_yolo_video.py --image
###Output
_____no_output_____ |
notebooks_for_models/1969/model5_penalized_svm_1969.ipynb | ###Markdown
Purpose: Try different models-- Part5. Penalized_SVM.
###Code
# import dependencies.
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
STEP1: Read in dataset. Remove data from 2016-2019.- data from 2016-2018 will be used to back-test the model.- data from 2019 will be used to predict the winners of the 2019 WS.
###Code
# read in the data.
team_data = pd.read_csv("../../Resources/clean_data_1969.csv")
del team_data["Unnamed: 0"]
team_data.head()
# remove data from 2016 through 2019.
team_data_new = team_data.loc[team_data["year"] < 2016]
team_data_new.head()
target = team_data_new["winners"]
features = team_data_new.drop({"team", "year", "winners"}, axis=1)
feature_columns = list(features.columns)
print (target.shape)
print (features.shape)
print (feature_columns)
###Output
(1266,)
(1266, 59)
['A', 'DP', 'E', 'G2', 'GS2', 'INN', 'PB', 'PO', 'TC', '2B', '3B', 'AB', 'AO', 'BB', 'CS', 'G', 'GDP', 'H', 'HBP', 'HR', 'IBB', 'NP_x', 'OBP', 'OPS_x', 'PA', 'R', 'RBI', 'SAC', 'SB', 'SF', 'SLG', 'SO', 'TB', 'XBH', 'BB1', 'BK', 'CG', 'ER', 'ERA', 'G1', 'GF', 'GS', 'H1', 'HB', 'HR1', 'IBB1', 'IP', 'L', 'OBP1', 'R1', 'SHO', 'SO1', 'SV', 'SVO', 'TBF', 'W', 'WHIP', 'WP', 'WPCT']
###Markdown
STEP2: Split and scale the data.
###Code
# split data.
X_train, X_test, y_train, y_test = train_test_split(features, target, random_state=42)
# scale data (fit the scaler on the training set only, then apply the same transform to the test set).
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
###Output
/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/base.py:464: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.fit(X, **fit_params).transform(X)
/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/base.py:464: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.
return self.fit(X, **fit_params).transform(X)
###Markdown
STEP3: Try the SVC model.
###Code
# generate the model.
model = SVC(kernel="rbf",
class_weight="balanced",
probability=True)
# fit the model.
model.fit(X_train_scaled, y_train)
# predict.
prediction = model.predict(X_test_scaled)
print ((classification_report(y_test, prediction, target_names=["0", "1"])))
###Output
precision recall f1-score support
0 0.97 0.87 0.92 304
1 0.09 0.31 0.14 13
micro avg 0.85 0.85 0.85 317
macro avg 0.53 0.59 0.53 317
weighted avg 0.93 0.85 0.89 317
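###Markdown
A quick diagnostic (a hedged sketch, not part of the original notebook): the low precision and recall for class 1 above largely reflect how imbalanced the target is, with only a handful of World Series winners against hundreds of non-winners in the test split (13 vs. 304). Checking the class counts makes that explicit, reusing the `target` and `y_test` variables defined earlier.
###Code
# Hedged sketch: inspect the class balance of the target variable.
print(target.value_counts())
print(y_test.value_counts())
###Output
_____no_output_____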
###Markdown
STEP4: Predict the winner 2016-2018.
###Code
def predict_the_winner(model, year, team_data, X_train):
'''
INPUT:
-X_train = scaled X train data.
-model = the saved model.
-team_data = complete dataframe with all data.
-year = the year want to look at.
OUTPUT:
-printed prediction.
DESCRIPTION:
-data from year of interest is isolated.
-the data are scaled.
-the prediction is made.
-print out the resulting probability and the name of the team.
'''
# grab the data.
team_data = team_data.loc[team_data["year"] == year].reset_index()
# set features (no team, year, winners).
# set target (winners).
features = team_data[feature_columns]
# scale.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
features = scaler.fit_transform(features)
# fit the model.
probabilities = model.predict_proba(features)
    # convert predictions to dataframe.
WS_predictions = pd.DataFrame(probabilities[:,1])
# Sort the DataFrame (descending)
WS_predictions = WS_predictions.sort_values(0, ascending=False)
WS_predictions['Probability'] = WS_predictions[0]
    # Print the teams with the highest predicted World Series probabilities
for i, row in WS_predictions.head(50).iterrows():
prob = ' '.join(('WS Probability =', str(row['Probability'])))
print('')
print(prob)
print(team_data.iloc[i,1:27]["team"])
# predict for 2018.
predict_the_winner(model, 2018, team_data, X_train_scaled)
# predict for 2017.
predict_the_winner(model, 2017, team_data, X_train_scaled)
###Output
WS Probability = 0.08984049057305296
Washington Nationals
WS Probability = 0.057445408892973594
Los Angeles Angels
WS Probability = 0.05635823472062213
Boston Red Sox
WS Probability = 0.05190520611531737
Cleveland Indians
WS Probability = 0.050463324998347915
Seattle Mariners
WS Probability = 0.04888431760027947
Atlanta Braves
WS Probability = 0.046303631017353276
Tampa Bay Rays
WS Probability = 0.04273026559203425
New York Yankees
WS Probability = 0.03930025498556103
Houston Astros
WS Probability = 0.03768582846748057
New York Mets
WS Probability = 0.036353868121461665
Milwaukee Brewers
WS Probability = 0.03557061393914468
Oakland Athletics
WS Probability = 0.030991035748564457
Colorado Rockies
WS Probability = 0.0307854271217912
Los Angeles Dodgers
WS Probability = 0.02953121500312916
Minnesota Twins
WS Probability = 0.026272963230368766
Arizona Diamondbacks
WS Probability = 0.02497380042928017
Pittsburgh Pirates
WS Probability = 0.020629277499001553
Chicago Cubs
WS Probability = 0.020046274906916205
Philadelphia Phillies
WS Probability = 0.019356217311531525
San Francisco Giants
WS Probability = 0.01867169331462274
Detroit Tigers
WS Probability = 0.015243968202559311
St. Louis Cardinals
WS Probability = 0.01509342417488987
Miami Marlins
WS Probability = 0.014086150973926036
Toronto Blue Jays
WS Probability = 0.0136490170951492
San Diego Padres
WS Probability = 0.012760562757734989
Baltimore Orioles
WS Probability = 0.012217435273636036
Kansas City Royals
WS Probability = 0.012031110962683162
Chicago White Sox
WS Probability = 0.010842809017258967
Cincinnati Reds
WS Probability = 0.009949770497300045
Texas Rangers
###Markdown
Ok. This didn't work. Let's try this penalized model with a grid search.
###Code
def grid_search_svc(X_train, X_test, y_train, y_test):
'''
INPUT:
-X_train = scaled X train data.
-X_test = scaled X test data.
-y_train = y train data.
-y_test = y test data.
OUTPUT:
-classification report (has F1 score, precision and recall).
-grid = saved model for prediction.
DESCRIPTION:
-the scaled and split data is put through a grid search with svc.
-the model is trained.
-a prediction is made.
-print out the classification report and give the model.
'''
# set up svc model.
model = SVC(kernel="rbf",
class_weight="balanced",
probability=True)
# create gridsearch estimator.
param_grid = {"C": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100],
"gamma": [0.0001, 0.001, 0.01, 0.1]}
grid = GridSearchCV(model, param_grid, verbose=3)
# fit the model.
grid.fit(X_train, y_train)
# predict.
prediction = grid.predict(X_test)
# print out the basic information about the grid search.
print (grid.best_params_)
print (grid.best_score_)
print (grid.best_estimator_)
grid = grid.best_estimator_
predictions = grid.predict(X_test)
print (classification_report(y_test, prediction, target_names=["0", "1"]))
return grid
model_grid = grid_search_svc(X_train, X_test, y_train, y_test)
###Output
Fitting 3 folds for each of 28 candidates, totalling 84 fits
[CV] C=0.0001, gamma=0.0001 ..........................................
|
US_GDP_Dashboard_using_bokeh.ipynb | ###Markdown
Analyzing US Economic Data and Building a Dashboard Description Extracting essential data from a dataset and displaying it is a necessary part of data science; therefore individuals can make correct decisions based on the data. In this assignment, you will extract some essential economic indicators from some data, you will then display these economic indicators in a Dashboard. You can then share the dashboard via an URL. Gross domestic product (GDP) is a measure of the market value of all the final goods and services produced in a period. GDP is an indicator of how well the economy is doing. A drop in GDP indicates the economy is producing less; similarly an increase in GDP suggests the economy is performing better. In this lab, you will examine how changes in GDP impact the unemployment rate. You will take screen shots of every step, you will share the notebook and the URL pointing to the dashboard. We Define Function that Makes a Dashboard We will import the following libraries.
###Code
import pandas as pd
from bokeh.plotting import figure, output_file, show,output_notebook
output_notebook()
###Output
_____no_output_____
###Markdown
In this section, we define the function make_dashboard. You don't have to know how the function works, you should only care about the inputs. The function will produce a dashboard as well as an html file. You can then use this html file to share your dashboard. If you do not know what an html file is don't worry everything you need to know will be provided in the lab.
###Code
def make_dashboard(x, gdp_change, unemployment, title, file_name):
output_file(file_name)
p = figure(title=title, x_axis_label='year', y_axis_label='%')
p.line(x.squeeze(), gdp_change.squeeze(), color="firebrick", line_width=4, legend_label="% GDP change")
p.line(x.squeeze(), unemployment.squeeze(), line_width=4, legend_label="% unemployed")
show(p)
###Output
_____no_output_____
###Markdown
The dictionary links contain the CSV files with all the data. The value for the key GDP is the file that contains the GDP data. The value for the key unemployment contains the unemployment data.
###Code
links={'GDP':'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_gdp.csv',\
'unemployment':'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/projects/coursera_project/clean_unemployment.csv'}
###Output
_____no_output_____
###Markdown
Step 1: Create a dataframe that contains the GDP data and display the first five rows of the dataframe. Use the dictionary links and the function pd.read_csv to create a Pandas dataframe that contains the GDP data. Hint: links["GDP"] contains the path or name of the file.
###Code
csv=links['GDP']
df=pd.read_csv(csv)
###Output
_____no_output_____
###Markdown
Use the method head() to display the first five rows of the GDP data, then take a screen-shot.
###Code
df.head()
###Output
_____no_output_____
###Markdown
Step 2: Create a different dataframe that contains the unemployment data. Display the first five rows of the dataframe. Use the dictionary links and the function pd.read_csv to create a Pandas dataframe that contains the unemployment data.
###Code
unemployment_csv=links['unemployment']
df2=pd.read_csv(unemployment_csv)
###Output
_____no_output_____
###Markdown
Use the method head() to display the first five rows of the unemployment data, then take a screen-shot.
###Code
df2.head()
###Output
_____no_output_____
###Markdown
Step 3: Display a dataframe where unemployment was greater than 8.5%. Take a screen-shot.
###Code
df3=df2[df2['unemployment']>8.5]
df3
###Output
_____no_output_____
###Markdown
Step 4: Use the function make_dashboard to make a dashboard In this section, you will call the function make_dashboard , to produce a dashboard. We will use the convention of giving each variable the same name as the function parameter. Create a new dataframe with the column 'date' called x from the dataframe that contains the GDP data.
###Code
x = pd.DataFrame(df,columns=['date'])
x.head()
###Output
_____no_output_____
###Markdown
Create a new dataframe with the column 'change-current' called gdp_change from the dataframe that contains the GDP data.
###Code
gdp_change = pd.DataFrame(df, columns=['change-current'])
gdp_change.head()
###Output
_____no_output_____
###Markdown
Create a new dataframe with the column 'unemployment' called unemployment from the dataframe that contains the unemployment data.
###Code
unemployment = pd.DataFrame(df2, columns=['unemployment'])
unemployment.head()
###Output
_____no_output_____
###Markdown
Give your dashboard a string title, and assign it to the variable title
###Code
title = "GDP stats of USA"
###Output
_____no_output_____
###Markdown
Finally, the function make_dashboard will output an .html file in your directory, just like a csv file. The name of the file is "index.html" and it is stored in the variable file_name.
###Code
file_name = "index.html"
###Output
_____no_output_____
###Markdown
Call the function make_dashboard to produce a dashboard. Assign the parameter values accordingly, take a screenshot of the dashboard, and submit it.
###Code
make_dashboard(x=x, gdp_change=gdp_change, unemployment=unemployment, title=title, file_name=file_name)
###Output
_____no_output_____ |
notebooks/kafka-streams-flows-as-source.ipynb | ###Markdown
kafka-streams-flows-as-source The cells needed to run your application are included below. Make any changes and add your sources, analytics and outputs. Documentation - [Streams Python development guide](https://ibmstreams.github.io/streamsx.documentation/docs/latest/python/) - [Streams Python API](https://streamsxtopology.readthedocs.io/) Install python packagesInstalls the required python packages with pip.
###Code
!pip install --user streamsx.kafka>=1.9.0
!pip install --user streamsx==1.14.13
###Output
_____no_output_____
###Markdown
Setup Sets up the Streams instance name and extracts the resources required for the Streams application to a local directory.In order to submit a Streams application you need to provide the name of the Streams instance.To change the instance for the Streams application:1. From the navigation menu, click **My instances**.2. Click the **Provisioned Instances** tab.3. Update the value of streams_instance_name in the cell below according to your Streams instance.
###Code
from project_lib import Project
import os, shutil, tarfile
from icpd_core import icpd_util
def setup(archive, resource_path):
def extract_project_file(file, path):
project = Project.access()
if os.path.exists(path):
shutil.rmtree(path)
os.makedirs(path)
buffio = project.get_file(file, direct_storage=True)
tarfile.open(fileobj=buffio, mode="r:gz").extractall(path)
extract_project_file(archive, resource_path)
os.chdir(resource_path)
streams_instance_name = "streams"
cfg = icpd_util.get_service_instance_details(streams_instance_name)
resource_path = "streams_flows_notebooks/kafka_streams_flows_as_source_1597265335685"
setup("streams_flows_notebooks/kafka_streams_flows_as_source_1597265335685.tar.gz", resource_path)
###Output
_____no_output_____
###Markdown
Create the flow
###Code
%%writefile flow_schemas.py
from typing import NamedTuple
class KafkaSchema(NamedTuple):
event_key: str = ""
event_topic: str = ""
event_offset: int = 0.0
event_partition: int = 0.0
event_timestamp: int = 0.0
event_message: str = ""
class SchemaMapper1Schema(NamedTuple):
key: str = ""
topic: str = ""
offset: str = ""
partition: float = 0.0
time: str = ""
message: str = ""
from streamsx.topology.topology import Topology
import flow_schemas
from lib.error_utils import TupleError
import lib.file_utils as file_utils
import os
import streamsx.kafka as kafka
import typing
# ================================================================================
# MAIN
def build_flow():
topo = Topology(name='kafka_streams_flows_as_source', namespace=os.environ.get('USER', 'flow'))
topo.name_to_runtime_id = name_mapping().get
topo.add_pip_package('streamsx.kafka>=1.9.0')
kafka_stream = add_kafka(topo) # Node: "Kafka"
debug_stream = add_debug(kafka_stream) # Node: "Debug"
add_views(topo)
return topo
# ================================================================================
# Function for top-level operator: Kafka
def add_kafka(topo):
connection = file_utils.read_from_json(os.path.abspath("connections/kafka_4560768a-c25f-49e7-9333-23726b8ae71e.json"))
return (
topo
.source(
kafka.KafkaConsumer(
config={
'bootstrap.servers': connection['brokers'],
'security.protocol': connection['security_protocol'],
'sasl.mechanism': connection['sasl_mechanism'],
'sasl.jaas.config': f'org.apache.kafka.common.security.plain.PlainLoginModule required username="{connection["username"]}" password="{connection["api_key"]}";',
'auto.offset.reset': 'latest'
},
topic="clicks",
message_attribute_name='event_message',
key_attribute_name='event_key',
topic_attribute_name='event_topic',
offset_attribute_name='event_offset',
partition_attribute_name='event_partition',
timestamp_attribute_name='event_timestamp',
schema=flow_schemas.KafkaSchema),
name='Kafka')
.map(
_map_schema_for_kafka,
name='SchemaMapper1',
schema=flow_schemas.SchemaMapper1Schema)
.filter(
lambda event: True,
name='CompositeOutput1')
)
# ================================================================================
# Function for top-level operator: Debug
def add_debug(stream):
return (
stream
.for_each(
debug,
name='Debug')
)
# ================================================================================
# Operator-specific global code, such as filter classes:
def _map_schema_for_kafka(event):
try:
return flow_schemas.SchemaMapper1Schema(
key=event.event_key,
topic=event.event_topic,
offset=str(event.event_offset),
partition=float(event.event_partition),
time=str(event.event_timestamp),
message=event.event_message
)
except Exception as err:
TupleError(operation_id='Kafka', message=str(err))
return None
def debug(event):
# you can add debugging/logging code here
pass
# ================================================================================
# Utils:
def add_views(topo):
name_to_id = name_mapping()
for name, stream in topo.streams.items():
stream_id = name_to_id.get(name)
if stream_id and stream_id.endswith('__Composite_Output_Id'):
stream.view(name=stream_id + "__output")
def name_mapping():
return {
'Kafka': 'Kafka',
'SchemaMapper1': 'SchemaMapper1',
'CompositeOutput1': 'Kafka__Composite_Output_Id',
'Debug': 'Debug'
}
###Output
_____no_output_____
###Markdown
Submit the application
###Code
import streamsx
import datetime
from streamsx.topology.context import ContextTypes, JobConfig
from streamsx.topology import context
def submit_app():
cfg[context.ConfigParams.SSL_VERIFY] = False
app = build_flow()
dt = datetime.datetime.now().strftime('%F_%T')
job_config = JobConfig(job_name=f'{app.namespace}:{app.name}:{dt}', tracing='info')
job_config.add(cfg)
shutil.copytree('lib', 'python/modules/lib')
app.add_file_dependency('python', 'opt')
submission_result = streamsx.topology.context.submit(ContextTypes.DISTRIBUTED, app, config=cfg)
streams_job = submission_result.job
print("JobId: ", streams_job.id, "\nJob name: ", streams_job.name)
submit_app()
###Output
_____no_output_____
###Markdown
Delete the resource directory (Optional)Cleans up the resource folders used in this application.
###Code
#cleanup()
# import shutil
# os.chdir(os.environ['PWD'])
# if os.path.exists(resource_path):
# shutil.rmtree(resource_path)
###Output
_____no_output_____ |
notebooks/R4ML_Introduction_Exploratory_DataAnalysis.ipynb | ###Markdown
R4ML: Introduction and Exploratory Data Analysis (part I) [Alok Singh](https://github.com/aloknsingh/) Contents 1. Introduction 1.1. R4ML Brief Introduction 1.2. R4ML Architecture 1.3. R4ML Installation 1.4. Starting the R4ML Session 2. Overview of Dataset 3. Load the Data 4. Exploratory Data Analysis 4.1. Graphical/Visual Exploratory Data Analysis 4.2. Analytics Based Exploratory Data Analysis 5. Summary and next steps ... 1. Introduction 1.1. R4ML Brief Introduction[R4ML](https://github.com/SparkTC/r4ml) is an open-source, scalable Machine Learning Framework built using [Apache Spark/SparkR](https://spark.apache.org/docs/latest/sparkr.html) and [Apache SystemML](https://systemml.apache.org/).R4ML is the hybrid of SparkR and SystemML. It’s mission is to ** make BigData R , R-like ** and to:* Support more big data ML algorithms.* Creating custom Algorithms.* Support more R-like syntaxR4ML allows R scripts to invoke custom algorithms developed in Apache SystemML. R4ML integrates seamlessly with SparkR, so data scientists can use the best features of SparkR and SystemML together in the same scripts. In addition, the R4ML package provides a number of useful new R functions that simplifycommon data cleaning and statistical analysis tasks.In this set of tutorial style notebooks, we will walk through a standard example of a data-scientist work flow. This includes data precessing, data exploration, model creation, model tuning and model selection.Let's first install and load the relevant library: 1.2. R4ML Architecture 1.3. Installation Here are the steps to install R4ML. (This will only need to be done once per user.)
###Code
# first step would be to install the R4ML in your environment
# install dependencies . This steps only need to be done once
install.packages(c("uuid", "R6", "PerformanceAnalytics"), repos = "http://cloud.r-project.org")
library("SparkR")
download.file("http://codait-r4ml.s3-api.us-geo.objectstorage.softlayer.net/R4ML_0.8.0.tar.gz", "~/R4ML_0.8.0.tar.gz")
install.packages("~/R4ML_0.8.0.tar.gz", repos = NULL, type = "source")
###Output
Installing packages into ‘/gpfs/global_fs01/sym_shared/YPProdSpark/user/sa28-9716de71e3ac0f-9ac12ed2939a/R/libs’
(as ‘lib’ is unspecified)
Installing package into ‘/gpfs/global_fs01/sym_shared/YPProdSpark/user/sa28-9716de71e3ac0f-9ac12ed2939a/R/libs’
(as ‘lib’ is unspecified)
###Markdown
1.4. Starting the R4ML SessionLet's load the R4ML in R and start a new session
###Code
# now load the R4ML library
library(R4ML)
library(SparkR)
# start the session
r4ml.session()
###Output
Warning message:
“WARN[R4ML]: Reloading SparkR”Loading required namespace: SparkR
_______ _ _ ____ ____ _____
|_ __ \ | | | | |_ \ / _||_ _|
| |__) || |__| |_ | \/ | | |
| __ / |____ _| | |\ /| | | | _
_| | \ \_ _| |_ _| |_\/_| |_ _| |__/ |
|____| |___| |_____||_____||_____||________|
[R4ML]: version 0.8.0
Warning message:
“no function found corresponding to methods exports from ‘R4ML’ for: ‘collect’”
Attaching package: ‘SparkR’
The following object is masked from ‘package:R4ML’:
predict
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union
Warning message:
“WARN[R4ML]: driver.memory not defined. Defaulting to 2G”Spark package found in SPARK_HOME: /usr/local/src/spark21master/spark
###Markdown
2. Overview of DataWhile there are many data sets to choose from, we have decided to use the airline dataset because: * The airline dataset is not always clean and building a predictive model is not straightforward. Thus we can illustrate other points about the data preparation and analysis. * This dataset is free and reasonably sized (around 20GB with around 100M rows). * R4ML is shipped with a sampled version of this dataset (around 130K rows). * Typically, this dataset is used to predict the various types of delays, like arrival delays, etc.Here is the description of the data (you can also see similar info by using help in the R console) Airline Dataset Description: A 1% sample of the "airline" dataset is available at http://stat-computing.org/dataexpo/2009/the-data.html. This data originally comes from RITA (http://www.rita.dot.gov) and is in the public domain. Usage: data(airline) Format: A data frame with 128790 rows and 29 columns Source: American Statistical Association RITA: Research and Innovative Technology Administration 3. Load the DataLet's first load the data.
###Code
# read the airline dataset
airt <- airline
# testing, we just use the small dataset
airt <- airt[airt$Year >= "2007",]
air_hf <- as.r4ml.frame(airt)
# note: in the production environment when you have the big data airline, above three lines are not scalable and should be replaced by read csv
#here is the schema
# (Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,
# CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,
# WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay)
###Output
INFO[calc num partitions]: 48 partitions
INFO[as.r4ml.frame]: repartitioning an object of size: 2448976 into 48 partitions
###Markdown
4. Exploratory Data Analysis 4.1. Graphical data analysisSince R provides a very powerful visualization and exploratory data analysis tool, use the sampling strategy and sample a small dataset from the distributed data frame.Note: you can use the other exploratory analysis options here if you want to try them out.
###Code
airs <- r4ml.sample(air_hf, 0.1)[[1]]
rairs <- SparkR::as.data.frame(airs)
# r indicate R data frame
###Output
_____no_output_____
###Markdown
4.1.1. HistogramsAs a general rule of thumb, the predictive power of features is highest when they are approximately Gaussian distributed.Let's explore this line of thinking. Let's create histograms to see if the variables are approximately Gaussian distributed and which ones are important.
###Code
library(reshape2)
library(ggplot2)
# use reshape util to create tall data for visualization
mrairs <- suppressWarnings(melt(rairs))
g<-suppressWarnings(ggplot(mrairs, aes(x=value, colour=variable))+geom_histogram()+facet_wrap(~variable, scales="free", ncol=5) + theme(legend.position="none"))
suppressWarnings(g)
###Output
Using UniqueCarrier, FlightNum, TailNum, Origin, Dest, CancellationCode, Diverted as id variables
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Warning message:
“Removed 2957 rows containing non-finite values (stat_bin).”
###Markdown
* As we can see from the plot, since Year, Month, DayofMonth, and DayOfWeek have almost uniform distribution, they are not likely to have any predictive power and will be ignored in the following analysis.* Since we will be predicting ArrivalDelay, remove from analysis any other delays that are dependent on it. This includes WeatherDelay, NASDelay, SecurityDelay and LateAircraftDelay.* Also note that there are some one sided [Power Law distributions](https://en.wikipedia.org/wiki/Power_law) (e.g. TaxiOut). We can use log transformations to make them approximately guassian.Let’s prune the data for further exploration.** Note: we can make non-bell shaped curves more normalized by using [box-cox](https://en.wikipedia.org/wiki/Power_transform) transformations. Using [SparkR](https://spark.apache.org/docs/latest/sparkr.html) and our custom machine learning features (explained in later sections) should make for a very straight forward exercise**
###Code
# total number of columns in the dataset
total_feat <- c("Year", "Month", "DayofMonth", "DayOfWeek", "DepTime", "CRSDepTime", "ArrTime", "CRSArrTime", "UniqueCarrier", "FlightNum", "TailNum", "ActualElapsedTime", "CRSElapsedTime", "AirTime", "ArrDelay", "DepDelay", "Origin", "Dest", "Distance", "TaxiIn", "TaxiOut", "Cancelled", "CancellationCode", "Diverted", "CarrierDelay", "WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay")
# categorical features
# Year , Month , DayofMonth , DayOfWeek ,
cat_feat <- c("UniqueCarrier", "FlightNum", "TailNum", "Origin", "Dest", "CancellationCode", "Diverted")
numeric_feat <- setdiff(total_feat, cat_feat)
# these features have no predictive power as it is uniformly distributed i.e
# less information
unif_feat <- c("Year", "Month", "DayofMonth", "DayOfWeek")
# these are the constant features and we can ignore without much difference
# in output
const_feat <- c("WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay")
col2rm <- c(unif_feat, const_feat, cat_feat)
airs_names <- names(rairs)
rairs2_names <- setdiff(airs_names, col2rm)
rairs2 <- rairs[, rairs2_names]
###Output
_____no_output_____
###Markdown
4.1.2. Correlated featuresOne of the things you want to avoid is correlated features. In other words, if we have one column (say c4) that is a constant multiple of another column (say c3), then only one of c4 or c3 should be used. Geometrically, n columns correspond to n edges of an n-dimensional rectangle or cube; if any edges are dependent (i.e. c4 and c3 are collinear) then the volume of the cube in n dimensions is zero. This manifests as a matrix solver error while solving the system of equations. We will next check whether there is any correlation among the input columns.Though there are many R packages that you can use, we will be using ** Performance Analytics **.
###Code
library(PerformanceAnalytics)
suppressWarnings(chart.Correlation(rairs2, histogram=TRUE, pch=19))
###Output
_____no_output_____
###Markdown
Note the following from the above graphs:* The diagonal cells contain the histogram of the corresponding column.* The off-diagonal entries contain the pairwise correlation. For example, the 7th cell in the 10th row contains the correlation between AirTime and Distance.* These charts provide insight about which columns to use as predictors and we should avoid using any heavily correlated columns in our prediction. 4.2. Analytics Based Exploratory Data AnalysisThis exploratory analysis can also be done in a nongraphical manner, using R4ML/SparkR.It is desirable to have normally distributed predictors as they provide better predictions. However, the distribution can be skewed from the normal distribution (i.e. a left-sided or right-sided Gaussian distribution). This property is measured by [Skewness](https://en.wikipedia.org/wiki/Skewness).Similarly, [Kurtosis](https://en.wikipedia.org/wiki/Kurtosis) is a measure of the tailedness of the distribution.For example, we can calculate the skewness and kurtosis to find whether a feature is close to Gaussian and whether it has predictive power.The data shows that the distribution of Distance has a heavy tail on the right side. To get the best predictive power we might have to apply a transformation so that the distribution is close to Gaussian. Let's see what happens if we apply a log transformation to the Distance feature.
###Code
library(SparkR)
library(R4ML)
#airs_sdf <- new("SparkDataFrame", airs@sdf, isCached = airs@env$isCached) #SparkR::count(airs_sdf)
dist_skew <- SparkR:::agg(airs, SparkR::skewness(log(airs$Distance)))
SparkR::collect(dist_skew)
dist_kurtosis <- SparkR:::agg(airs, SparkR::kurtosis(log(airs$Distance)))
SparkR::collect(dist_kurtosis)
###Output
_____no_output_____
###Markdown
R4ML: Introduction and Exploratory Data Analysis (part I) [Alok Singh](https://github.com/aloknsingh/) Contents 1. Introduction 1.1. R4ML Brief Introduction 1.2. R4ML Architecture 1.3. R4ML Installation 1.4. Starting the R4ML Session 2. Overview of Dataset 3. Load the Data 4. Exploratory Data Analysis 4.1. Graphical/Visual Exploratory Data Analysis 4.2. Analytics Based Exploratory Data Analysis 5. Summary and next steps ... 1. Introduction 1.1. R4ML Brief Introduction[R4ML](https://github.com/SparkTC/r4ml) is an open-source, scalable Machine Learning Framework built using [Apache Spark/SparkR](https://spark.apache.org/docs/latest/sparkr.html) and [Apache SystemML](https://systemml.apache.org/).R4ML is the hybrid of SparkR and SystemML. It’s mission is to ** make BigData R , R-like ** and to:* Support more big data ML algorithms.* Creating custom Algorithms.* Support more R-like syntaxR4ML allows R scripts to invoke custom algorithms developed in Apache SystemML. R4ML integrates seamlessly with SparkR, so data scientists can use the best features of SparkR and SystemML together in the same scripts. In addition, the R4ML package provides a number of useful new R functions that simplifycommon data cleaning and statistical analysis tasks.In this set of tutorial style notebooks, we will walk through a standard example of a data-scientist work flow. This includes data precessing, data exploration, model creation, model tuning and model selection.Let's first install and load the relevant library: 1.2. R4ML Architecture 1.3. Installation Here are the steps to install R4ML. (This will only need to be done once for each user.)
###Code
# first step would be to install the R4ML in your environment
# install dependencies . This steps only need to be done once
install.packages(c("uuid", "R6", "PerformanceAnalytics"), repos = "http://cloud.r-project.org")
library("SparkR")
download.file("http://169.45.79.58/R4ML_0.8.0.tar.gz", "~/R4ML_0.8.0.tar.gz")
install.packages("~/R4ML_0.8.0.tar.gz", repos = NULL, type = "source")
###Output
Installing packages into ‘/gpfs/global_fs01/sym_shared/YPProdSpark/user/sa28-9716de71e3ac0f-9ac12ed2939a/R/libs’
(as ‘lib’ is unspecified)
Installing package into ‘/gpfs/global_fs01/sym_shared/YPProdSpark/user/sa28-9716de71e3ac0f-9ac12ed2939a/R/libs’
(as ‘lib’ is unspecified)
###Markdown
1.4. Starting the R4ML SessionLet's load the R4ML in R and start a new session
###Code
# now load the R4ML library
library(R4ML)
library(SparkR)
# start the session
r4ml.session()
###Output
Warning message:
“WARN[R4ML]: Reloading SparkR”Loading required namespace: SparkR
_______ _ _ ____ ____ _____
|_ __ \ | | | | |_ \ / _||_ _|
| |__) || |__| |_ | \/ | | |
| __ / |____ _| | |\ /| | | | _
_| | \ \_ _| |_ _| |_\/_| |_ _| |__/ |
|____| |___| |_____||_____||_____||________|
[R4ML]: version 0.8.0
Warning message:
“no function found corresponding to methods exports from ‘R4ML’ for: ‘collect’”
Attaching package: ‘SparkR’
The following object is masked from ‘package:R4ML’:
predict
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union
Warning message:
“WARN[R4ML]: driver.memory not defined. Defaulting to 2G”Spark package found in SPARK_HOME: /usr/local/src/spark21master/spark
###Markdown
2. Overview of DataThere are many data sets to choose from, and we have decided to use the airline dataset since: * the airline dataset is not always clean and building a predictive model is not straightforward, so we can illustrate other points about data preparation and analysis * it is a free data set of reasonable size (around 20GB and around 100M rows) * R4ML is shipped with a sampled version of that dataset (around 130K rows) * typically, this dataset is used to predict the various delays, like arrival delays etc.Here is the description of the data (you can also see similar info by using help in the R console) Airline Dataset Description: A 1% sample of the "airline" dataset available at http://stat-computing.org/dataexpo/2009/the-data.html This data originally comes from RITA (http://www.rita.dot.gov) and is in the public domain. Usage: data(airline) Format: A data frame with 128790 rows and 29 columns Source: American Statistical Association RITA: Research and Innovative Technology Administration 3. Load the DataLet's first load the data.
###Code
# read the airline dataset
airt <- airline
# testing, we just use the small dataset
airt <- airt[airt$Year >= "2007",]
air_hf <- as.r4ml.frame(airt)
# note: in the production environment when you have the big data airline, above three lines are not scalable and should be replaced by read csv
#here is the schema
# (Year,Month,DayofMonth,DayOfWeek,DepTime,CRSDepTime,ArrTime,CRSArrTime,UniqueCarrier,FlightNum,TailNum,ActualElapsedTime,
# CRSElapsedTime,AirTime,ArrDelay,DepDelay,Origin,Dest,Distance,TaxiIn,TaxiOut,Cancelled,CancellationCode,Diverted,CarrierDelay,
# WeatherDelay,NASDelay,SecurityDelay,LateAircraftDelay)
###Output
INFO[calc num partitions]: 48 partitions
INFO[as.r4ml.frame]: repartitioning an object of size: 2448976 into 48 partitions
###Markdown
4. Exploratory Data Analysis 4.1. Graphical data analysisSince R provides a very powerful visualization and exploratory data analysis, use the sampling strategy and we will sample a small data set from the distributed data frame .Note: you can use the other exploratory analysis options here if you want to try them out.
###Code
airs <- r4ml.sample(air_hf, 0.1)[[1]]
rairs <- SparkR::as.data.frame(airs)
# r indicate R data frame
###Output
_____no_output_____
###Markdown
4.1.1. HistogramsAs a general rule of thumb, the predictive power of features is highest when they are approximately Gaussian distributed.Let's explore this line of thinking. Let's create histograms to see if the variables are approximately Gaussian distributed and which ones are important.
###Code
library(reshape2)
library(ggplot2)
# use reshape util to create tall data for visualization
mrairs <- suppressWarnings(melt(rairs))
g<-suppressWarnings(ggplot(mrairs, aes(x=value, colour=variable))+geom_histogram()+facet_wrap(~variable, scales="free", ncol=5) + theme(legend.position="none"))
suppressWarnings(g)
###Output
Using UniqueCarrier, FlightNum, TailNum, Origin, Dest, CancellationCode, Diverted as id variables
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
Warning message:
“Removed 2957 rows containing non-finite values (stat_bin).”
###Markdown
* We can see from the plot that since Year, Month, DayofMonth, and DayOfWeek are almost uniformly distributed, they most likely won't have much predictive power, so it makes sense to remove these variables in the subsequent analysis.* Since we will be predicting ArrivalDelay, and all other delays (i.e. WeatherDelay, NASDelay, SecurityDelay and LateAircraftDelay) are dependent on it, we will be removing them from the analysis.* Also note that there are some one-sided [Power Law distributions](https://en.wikipedia.org/wiki/Power_law), e.g. TaxiOut. We can use a log transformation to make them approximately Gaussian.Let's prune the data for further exploration.** Note that you can make a non-bell-shaped curve bell shaped using the [box-cox](https://en.wikipedia.org/wiki/Power_transform) transformation. Using [SparkR](https://spark.apache.org/docs/latest/sparkr.html) and our custom machine learning features (explained in later sections), this should be a very straightforward exercise **
###Code
# total number of columns in the dataset
total_feat <- c("Year", "Month", "DayofMonth", "DayOfWeek", "DepTime", "CRSDepTime", "ArrTime", "CRSArrTime", "UniqueCarrier", "FlightNum", "TailNum", "ActualElapsedTime", "CRSElapsedTime", "AirTime", "ArrDelay", "DepDelay", "Origin", "Dest", "Distance", "TaxiIn", "TaxiOut", "Cancelled", "CancellationCode", "Diverted", "CarrierDelay", "WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay")
# categorical features
# Year , Month , DayofMonth , DayOfWeek ,
cat_feat <- c("UniqueCarrier", "FlightNum", "TailNum", "Origin", "Dest", "CancellationCode", "Diverted")
numeric_feat <- setdiff(total_feat, cat_feat)
# these features have no predictive power as it is uniformly distributed i.e
# less information
unif_feat <- c("Year", "Month", "DayofMonth", "DayOfWeek")
# these are the constant features and we can ignore without much difference
# in output
const_feat <- c("WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay")
col2rm <- c(unif_feat, const_feat, cat_feat)
airs_names <- names(rairs)
rairs2_names <- setdiff(airs_names, col2rm)
rairs2 <- rairs[, rairs2_names]
###Output
_____no_output_____
###Markdown
4.1.2. Correlated featuresOne of the things you want to avoid is correlated features, i.e. if we have one column (say c4) that is a constant multiple of another column (say c3), then only one of c4 or c3 should be used. Geometrically, n columns correspond to n edges of an n-dimensional rectangle or cube, and if any edges are dependent (i.e. c4 and c3 are collinear) then the volume of the cube in n dimensions is zero. This manifests as a matrix solver error while solving the system of equations. We will next check whether there is any correlation among the input data.Though there are many R packages that you can use, we are going to use ** Performance Analytics **
###Code
library(PerformanceAnalytics)
suppressWarnings(chart.Correlation(rairs2, histogram=TRUE, pch=19))
###Output
_____no_output_____
###Markdown
We would like to point out the following in the above graph:* The diagonal cells contain the histogram of the corresponding column.* The off-diagonal entries contain the pairwise correlations. For example, the 7th cell in the 10th row contains the correlation between AirTime and Distance.* The above chart gives us insight about which columns to use as predictors, and we want to make sure that heavily correlated columns are not used in the prediction. 4.2. Analytics Based Exploratory Data AnalysisThis exploratory analysis can also be done in a nongraphical manner, using R4ML/SparkR.It is desirable to have normally distributed predictors as they give better predictions. However, the distribution can be skewed from the normal distribution, i.e. a left-sided or right-sided Gaussian distribution. This property is measured by [Skewness](https://en.wikipedia.org/wiki/Skewness).Similarly, [Kurtosis](https://en.wikipedia.org/wiki/Kurtosis) is a measure of the tailedness of the distribution.For example, we can calculate the skewness and kurtosis to find whether a feature is close to Gaussian and whether it has predictive power.The data shows that the distribution of Distance has a heavy tail on the right side. To get the best predictive power we might have to apply a transformation so that the distribution is close to Gaussian. Let's see what happens if we apply a log transformation to the Distance feature.
###Code
library(SparkR)
library(R4ML)
#airs_sdf <- new("SparkDataFrame", airs@sdf, isCached = airs@env$isCached) #SparkR::count(airs_sdf)
dist_skew <- SparkR:::agg(airs, SparkR::skewness(log(airs$Distance)))
SparkR::collect(dist_skew)
dist_kurtosis <- SparkR:::agg(airs, SparkR::kurtosis(log(airs$Distance)))
SparkR::collect(dist_kurtosis)
###Output
_____no_output_____ |
5-ExamProblems/Exam2/.src/MidTerm2-TakeHome-WS.ipynb | ###Markdown
Full name: R: HEX: Exam 2 Take-Home_{your name} Date: Problem 0.0 (1 pts.)Run the cell below as-is!
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
#print(sys.version)
#print(sys.version_info)
! pwd
###Output
atomickitty.aws
compthink
/opt/conda/envs/python/bin/python
/home/compthink/CECE-1330-PsuedoCourse/5-ExamProblems/Exam2
###Markdown
Problem 1 (5 pts)The table below contains some experimental observations.|Elapsed Time (s)|Speed (m/s)||---:|---:||0 |0||1.0 |3||2.0 |7||3.0 |12||4.0 |20||5.0 |30||6.0 | 45.6| |7.0 | 60.3 ||8.0 | 77.7 ||9.0 | 97.3 ||10.0| 121.1|1. Plot the speed vs time (speed on y-axis, time on x-axis) using a scatter plot. Use blue markers. 2. Plot a red line on the scatterplot based on the linear model $f(x) = mx + b$ 3. By trial-and-error find values of $m$ and $b$ that provide a good visual fit (i.e. makes the red line explain the blue markers).4. Using this data model estimate the speed at $t = 15~\texttt{sec.}$
###Code
# Create two lists; time and speed
# Create a data model function
# Create a model list - using same time list
# Create a scatterplot chart of time and speed, overlay a line plot of time and modeled speed
# Report best values m and b
# Estimate speed@ t = 15 sec. using fitted model
###Output
_____no_output_____
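###Markdown
A minimal sketch of the plotting mechanics for Problem 1 (the slope `m` and intercept `b` below are illustrative placeholders, not the fitted answer; they still need to be tuned by trial and error, and the data values are taken from the table in the problem statement):
###Code
# Hedged sketch: scatter of the observations with a trial model line overlay.
import matplotlib.pyplot as plt
time = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
speed = [0, 3, 7, 12, 20, 30, 45.6, 60.3, 77.7, 97.3, 121.1]
m, b = 1.0, 0.0 # placeholder values; adjust until the red line follows the blue markers
model = [m * t + b for t in time]
plt.scatter(time, speed, color='blue')
plt.plot(time, model, color='red')
plt.xlabel('Elapsed Time (s)'); plt.ylabel('Speed (m/s)')
plt.show()
###Output
_____no_output_____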
###Markdown
Problem 2 (5 pts)Consider the script below, which crudely implements a simulation of Russian Roulette.How many times can you spin the cylinder and pull the trigger, before you fail?Play the game 10 times, record the pull count until failure.1. Create a list of pulls until failure for each of your 10 attempts, and make a histogram of the list.2. From your histogram, estimate the mean number of pulls until failure.In the movie `The Deer Hunter` https://en.wikipedia.org/wiki/The_Deer_Hunter the captured soldiers modify the Russian Roulette Game by using more than a single cartridge. 3. Modify the program to the number of cartridges in the movie (3) and play again 10 times, record your pulls to failure4. Make a second histogram of the `Deer Hunter` version of the game.5. From your histogram, estimate the mean number of pulls until failure under the `Deer Hunter` conditions.
###Code
#RUSSIAN ROULETTE PROGRAM IN PYTHON:
import random
print('THIS IS A RUSSIAN ROULETTE PROGRAM. BEST PLAYED WHILE DRINKING VODKA.')
leaveprogram=0
triggerpulls = 0
while leaveprogram != "q":
print("Press Enter to Spin the Cylinder & Test Your Courage")
input()
number=random.randint (1, 6)
if number==1:
print("[ CLICK! ]")
triggerpulls += 1
print("Pulls = ",triggerpulls, "Type 'q' to quit")
leaveprogram=input()
if number==2:
print("[ CLICK! ]")
triggerpulls += 1
print("Pulls = ",triggerpulls, "Type 'q' to quit")
leaveprogram=input()
if number==3:
print("[ CLICK! ]")
triggerpulls += 1
print("Pulls = ",triggerpulls, "Type 'q' to quit")
leaveprogram=input()
if number==4:
print("[ CLICK! ]")
triggerpulls += 1
print("Pulls = ",triggerpulls, "Type 'q' to quit")
leaveprogram=input()
if number==5:
print("[ BANG!!!! ]")
triggerpulls += 1
print("[ So long ]")
print("[ Comrade. ]")
print("Pulls = ",triggerpulls)
leaveprogram='q'
if number==6:
print("[ CLICK! ]")
triggerpulls += 1
print("Pulls = ",triggerpulls, "Type 'q' to quit")
leaveprogram=input()
#
# List of results
# Histogram
# Mean Pulls to Failure
# Put Deer Hunter Version Here
# List of results
# Histogram
# Mean Pulls to Failure
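# --- A minimal sketch (illustration only): the assignment expects the ten pull counts
# --- you recorded by hand; here they are *simulated* instead, which is an assumption.
import random
import matplotlib.pyplot as plt

def pulls_until_failure(cartridges=1, chambers=6):
    pulls = 0
    while True:
        pulls += 1
        if random.randint(1, chambers) <= cartridges:   # spin before every pull
            return pulls

one_bullet = [pulls_until_failure(1) for _ in range(10)]    # original game
deer_hunter = [pulls_until_failure(3) for _ in range(10)]   # Deer Hunter variant

plt.hist(one_bullet, bins=range(1, max(one_bullet) + 2), color='blue', alpha=0.6, label='1 cartridge')
plt.hist(deer_hunter, bins=range(1, max(deer_hunter) + 2), color='red', alpha=0.6, label='3 cartridges')
plt.xlabel('Pulls until failure')
plt.ylabel('Count')
plt.legend()
plt.show()

print('mean pulls (1 cartridge) :', sum(one_bullet) / len(one_bullet))
print('mean pulls (3 cartridges):', sum(deer_hunter) / len(deer_hunter))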
###Output
_____no_output_____
###Markdown
Problem 3 (10 points)

The data below are the impact strength of packaging materials in foot-pounds for two branded boxes. Produce a histogram of the two series, and determine if there is evidence of a difference in mean strength between the two brands. Use an appropriate hypothesis test to support your assertion at a level of significance of $\alpha = 0.10$.

| Amazon Branded Boxes | Walmart Branded Boxes |
|:---|:---|
| 1.25 | 0.89|
| 1.16 | 1.01|
| 1.33| 0.97|
| 1.15| 0.95|
| 1.23| 0.94|
| 1.20| 1.02|
| 1.32| 0.98|
| 1.28| 1.06|
| 1.21| 0.98|
###Code
# define lists and make into dataframe 2 points
import pandas
amazon =[1.25,1.16,1.33,1.15,1.23,1.20,1.32,1.28,1.21]
wallyworld = [0.89,1.01,0.97,0.95,0.94,1.02,0.98,1.06,0.98]
boxdf = pandas.DataFrame()
boxdf['amazon']=amazon
boxdf['wallyw']=wallyworld
# describe lists/dataframe 2 points
print(boxdf.describe())
boxdf.plot.hist() # 2 points for histogram
# parametric are means same? 2 points
# Example of the Student's t-test
from scipy.stats import ttest_ind
stat, p = ttest_ind(amazon, wallyworld)
print("Student's T-test")
print('stat=%.3f, p=%.3f' % (stat, p))
if p > 0.1:
print('Do not reject Ho : means are the same')
else:
print('Reject Ho : means are not the same')
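# Robustness check (a sketch, not required by the problem statement): Welch's t-test
# (unequal variances) and the non-parametric Mann-Whitney U test as alternatives.
from scipy.stats import mannwhitneyu
stat_w, p_w = ttest_ind(amazon, wallyworld, equal_var=False)             # Welch variant
stat_u, p_u = mannwhitneyu(amazon, wallyworld, alternative='two-sided')
print('Welch t-test   : stat=%.3f, p=%.4f' % (stat_w, p_w))
print('Mann-Whitney U : stat=%.1f, p=%.4f' % (stat_u, p_u))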
###Output
Student's T-test
stat=9.564, p=0.000
Reject Ho : means are not the same
###Markdown
Problem 4 (30 points)

Precipitation records for Lubbock from 1895 to 2019 for the month of October are located in the file `http://54.243.252.9/engr-1330-psuedo-course/CECE-1330-PsuedoCourse/5-ExamProblems/Exam2/Exam2/`.

1. Produce a plot of year vs precipitation. *[Script + Plot 1: data==blue]*
2. Describe the entire data set. *[Script]*
3. Split the data into two parts at the year 1960. *[Script]*
4. Describe the two data series you have created. *[Script]*
5. Plot the two series on the same plot. *[Script + Plot 2: data1==blue, data2==green]*
6. Is there evidence of different mean precipitation in the pre-1960 and post-1960 data sets? Use a hypothesis test to support your assertion. *[Markdown + Script]*
7. Using the entire data set (before the 1960 split) prepare an empirical cumulative distribution plot using the Weibull plotting position formula. *[Script + Plot 3: data==blue]*
8. What is the 50% precipitation exceedence depth? *[Markdown]*
9. What is the 90% precipitation exceedence depth? *[Markdown]*
10. Fit the empirical distribution using a normal distribution data model, plot the model using a red curve. Assess the fit. *[Script + Plot 4: data==blue, model==red]*
11. Fit the empirical distribution using a gamma distribution data model, plot the model using a red curve. Assess the fit. *[Script + Plot 5: data==blue, model==red]*
12. Using your preferred model (normal vs. gamma) estimate the 99% precipitation exceedence depth. *[Script + Markdown]*
###Code
# Problem 4
import pandas
lbbdata = pandas.read_csv("Lubbockdata.csv") #1 pt
lbbdata.head()
lbbdata.plot.line() # 1pt
lbbdata.describe() # 1pt
lbbold = lbbdata[lbbdata['Date'] <= '1960-10'] # 1pt
lbbnew = lbbdata[lbbdata['Date'] > '1960-10'] # 1pt
print(lbbold.describe()) # 1pt
print(lbbnew.describe()) # 1pt
import matplotlib.pyplot
myfigure = matplotlib.pyplot.figure(figsize = (8,8)) # generate a object from the figure class, set aspect ratio
matplotlib.pyplot.plot(lbbold['Date'],lbbold['precipitation'] ,color ='blue')
matplotlib.pyplot.plot(lbbnew['Date'],lbbnew['precipitation'] ,color ='green')
matplotlib.pyplot.xlabel("Date")
matplotlib.pyplot.ylabel("Precipitation Value")
matplotlib.pyplot.title("Lubbock Precipitation in October")
matplotlib.pyplot.show() # 2 pts for a plot like below; extra point if the year label is readable
# lbbnew has smaller sample mean, but probably not significant. Reuse the T-test 2 pts
# Example of the Student's t-test
from scipy.stats import ttest_ind
stat, p = ttest_ind(lbbold['precipitation'], lbbnew['precipitation'])
print("Student's T-test")
print('stat=%.3f, p=%.3f' % (stat, p))
if p > 0.05:
print('Do not reject Ho : means are the same')
else:
print('Reject Ho : means are not the same')
def weibull_pp(sample): # Weibull plotting position function 1 pt, copy from lab
# returns a list of plotting positions; sample must be a numeric list
weibull_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
        weibull_pp.append((i+1)/(len(sample)+1)) # values from the Weibull plotting-position formula
return weibull_pp
lbbprecip = lbbdata['precipitation'].tolist() # 1 pt
ecdf = weibull_pp(lbbprecip) # 1pt
import matplotlib.pyplot # 2 pt, copy from lab
myfigure = matplotlib.pyplot.figure(figsize = (6,6)) # generate a object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(ecdf, lbbprecip ,color ='blue')
matplotlib.pyplot.xlabel("Density or Quantile Value")
matplotlib.pyplot.ylabel("Precipitation Value")
matplotlib.pyplot.title("Quantile Plot for LBB October rains Weibull Plotting Function")
matplotlib.pyplot.show()
# 50% is at about 1 inch depth 1 pt
# 90% is at about 4 inch depth 1 pt
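# Numeric check of the values read off the plot (a sketch; uses the same
# quantile convention as the two comments above).
import numpy
print('50% quantile depth :', numpy.quantile(lbbprecip, 0.50), 'inches')
print('90% quantile depth :', numpy.quantile(lbbprecip, 0.90), 'inches')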
import math # copy from lesson/lab 13 2 pts
def normdist(x,mu,sigma):
argument = (x - mu)/(math.sqrt(2.0)*sigma)
normdist = (1.0 + math.erf(argument))/2.0
return normdist
# Fitted Model # copy from lesson/lab 13 2 pts
mu = lbbdata['precipitation'].mean()
sigma = lbbdata['precipitation'].std()
x = []
ycdf = []
xlow = lbbdata['precipitation'].min()
xhigh = lbbdata['precipitation'].max()
howMany = 100
xstep = (xhigh - xlow)/howMany
for i in range(0,howMany+1,1):
x.append(xlow + i*xstep)
yvalue = normdist(xlow + i*xstep,mu,sigma)
ycdf.append(yvalue)
# Now plot the sample values and plotting position 2 pts
myfigure = matplotlib.pyplot.figure(figsize = (10,5)) # generate a object from the figure class, set aspect ratio
# Built the plot
matplotlib.pyplot.scatter(ecdf, lbbprecip ,color ='blue')
matplotlib.pyplot.plot(ycdf, x, color ='red')
matplotlib.pyplot.xlabel("Density or Quantile Value")
matplotlib.pyplot.ylabel("Precipitation Value")
matplotlib.pyplot.title("Quantile Plot for LBB October rains Weibull Plotting Function")
matplotlib.pyplot.show()
import scipy.stats # import scipy stats package ## copy from lab 6 pts
import math # import math package
import numpy # import numpy package
# log and antilog
def loggit(x): # A prototype function to log transform x
return(math.log(x))
def antiloggit(x): # A prototype function to log transform x
return(math.exp(x))
def weibull_pp(sample): # plotting position function
# returns a list of plotting positions; sample must be a numeric list
weibull_pp = [] # null list to return after fill
sample.sort() # sort the sample list in place
for i in range(0,len(sample),1):
weibull_pp.append((i+1)/(len(sample)+1))
return weibull_pp
def gammacdf(x,tau,alpha,beta): # Gamma Cumulative Density function - with three parameter to one parameter convert
xhat = x-tau
lamda = 1.0/beta
gammacdf = scipy.stats.gamma.cdf(lamda*xhat, alpha)
return gammacdf
sample = lbbdata['precipitation'].tolist() # put the log rain into a list
sample_mean = numpy.array(sample).mean()
sample_stdev = numpy.array(sample).std()
sample_skew = scipy.stats.skew(sample)
sample_alpha = 4.0/(sample_skew**2)
sample_beta = numpy.sign(sample_skew)*math.sqrt(sample_stdev**2/sample_alpha)
sample_tau = sample_mean - sample_alpha*sample_beta
plotting = weibull_pp(sample)
x = []; ycdf = []
xlow = (0.9*min(sample)); xhigh = (1.1*max(sample)) ; howMany = 100
xstep = (xhigh - xlow)/howMany
for i in range(0,howMany+1,1):
x.append(xlow + i*xstep)
yvalue = gammacdf(xlow + i*xstep,sample_tau,sample_alpha,sample_beta)
ycdf.append(yvalue)
myfigure = matplotlib.pyplot.figure(figsize = (7,8)) # generate a object from the figure class, set aspect ratio
matplotlib.pyplot.scatter(plotting, sample ,color ='blue')
matplotlib.pyplot.plot(ycdf, x, color ='red')
matplotlib.pyplot.xlabel("Quantile Value")
matplotlib.pyplot.ylabel("Value of RV")
mytitle = "Pearson Type III Distribution Data Model\n "
mytitle += "Mean = " + str((sample_mean)) + "\n"
mytitle += "SD = " + str((sample_stdev)) + "\n"
mytitle += "Skew = " + str((sample_skew)) + "\n"
matplotlib.pyplot.title(mytitle)
matplotlib.pyplot.show()
print(sample_tau)
print(sample_alpha)
print(sample_beta)
# If we want to get fancy we can use Newton's method to get really close to the root
from scipy.optimize import newton
def f(x):
sample_tau = -0.7893085743067978
sample_alpha = 2.3177909391313727
sample_beta = 1.1271890532482214
quantile = 0.99
argument = (x)
gammavalue = gammacdf(argument,sample_tau,sample_alpha,sample_beta)
return gammavalue - quantile
myguess = 8
print(newton(f, myguess))
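# Cross-check (a sketch): the 99% depth implied by the normal data model fitted
# earlier; relies on mu and sigma computed above. Compare with the Pearson III
# (gamma) estimate printed by the Newton iteration.
import scipy.stats
print('normal-model 99% depth :', scipy.stats.norm.ppf(0.99, loc=mu, scale=sigma))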
###Output
7.347913576046983
###Markdown
Bonus Problem (5 pts)

Consider the script below, which implements a simulation of Russian Roulette (using an object-oriented approach). Run the script to familiarize yourself with the output. Then, to prevent `Farhang` from dying, determine a way to change his outcome and explain how you save him. As with the robot speeding ticket, you are channeling Kirk's approach to the Kobayashi Maru exercise https://en.wikipedia.org/wiki/Kobayashi_Maru You can find the necessary trick in https://en.wikipedia.org/wiki/WarGames
###Code
import random
import itertools
class RussianRoulette:
def __init__(self, players, chambers=6):
random.shuffle(players)
self.players = itertools.cycle(players)
self.chambers = [False for _ in range(chambers)]
self.current = None
self.rounds = 0
def load(self):
"""
Randomly load a chamber with a bullet.
"""
chamber_to_load = random.randint(0, len(self.chambers)-1)
for i, chamber in enumerate(self.chambers):
if i == chamber_to_load:
self.chambers[i] = True
def next_round(self):
"""
Advance to the next round.
Returns:
the `player` whose turn it is next
"""
self.rounds += 1
return next(self.players)
def spin(self):
"""
Randomly assign a new chamber.
"""
self.current = random.randrange(0, len(self.chambers))
def fire(self, player):
"""
Fires the gun, then advances to the next chamber.
The gun will loop back to the first chamber if we were at the
final chamber in the cyclinder.
Returns:
`None` if no one has died
`player` if the bullet was in the next chamber
"""
if self.chambers[self.current]:
return player
self.current = (self.current + 1) % len(self.chambers)
if __name__ == '__main__':
players = ['Nikita', 'Dima', 'Sergey', 'Farhang', 'Andrey', 'Neko']
game = RussianRoulette(players)
game.load()
game.spin()
while True:
player = game.next_round()
choice = random.choice(['spin', 'fire'])
if choice == 'spin':
game.spin()
#pass
if game.fire(player):
print(f'{player} died. :(')
print(f'{game.rounds} completed.')
break
print(f'{player} lives to see another round!')
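    # One possible Kobayashi-Maru-style fix (an assumption, not necessarily the intended
    # answer): take the WarGames lesson that the only winning move is not to play --
    # e.g. remove 'Farhang' from the players list before building RussianRoulette, or
    # alter the loop so his turns always re-spin and never reach game.fire().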
###Output
Andrey lives to see another round!
Farhang lives to see another round!
Nikita lives to see another round!
Dima lives to see another round!
Sergey died. :(
5 completed.
|
deprecated/boosting-classifier/classifier_feature_importance.ipynb | ###Markdown
1. prepare data
###Code
# -*- coding: utf-8 -*-
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
df1_path = "../dataset/titanic_dataset.csv"
df2_path = "../dataset/titanic_answer.csv"
df1 = pd.read_csv(df1_path)
df2 = pd.read_csv(df2_path)
df = df1.append(df2)
df.head()
df = df[['survived', 'pclass', 'sex', 'age', 'sibsp', 'parch', 'fare', 'embarked']]
df = df.dropna()
df.info()
df.isnull().sum()
###Output
_____no_output_____
###Markdown
----- 2. encoding & split dataset
###Code
categorical_columns = ['pclass', 'sex', 'embarked']
df = pd.get_dummies(df, columns=categorical_columns)
df.head()
train_df, test_df = train_test_split(df, test_size=0.2)
train_X = train_df.loc[:, train_df.columns != 'survived'].values
test_X = test_df.loc[:, test_df.columns != 'survived'].values
train_y = train_df['survived'].values
test_y = test_df['survived'].values
###Output
_____no_output_____
###Markdown
----- 3. Random forest classifier feature importance
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
clf = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0)
clf.fit(train_X, train_y)
pred_y = clf.predict(test_X)
print(roc_auc_score(test_y, pred_y))
final_features = ['age', 'sibsp', 'parch', 'fare', 'pclass_1', 'pclass_2', 'pclass_3',
'sex_female', 'sex_male', 'embarked_C', 'embarked_Q', 'embarked_S']
for importance, feature in zip(clf.feature_importances_, final_features):
print(feature + " : " + str(importance))
plt.bar(range(len(clf.feature_importances_)), clf.feature_importances_)
plt.show()
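# Optional cross-check (a sketch; assumes scikit-learn >= 0.22 is available):
# permutation importance on the held-out set, which is less biased toward
# high-cardinality features than the impurity-based importances above.
from sklearn.inspection import permutation_importance
perm = permutation_importance(clf, test_X, test_y, n_repeats=10, random_state=0)
for feature, importance in zip(final_features, perm.importances_mean):
    print(feature + " : " + str(importance))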
###Output
_____no_output_____
###Markdown
----- 4. XGB classifier feature importance
###Code
from xgboost import XGBClassifier
from xgboost import plot_importance
clf = XGBClassifier()
clf.fit(train_df.loc[:, train_df.columns != 'survived'], train_df['survived'])
pred_y = clf.predict(test_df.loc[:, test_df.columns != 'survived'])
print(roc_auc_score(test_y, pred_y))
plot_importance(clf)
plt.show()
###Output
0.7673745173745173
|
1 - Neural Networks and Deep Learning/Neural Networks Basics/Logistic_Regression_with_a_Neural_Network_mindset_v5.ipynb | ###Markdown
Logistic Regression with a Neural Network mindsetWelcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.**Instructions:**- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.**You will learn to:**- Build the general architecture of a learning algorithm, including: - Initializing parameters - Calculating the cost function and its gradient - Using an optimization algorithm (gradient descent) - Gather all three functions above into a main model function, in the right order. 1 - Packages First, let's run the cell below to import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [matplotlib](http://matplotlib.org) is a famous library to plot graphs in Python.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
###Output
_____no_output_____
###Markdown
2 - Overview of the Problem set **Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labeled as cat (y=1) or non-cat (y=0) - a test set of m_test images labeled as cat or non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.Let's get more familiar with the dataset. Load the data by running the following code.
###Code
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
###Output
_____no_output_____
###Markdown
We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the `index` value and re-run to see other images.
###Code
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
###Output
y = [1], it's a 'cat' picture.
###Markdown
Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs. **Exercise:** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (= height = width of a training image)Remember that `train_set_x_orig` is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` by writing `train_set_x_orig.shape[0]`.
###Code
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
###Markdown
**Expected Output for m_train, m_test and num_px**:

- **m_train**: 209
- **m_test**: 50
- **num_px**: 64

For convenience, you should now reshape images of shape (num_px, num_px, 3) in a numpy-array of shape (num_px $*$ num_px $*$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.

**Exercise:** Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num\_px $*$ num\_px $*$ 3, 1).

A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b$*$c$*$d, a) is to use:

```python
X_flatten = X.reshape(X.shape[0], -1).T      # X.T is the transpose of X
```
###Code
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
###Output
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
###Markdown
**Expected Output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you substract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel). Let's standardize our dataset.
###Code
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.
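# For reference only (a sketch, not used below): the full centre-and-scale
# standardization described above; the rest of the notebook keeps the /255. version.
mu_px = train_set_x_flatten.mean(axis=1, keepdims=True)
sigma_px = train_set_x_flatten.std(axis=1, keepdims=True) + 1e-8   # avoid division by zero
train_set_x_std = (train_set_x_flatten - mu_px) / sigma_px
test_set_x_std = (test_set_x_flatten - mu_px) / sigma_px           # reuse training statistics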
###Output
_____no_output_____
###Markdown
**What you need to remember:**Common steps for pre-processing a new dataset are:- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)- Reshape the datasets such that each example is now a vector of size (num_px \* num_px \* 3, 1)- "Standardize" the data 3 - General Architecture of the learning algorithm It's time to design a simple algorithm to distinguish cat images from non-cat images.You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why **Logistic Regression is actually a very simple Neural Network!****Mathematical expression of the algorithm**:For one example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost is then computed by summing over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**:In this exercise, you will carry out the following steps: - Initialize the parameters of the model - Learn the parameters for the model by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 4 - Building the parts of our algorithm The main steps for building a Neural Network are:1. Define the model structure (such as number of input features) 2. Initialize the model's parameters3. Loop: - Calculate current loss (forward propagation) - Calculate current gradient (backward propagation) - Update parameters (gradient descent)You often build 1-3 separately and integrate them into one function we call `model()`. 4.1 - Helper functions**Exercise**: Using your code from "Python Basics", implement `sigmoid()`. As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
sigmoid([0, 2]) = [ 0.5 0.88079708]
###Markdown
**Expected Output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
###Code
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
w = [[ 0.]
[ 0.]]
b = 0
###Markdown
**Expected Output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1). 4.3 - Forward and Backward propagationNow that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.**Exercise:** Implement a function `propagate()` that computes the cost function and its gradient.**Hints**:Forward Propagation:- You get X- You compute $A = \sigma(w^T X + b) = (a^{(1)}, a^{(2)}, ..., a^{(m-1)}, a^{(m)})$- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas you will be using: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = - np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = np.dot(X, (A - Y).T) / m
db = np.sum(A - Y) / m
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
###Output
dw = [[ 0.99845601]
[ 2.39507239]]
db = 0.00145557813678
cost = 5.80154531939
###Markdown
**Expected Output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 4.4 - Optimization- You have initialized your parameters.- You are also able to compute a cost function and its gradient.- Now, you want to update the parameters using gradient descent.**Exercise:** Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
###Code
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
###Output
w = [[ 0.19033591]
[ 0.12259159]]
b = 1.92535983008
dw = [[ 0.67752042]
[ 1.41625495]]
db = 0.219194504541
###Markdown
**Expected Output**:

- **w** [[ 0.19033591] [ 0.12259159]]
- **b** 1.92535983008
- **dw** [[ 0.67752042] [ 1.41625495]]
- **db** 0.219194504541

**Exercise:** The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the `predict()` function. There are two steps to computing predictions:

1. Calculate $\hat{Y} = A = \sigma(w^T X + b)$
2. Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), and store the predictions in a vector `Y_prediction`. If you wish, you can use an `if`/`else` statement in a `for` loop (though there is also a way to vectorize this).
###Code
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
# non vectorized way:
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
if A[0, i] > 0.5:
Y_prediction[0, i] = 1
elif A[0, i] <= 0.5:
Y_prediction[0, i] = 0
### END CODE HERE ###
# vectorized way:
# Y_prediction = A // 0.5
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
###Output
predictions = [[ 1. 1. 0.]]
###Markdown
**Expected Output**: **predictions** [[ 1. 1. 0.]] **What to remember:**You've implemented several functions that:- Initialize (w,b)- Optimize the loss iteratively to learn parameters (w,b): - computing the cost and its gradient - updating the parameters using gradient descent- Use the learned (w,b) to predict the labels for a given set of examples 5 - Merge all functions into a model You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.**Exercise:** Implement the model function. Use the following notation: - Y_prediction_test for your predictions on the test set - Y_prediction_train for your predictions on the train set - w, costs, grads for the outputs of optimize()
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(dim=X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
###Output
_____no_output_____
###Markdown
Run the following cell to train your model.
###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
###Markdown
**Expected Output**:

- **Cost after iteration 0**: 0.693147
- $\vdots$
- **Train Accuracy**: 99.04306220095694 %
- **Test Accuracy**: 70.0 %

**Comment**: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test accuracy is 70%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!

Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the `index` variable) you can look at predictions on pictures of the test set.
###Code
# Example of a picture that was wrongly classified.
index = 2
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
###Output
y = 1, you predicted that it is a "cat" picture.
###Markdown
Let's also plot the cost function and the gradients.
###Code
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretation**:You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting. 6 - Further analysis (optional/ungraded exercise) Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$. Choice of learning rate **Reminder**:In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the `learning_rates` variable to contain, and see what happens.
###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
###Markdown
**Interpretation**: - Different learning rates give different costs and thus different predictions results.- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost). - A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.- In deep learning, we usually recommend that you: - Choose the learning rate that better minimizes the cost function. - If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.) 7 - Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
_____no_output_____ |
evalvacia_modelu_IV_b/evalvacia_modelu_IV_b.ipynb | ###Markdown
0. Imports
###Code
import pandas as pd
import numpy as np
from scipy import stats # statistics
###Output
_____no_output_____
###Markdown
1. Load CSV
###Code
# change to your file location
df = pd.read_csv('/content/drive/MyDrive/Škola/DM/evalvacia_modelu_IV_b/MLM_vstup.csv', ';', usecols=range(0,13))
df_stats = pd.read_csv('/content/drive/MyDrive/Škola/DM/evalvacia_modelu_IV_b/MLM_ZAM_stats.csv', ';', usecols=range(0,10))
# fiter for students
df = df[(df['HODINA'] > 6) & (df['HODINA'] <= 22) & (df['ZAM'] == 1) & (df['KATEGORIA'].isin(['uvod', 'studium', 'o_fakulte', 'oznamy']))]
# empty dict to save created crosstables
dfDict = {}
###Output
_____no_output_____
###Markdown
2. Create crosstables*Crosstable - PO*
###Code
df1 = df[(df['PO'] == 1)]
crosstable = pd.crosstab(df1['HODINA'], df1['KATEGORIA'], values=df1['PO'], margins=True,
dropna=False, aggfunc='count').reset_index().fillna(0)
# Add missing line
crosstable = crosstable.append({'HODINA': 18, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 19, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 20, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
# Add PO crosstable into dict
dfDict['PO'] = crosstable
###Output
_____no_output_____
###Markdown
*Crosstable - UT*
###Code
df1 = df[(df['UT'] == 1)]
crosstable = pd.crosstab(df1['HODINA'], df1['KATEGORIA'], values=df1['UT'], margins=True,
dropna=False, aggfunc='count').reset_index().fillna(0)
# Add missing line
crosstable = crosstable.append({'HODINA': 19, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 20, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 21, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 22, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
# Add UT crosstable into dict
dfDict['UT'] = crosstable
###Output
_____no_output_____
###Markdown
*Crosstable - STR*
###Code
df1 = df[(df['STR'] == 1)]
crosstable = pd.crosstab(df1['HODINA'], df1['KATEGORIA'], values=df1['STR'], margins=True,
dropna=False, aggfunc='count').reset_index().fillna(0)
# Add missing line
crosstable = crosstable.append({'HODINA': 17, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 20, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 21, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 22, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
# Add STR crosstable into dict
dfDict['STR'] = crosstable
###Output
_____no_output_____
###Markdown
*Crosstable - STVR*
###Code
df1 = df[(df['STVR'] == 1)]
crosstable = pd.crosstab(df1['HODINA'], df1['KATEGORIA'], values=df1['STVR'], margins=True,
dropna=False, aggfunc='count').reset_index().fillna(0)
# Add missing lines
crosstable = crosstable.append({'HODINA': 18, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 19, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 20, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 21, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 22, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
# Add STVR crosstable into dict
dfDict['STVR'] = crosstable
###Output
_____no_output_____
###Markdown
*Crosstable - PIA*
###Code
df1 = df[(df['PIA'] == 1)]
crosstable = pd.crosstab(df1['HODINA'], df1['KATEGORIA'], values=df1['PIA'], margins=True,
dropna=False, aggfunc='count').reset_index().fillna(0)
# Add missing lines
crosstable = crosstable.append({'HODINA': 16, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 17, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 18, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 19, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 20, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 21, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
crosstable = crosstable.append({'HODINA': 22, 'o_fakulte': 0, 'oznamy': 0, 'studium': 0, 'uvod': 0, 'All': 0}, ignore_index=True)
# Add PIA crosstable into dict
dfDict['PIA'] = crosstable
###Output
_____no_output_____
###Markdown
3. Create collection of weekdays
###Code
days = ['PO', 'UT', 'STR', 'STVR', 'PIA']
###Output
_____no_output_____
###Markdown
4. Calculate differences
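The loop below back-transforms the fitted multinomial-logit coefficients (read from `MLM_ZAM_stats.csv`) into predicted category probabilities, with `o_fakulte` as the reference category. For each hour $h$ and weekday, the linear predictor is $\eta_k = \beta_0 + \beta_1 h + \beta_2 h^2 + \beta_{day}$, the reference probability is $P(\text{o\_fakulte}) = 1/(1 + \sum_k e^{\eta_k})$, and the remaining categories follow as $P(k) = e^{\eta_k}\,P(\text{o\_fakulte})$. These model estimates are then compared with the empirical relative abundances taken from the crosstables.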
###Code
# Dataframes for empirical relative abundance
df1 = pd.DataFrame()
df2 = pd.DataFrame()
df3 = pd.DataFrame()
df4 = pd.DataFrame()
# Dataframes estimates for web parts
df1_estimate = pd.DataFrame()
df2_estimate = pd.DataFrame()
df3_estimate = pd.DataFrame()
df4_estimate = pd.DataFrame()
index = 0
# Cycle through hours from 7 to 23
for x in range (7,23):
# Rows for empirical relative abundance
new_row_uvod = {}
new_row_studium = {}
new_row_oznamy = {}
new_row_fakulte = {}
# Rows for estimations
new_row_uvod_estimate = {}
new_row_studium_estimate = {}
new_row_oznamy_estimate = {}
new_row_fakulte_estimate = {}
i = 1
# Cycle through weekdays
for day in days:
# Create logits estimates
logit_uvod = df_stats.at[index, 'Intercept'] + df_stats.at[index, 'HODINA']*x+df_stats.at[index, 'HODINA_STV']*(x*x)+df_stats.at[index, day]
logit_studium = df_stats.at[index+1, 'Intercept'] + df_stats.at[index+1, 'HODINA']*x+df_stats.at[index+1, 'HODINA_STV']*(x*x)+df_stats.at[index+1, day]
logit_oznamy = df_stats.at[index+2, 'Intercept'] + df_stats.at[index+2, 'HODINA']*x+df_stats.at[index+2, 'HODINA_STV']*(x*x)+df_stats.at[index+2, day]
reference_web = 1 / (1 + np.exp(logit_uvod) + np.exp(logit_studium) + np.exp(logit_oznamy))
# Create estimates for web parts
estimate_uvod = np.exp(logit_uvod) * reference_web
estimate_studium = np.exp(logit_studium) * reference_web
estimate_oznamy = np.exp(logit_oznamy) * reference_web
        estimate_fakulte = reference_web  # reference-category probability: 1 / (1 + sum of exp(logits))
# Get current crosstable
crosstable = dfDict[day]
crosstable = crosstable[(crosstable['HODINA'] == x)]
crosstable_all = crosstable.iloc[0]['All']
# Empirical relative abundance
if(crosstable_all == 0):
dij_uvod = 0
dij_studium = 0
dij_oznamy = 0
dij_fakulte = 0
else:
dij_uvod = crosstable.iloc[0]['uvod'] / crosstable_all
dij_studium = crosstable.iloc[0]['studium'] / crosstable_all
dij_oznamy = crosstable.iloc[0]['oznamy'] / crosstable_all
dij_fakulte = crosstable.iloc[0]['o_fakulte'] / crosstable_all
den = str(i) + '_' + day
# Add data to new rows
# Empirical
new_row_uvod.update({den: dij_uvod})
new_row_studium.update({den: dij_studium})
new_row_oznamy.update({den: dij_oznamy})
new_row_fakulte.update({den: dij_fakulte})
# Estimations
new_row_uvod_estimate.update({den: estimate_uvod})
new_row_studium_estimate.update({den: estimate_studium})
new_row_oznamy_estimate.update({den: estimate_oznamy})
new_row_fakulte_estimate.update({den: estimate_fakulte})
i = i + 1
# Append time and ext to rows
new_row_uvod.update({'0_hod': x})
new_row_studium.update({'0_hod': x})
new_row_oznamy.update({'0_hod': x})
new_row_fakulte.update({'0_hod': x})
new_row_uvod_estimate.update({'0_hod': x})
new_row_studium_estimate.update({'0_hod': x})
new_row_oznamy_estimate.update({'0_hod': x})
new_row_fakulte_estimate.update({'0_hod': x})
# Update dataframes
df1 = df1.append(new_row_uvod, sort=False, ignore_index=True)
df2 = df2.append(new_row_studium, sort=False, ignore_index=True)
df3 = df3.append(new_row_oznamy, sort=False, ignore_index=True)
df4 = df4.append(new_row_fakulte, sort=False, ignore_index=True)
df1_estimate = df1_estimate.append(new_row_uvod_estimate, sort=False, ignore_index=True)
df2_estimate = df2_estimate.append(new_row_studium_estimate, sort=False, ignore_index=True)
df3_estimate = df3_estimate.append(new_row_oznamy_estimate, sort=False, ignore_index=True)
df4_estimate = df4_estimate.append(new_row_fakulte_estimate, sort=False, ignore_index=True)
df1.head()
###Output
_____no_output_____
###Markdown
5. Create collection of weekdays with numbers
###Code
days = ['1_PO', '2_UT', '3_STR', '4_STVR', '5_PIA']
###Output
_____no_output_____
###Markdown
6. Print WilcoxonResult for: *Uvod*
###Code
for day in days:
print(stats.wilcoxon(df1[day], df1_estimate[day]))
###Output
WilcoxonResult(statistic=53.0, pvalue=0.43796657516602056)
WilcoxonResult(statistic=62.0, pvalue=0.7563688628810696)
WilcoxonResult(statistic=19.0, pvalue=0.011285575373529618)
WilcoxonResult(statistic=30.0, pvalue=0.049421966979675956)
WilcoxonResult(statistic=25.0, pvalue=0.026183648097068732)
###Markdown
7. Print WilcoxonResult for: *Studium*
###Code
for day in days:
print(stats.wilcoxon(df2[day], df2_estimate[day]))
###Output
WilcoxonResult(statistic=24.0, pvalue=0.022894784183124583)
WilcoxonResult(statistic=16.0, pvalue=0.007169734292803208)
WilcoxonResult(statistic=23.0, pvalue=0.019970875425605675)
WilcoxonResult(statistic=11.0, pvalue=0.0032045855456292547)
WilcoxonResult(statistic=11.0, pvalue=0.0032045855456292547)
###Markdown
8. Print WilcoxonResult for: *Oznamy*
###Code
for day in days:
print(stats.wilcoxon(df3[day], df3_estimate[day]))
###Output
WilcoxonResult(statistic=16.0, pvalue=0.007169734292803208)
WilcoxonResult(statistic=14.0, pvalue=0.005233909190788298)
WilcoxonResult(statistic=33.0, pvalue=0.07032573521121915)
WilcoxonResult(statistic=44.0, pvalue=0.21460188629190957)
WilcoxonResult(statistic=25.0, pvalue=0.026183648097068732)
###Markdown
9. Print WilcoxonResult for: *Fakulta*
###Code
for day in days:
print(stats.wilcoxon(df4[day], df4_estimate[day]))
###Output
WilcoxonResult(statistic=34.0, pvalue=0.07873081119613402)
WilcoxonResult(statistic=56.0, pvalue=0.5349252131384397)
WilcoxonResult(statistic=16.0, pvalue=0.007169734292803208)
WilcoxonResult(statistic=13.0, pvalue=0.004455352355471741)
WilcoxonResult(statistic=5.0, pvalue=0.0011233790034369743)
|
codici/.ipynb_checkpoints/kmeans-checkpoint.ipynb | ###Markdown
k-means clustering
###Code
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import scipy as sc
import scipy.stats as stats
from scipy.spatial.distance import euclidean
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
plt.style.use('fivethirtyeight')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
colors = ['#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b', '#810f7c',
'#137e6d', '#be0119', '#3b638c', '#af6f09', '#008fd5', '#fc4f30', '#e5ae38', '#6d904f', '#8b8b8b',
'#810f7c', '#137e6d', '#be0119', '#3b638c', '#af6f09']
cmap = mcolors.LinearSegmentedColormap.from_list("", ["#82cafc", "#069af3", "#0485d1", colors[0], colors[8]])
rv0 = stats.multivariate_normal(mean=[3, 3], cov=[[.3, .3],[.3,.4]])
rv1 = stats.multivariate_normal(mean=[1.5, 1], cov=[[.5, -.5],[-.5,.7]])
rv2 = stats.multivariate_normal(mean=[0, 1.2], cov=[[.15, .1],[.1,.3]])
rv3 = stats.multivariate_normal(mean=[3.2, 1], cov=[[.2, 0],[0,.1]])
z0 = rv0.rvs(size=300)
z1 = rv1.rvs(size=300)
z2 = rv2.rvs(size=300)
z3 = rv3.rvs(size=300)
z=np.concatenate((z0, z1, z2, z3), axis=0)
fig, ax = plt.subplots()
ax.scatter(z0[:,0], z0[:,1], s=40, color='C0', alpha =.8, edgecolors='k', label=r'$C_0$')
ax.scatter(z1[:,0], z1[:,1], s=40, color='C1', alpha =.8, edgecolors='k', label=r'$C_1$')
ax.scatter(z2[:,0], z2[:,1], s=40, color='C2', alpha =.8, edgecolors='k', label=r'$C_2$')
ax.scatter(z3[:,0], z3[:,1], s=40, color='C3', alpha =.8, edgecolors='k', label=r'$C_3$')
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.legend()
cc='xkcd:strawberry'
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.8)
plt.ylabel('$x_2$', fontsize=12)
plt.xlabel('$x_1$', fontsize=12)
plt.title('Data set', fontsize=12)
plt.show()
# Number of clusters
nc = 3
# X coordinates of random centroids
C_x = np.random.sample(nc)*(np.max(z[:,0])-np.min(z[:,0]))*.7+np.min(z[:,0])*.7
# Y coordinates of random centroids
C_y = np.random.sample(nc)*(np.max(z[:,1])-np.min(z[:,1]))*.7+np.min(z[:,1])*.7
C = np.array(list(zip(C_x, C_y)), dtype=np.float32)
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
plt.scatter(z[:,0], z[:,1], s=40, color=cc, edgecolors='k', alpha=.5)
for i in range(nc):
plt.scatter(C_x[i], C_y[i], marker='*', s=500, c=colors[i], edgecolors='k', linewidth=1.5)
plt.ylabel('$x_2$', fontsize=12)
plt.xlabel('$x_1$', fontsize=12)
plt.title('Data set', fontsize=12)
plt.show()
C_list = []
errors = []
# Cluster Labels(0, 1, 2, 3)
clusters = np.zeros(z.shape[0])
C_list.append(C)
# Error func. - Distance between new centroids and old centroids
error = np.linalg.norm([euclidean(C[i,:], [0,0]) for i in range(nc)])
errors.append(error)
print("Error: {0:3.5f}".format(error))
for l in range(5):
# Assigning each value to its closest cluster
for i in range(z.shape[0]):
distances = [euclidean(z[i,:], C[j,:]) for j in range(nc)]
cluster = np.argmin(distances)
clusters[i] = cluster
    # Reset the centroid array before recomputing the new centroids
C = np.zeros([nc,2])
# Finding the new centroids by taking the average value
for i in range(nc):
points = [z[j,:] for j in range(z.shape[0]) if clusters[j] == i]
C[i] = np.mean(points, axis=0)
error = np.linalg.norm([euclidean(C[i,:], C_list[-1][i,:]) for i in range(nc)])
errors.append(error)
C_list.append(C)
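# Optional cross-check (a sketch; assumes scikit-learn is installed): sklearn's KMeans
# should recover centroids close to C for this well-separated synthetic data.
from sklearn.cluster import KMeans
km = KMeans(n_clusters=nc, n_init=10, random_state=0).fit(z)
print("sklearn centroids:\n", km.cluster_centers_)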
fig = plt.figure(figsize=(16,8))
ax = fig.gca()
for cl in range(nc):
z1 = z[clusters==cl]
plt.scatter(z1[:,0],z1[:,1], c=colors[cl], marker='o', s=40, edgecolors='k', alpha=.7)
for i in range(nc):
plt.scatter(C[i,0], C[i,1], marker='*', s=400, c=colors[i], edgecolors='k', linewidth=1.5)
plt.ylabel('$x_2$', fontsize=12)
plt.xlabel('$x_1$', fontsize=12)
plt.title('Data set', fontsize=12)
plt.show()
C_list
print("Error: {0:3.5f}".format(error))
errors
###Output
_____no_output_____ |
code/ch07/07_visualization.ipynb | ###Markdown
Python for Financial Data ScienceDr Yves J Hilpisch | The Python Quants GmbHhttp://tpq.io | [email protected] Data Visualization
###Code
import matplotlib as mpl
mpl.__version__
import matplotlib.pyplot as plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%matplotlib inline
###Output
_____no_output_____
###Markdown
Static 2D Plotting One-Dimensional Data Set
###Code
import numpy as np
np.random.seed(1000)
y = np.random.standard_normal(20)
x = np.arange(len(y))
plt.plot(x, y);
# plt.savefig('../../images/ch07/mpl_01')
plt.plot(y);
# plt.savefig('../../images/ch07/mpl_02')
plt.plot(y.cumsum());
# plt.savefig('../../images/ch07/mpl_03')
plt.plot(y.cumsum())
plt.grid(False);
# plt.savefig('../../images/ch07/mpl_04')
plt.plot(y.cumsum())
plt.xlim(-1, 20)
plt.ylim(np.min(y.cumsum()) - 1,
np.max(y.cumsum()) + 1);
# plt.savefig('../../images/ch07/mpl_05')
plt.figure(figsize=(10, 6))
plt.plot(y.cumsum(), 'b', lw=1.5)
plt.plot(y.cumsum(), 'ro')
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
# plt.savefig('../../images/ch07/mpl_06')
###Output
_____no_output_____
###Markdown
Two-Dimensional Data Set
###Code
y = np.random.standard_normal((20, 2)).cumsum(axis=0)
plt.figure(figsize=(10, 6))
plt.plot(y, lw=1.5)
plt.plot(y, 'ro')
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
# plt.savefig('../../images/ch07/mpl_07')
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 1], lw=1.5, label='2nd')
plt.plot(y, 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
# plt.savefig('../../images/ch07/mpl_08')
y[:, 0] = y[:, 0] * 100
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 1], lw=1.5, label='2nd')
plt.plot(y, 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
# plt.savefig('../../images/ch07/mpl_09')
fig, ax1 = plt.subplots()
plt.plot(y[:, 0], 'b', lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=8)
plt.xlabel('index')
plt.ylabel('value 1st')
plt.title('A Simple Plot')
ax2 = ax1.twinx()
plt.plot(y[:, 1], 'g', lw=1.5, label='2nd')
plt.plot(y[:, 1], 'ro')
plt.legend(loc=0)
plt.ylabel('value 2nd');
# plt.savefig('../../images/ch07/mpl_10')
plt.figure(figsize=(10, 6))
plt.subplot(211)
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=0)
plt.ylabel('value')
plt.title('A Simple Plot')
plt.subplot(212)
plt.plot(y[:, 1], 'g', lw=1.5, label='2nd')
plt.plot(y[:, 1], 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value');
# plt.savefig('../../images/ch07/mpl_11')
plt.figure(figsize=(10, 6))
plt.subplot(121)
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value')
plt.title('1st Data Set')
plt.subplot(122)
plt.bar(np.arange(len(y)), y[:, 1], width=0.5,
color='g', label='2nd')
plt.legend(loc=0)
plt.xlabel('index')
plt.title('2nd Data Set');
# plt.savefig('../../images/ch07/mpl_12')
###Output
_____no_output_____
###Markdown
Other Plot Styles
###Code
y = np.random.standard_normal((1000, 2))
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], y[:, 1], 'ro')
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
# plt.savefig('../../images/ch07/mpl_13')
plt.figure(figsize=(10, 6))
plt.scatter(y[:, 0], y[:, 1], marker='o')
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
# plt.savefig('../../images/ch07/mpl_14')
c = np.random.randint(0, 10, len(y))
plt.figure(figsize=(10, 6))
plt.scatter(y[:, 0], y[:, 1],
c=c,
cmap='coolwarm',
marker='o')
plt.colorbar()
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
# plt.savefig('../../images/ch07/mpl_15')
plt.figure(figsize=(10, 6))
plt.hist(y, label=['1st', '2nd'], bins=25)
plt.legend(loc=0)
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Histogram');
# plt.savefig('../../images/ch07/mpl_16')
plt.figure(figsize=(10, 6))
plt.hist(y, label=['1st', '2nd'], color=['b', 'g'],
stacked=True, bins=20, alpha=0.5)
plt.legend(loc=0)
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Histogram');
# plt.savefig('../../images/ch07/mpl_17')
fig, ax = plt.subplots(figsize=(10, 6))
plt.boxplot(y)
plt.setp(ax, xticklabels=['1st', '2nd'])
plt.xlabel('data set')
plt.ylabel('value')
plt.title('Boxplot');
# plt.savefig('../../images/ch07/mpl_18')
def func(x):
return 0.5 * np.exp(x) + 1
a, b = 0.5, 1.5
x = np.linspace(0, 2)
y = func(x)
Ix = np.linspace(a, b)
Iy = func(Ix) # <6>
verts = [(a, 0)] + list(zip(Ix, Iy)) + [(b, 0)]
from matplotlib.patches import Polygon
fig, ax = plt.subplots(figsize=(10, 6))
plt.plot(x, y, 'b', linewidth=2)
plt.ylim(ymin=0)
poly = Polygon(verts, facecolor='0.7', edgecolor='0.5')
ax.add_patch(poly)
plt.text(0.5 * (a + b), 1, r'$\int_a^b f(x)\mathrm{d}x$',
horizontalalignment='center', fontsize=20)
plt.figtext(0.9, 0.075, '$x$')
plt.figtext(0.075, 0.9, '$f(x)$')
ax.set_xticks((a, b))
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([func(a), func(b)])
ax.set_yticklabels(('$f(a)$', '$f(b)$'))
# plt.savefig('../../images/ch07/mpl_19')
###Output
_____no_output_____
###Markdown
Static 3D Plotting
###Code
strike = np.linspace(50, 150, 24)
ttm = np.linspace(0.5, 2.5, 24)
strike, ttm = np.meshgrid(strike, ttm)
strike[:2].round(1)
iv = (strike - 100) ** 2 / (100 * strike) / ttm
iv[:5, :3]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10, 6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(strike, ttm, iv, rstride=2, cstride=2,
cmap=plt.cm.coolwarm, linewidth=0.5,
antialiased=True)
ax.set_xlabel('strike')
ax.set_ylabel('time-to-maturity')
ax.set_zlabel('implied volatility')
fig.colorbar(surf, shrink=0.5, aspect=5);
# plt.savefig('../../images/ch07/mpl_20')
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111, projection='3d')
ax.view_init(30, 60)
ax.scatter(strike, ttm, iv, zdir='z', s=25,
c='b', marker='^')
ax.set_xlabel('strike')
ax.set_ylabel('time-to-maturity')
ax.set_zlabel('implied volatility');
# plt.savefig('../../images/ch07/mpl_21')
###Output
_____no_output_____
###Markdown
Interactive 2D Plotting Basic Plots
###Code
import pandas as pd
import cufflinks as cf
cf.set_config_file(offline=True)
a = np.random.standard_normal((250, 5)).cumsum(axis=0)
index = pd.date_range('2019-1-1',
freq='B',
periods=len(a)) # <4>
df = pd.DataFrame(100 + 5 * a,
columns=list('abcde'),
index=index)
df.head()
df.plot()
df.iplot()
df[['a', 'b']].iplot(
theme='polar',
title='A Time Series Plot',
xTitle='date',
yTitle='value',
mode={'a': 'markers', 'b': 'lines+markers'},
symbol={'a': 'dot', 'b': 'diamond'},
size=3.5,
colors={'a': 'blue', 'b': 'magenta'},
)
df.iplot(kind='hist',
subplots=True,
bins=15,
)
###Output
_____no_output_____
###Markdown
Financial Plotting
###Code
# data from FXCM Forex Capital Markets Ltd.
raw = pd.read_csv('http://hilpisch.com/fxcm_eur_usd_eod_data.csv',
index_col=0, parse_dates=True)
raw.info()
quotes = raw[['OpenAsk', 'HighAsk', 'LowAsk', 'CloseAsk']]
quotes = quotes.iloc[-60:]
quotes.tail()
qf = cf.QuantFig(
quotes,
title='EUR/USD Exchange Rate',
legend='top',
name='EUR/USD'
)
qf.iplot()
qf.add_bollinger_bands(periods=15,
boll_std=2)
qf.iplot()
qf.add_rsi(periods=14,
showbands=False)
qf.iplot()
###Output
_____no_output_____
###Markdown
Python for Finance (2nd ed.) | **Mastering Data-Driven Finance** | © Dr. Yves J. Hilpisch | The Python Quants GmbH | Data Visualization
###Code
import matplotlib as mpl
mpl.__version__
import matplotlib.pyplot as plt
plt.style.use('seaborn') # set the plot style
mpl.rcParams['font.family'] = 'serif' # set the font family
%matplotlib inline
###Output
_____no_output_____
###Markdown
Static 2D Plotting One-Dimensional Data Set
###Code
import numpy as np
np.random.seed(1000)
y = np.random.standard_normal(20)
y
x = np.arange(1, len(y)+1)
plt.plot(x, y); # line plot
plt.savefig('H:/py4fi/images/ch07/mpl_01')
plt.plot(y);
plt.savefig('H:/py4fi/images/ch07/mpl_02')
plt.plot(y.cumsum());
plt.savefig('H:/py4fi/images/ch07/mpl_03')
plt.plot(y.cumsum())
plt.grid(False) # no grid lines
plt.axis('equal'); # Lead to equal scaling for the two axes
plt.savefig('H:/py4fi/images/ch07/mpl_04')
plt.plot?
plt.plot(y.cumsum())
# set the plot limits
plt.xlim(-1, 20) # x-axis range from -1 to 20
plt.ylim(np.min(y.cumsum()) - 1,
         np.max(y.cumsum()) + 1); # y-axis range: from the minimum - 1 to the maximum + 1
plt.savefig('H:/py4fi/images/ch07/mpl_05')
plt.figure(figsize=(10, 6)) # figure size
# overlay the following two plots in a single figure
plt.plot(y.cumsum(), 'b', lw=1.5) # 'b': blue line, lw=1.5: line width 1.5
plt.plot(y.cumsum(), 'ro') # 'ro': red dots
plt.xlabel('index') # x-axis label 'index'
plt.ylabel('value') # y-axis label 'value'
plt.title('A Simple Plot'); # plot title
plt.savefig('H:/py4fi/images/ch07/mpl_06')
###Output
_____no_output_____
###Markdown
Two-Dimensional Data Set* Two data sets might have such a different scaling that they cannot be plotted using the same y- and/or x-axis scaling.* Another issue might be that one might want to visualize two different data sets in different ways, e.g., one by a line plot and the other by a bar plot
###Code
y = np.random.standard_normal((20, 2)).cumsum(axis=0)
y
plt.figure(figsize=(10, 6))
plt.plot(y, lw=1.5)
plt.plot(y, 'ro')
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_07')
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 1], lw=1.5, label='2nd')
plt.plot(y, 'ro')
plt.legend(loc=0) # 0 stands for best location (the legend is placed in an empty area)
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_08')
y[:, 0] = y[:, 0] * 100
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 1], lw=1.5, label='2nd')
plt.plot(y, 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value')
plt.title('A Simple Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_09')
fig, ax1 = plt.subplots()
plt.plot(y[:, 0], 'b', lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=8)
plt.xlabel('index')
plt.ylabel('value 1st')
plt.title('A Simple Plot')
ax2 = ax1.twinx() # Creates a second axis object that shares the x-axis.
plt.plot(y[:, 1], 'g', lw=1.5, label='2nd')
plt.plot(y[:, 1], 'ro')
plt.legend(loc=0)
plt.ylabel('value 2nd');
plt.savefig('H:/py4fi/images/ch07/mpl_10')
ax1.twinx?
plt.figure(figsize=(10, 6))
plt.subplot(211) # 2 rows, 1 column, first subplot
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=0)
plt.ylabel('value')
plt.title('A Simple Plot')
plt.subplot(212) # 2 rows, 1 column, second subplot
plt.plot(y[:, 1], 'g', lw=1.5, label='2nd')
plt.plot(y[:, 1], 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value');
plt.savefig('H:/py4fi/images/ch07/mpl_11')
plt.figure(figsize=(10, 6))
plt.subplot(121)
plt.plot(y[:, 0], lw=1.5, label='1st')
plt.plot(y[:, 0], 'ro')
plt.legend(loc=0)
plt.xlabel('index')
plt.ylabel('value')
plt.title('1st Data Set')
plt.subplot(122)
plt.bar(np.arange(len(y)), y[:, 1], width=0.5, # bar chart parameters
color='g', label='2nd')
plt.legend(loc=0)
plt.xlabel('index')
plt.title('2nd Data Set');
plt.savefig('H:/py4fi/images/ch07/mpl_12')
###Output
_____no_output_____
###Markdown
Other Plot Styles
###Code
y = np.random.standard_normal((1000, 2))
plt.figure(figsize=(10, 6))
plt.plot(y[:, 0], y[:, 1], 'ro') # scatter plot drawn via the line-plot function
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_13')
plt.figure(figsize=(10, 6))
plt.scatter(y[:, 0], y[:, 1], marker='o') # scatter plot using plt.scatter()
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_14')
c = np.random.randint(0, 10, len(y))
plt.figure(figsize=(10, 6))
plt.scatter(y[:, 0], y[:, 1],
            c=c, # color values: random integers 0, 1, ..., 9 (the keyword c sets the color)
            cmap='coolwarm', # the coolwarm colormap
            marker='o')
plt.colorbar() # color bar on the right
plt.xlabel('1st')
plt.ylabel('2nd')
plt.title('Scatter Plot');
plt.savefig('H:/py4fi/images/ch07/mpl_15')
plt.figure(figsize=(10, 6))
plt.hist(y, label=['1st', '2nd'], bins=25) # histogram with 25 bins
plt.legend(loc=0)
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Histogram');
plt.savefig('H:/py4fi/images/ch07/mpl_16')
plt.figure(figsize=(10, 6))
plt.hist(y, label=['1st', '2nd'], color=['b', 'g'],
         stacked=True, bins=20, alpha=0.5) # alpha: transparency; the histograms are stacked
plt.legend(loc=0)
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Histogram');
plt.savefig('H:/py4fi/images/ch07/mpl_17')
fig, ax = plt.subplots(figsize=(10, 6))
plt.boxplot(y) # box plot
plt.setp(ax, xticklabels=['1st', '2nd'])
plt.xlabel('data set')
plt.ylabel('value')
plt.title('Boxplot');
plt.savefig('H:/py4fi/images/ch07/mpl_18')
plt.setp?
def func(x):
return 0.5 * np.exp(x) + 1
a, b = 0.5, 1.5 # The integral limits.
x = np.linspace(0, 2) # The x values to plot the function (50 points by default).
y = func(x)
Ix = np.linspace(a, b)
Iy = func(Ix)
verts = [(a, 0)] + list(zip(Ix, Iy)) + [(b, 0)] # polygon vertices built with zip()
Ix
Iy
verts
len(verts)
from matplotlib.patches import Polygon # Polygon patch
fig, ax = plt.subplots(figsize=(10, 6))
plt.plot(x, y, 'b', linewidth=2)
plt.ylim(bottom=0)
from matplotlib.patches import Polygon
fig, ax = plt.subplots(figsize=(10, 6))
poly = Polygon([(0.5, 0), (0.6, 0), (0.6, func(0.6)-1), (0.5, func(0.5)-1)])
ax.add_patch(poly) # draw the polygon
from matplotlib.patches import Polygon
fig, ax = plt.subplots(figsize=(10, 6))
poly = Polygon([(0.5, 0), (0.6, 0),(0.5, func(0.5)-1),(0.6,func(0.6)-1)])
ax.add_patch(poly) # draw the polygon
from matplotlib.patches import Polygon
fig, ax = plt.subplots(figsize=(10, 6))
plt.plot(x, y,'b', lw=0.2)
plt.ylim(bottom=0)
poly = Polygon(verts)
ax.add_patch(poly) # draw the polygon
from matplotlib.patches import Polygon
fig, ax = plt.subplots(figsize=(10, 6))
plt.plot(x, y, 'b', linewidth=2)
plt.ylim(bottom=0)
poly = Polygon(verts, facecolor='0.7', edgecolor='0.5') # Plots the polygon (integral area) in gray
ax.add_patch(poly)
plt.text(0.5 * (a + b), 1, r'$\int_a^b f(x)\mathrm{d}x$', # Places the integral formula in the plot.
horizontalalignment='center', fontsize=20)
plt.figtext(0.9, 0.075, '$x$')
plt.figtext(0.075, 0.9, '$f(x)$')
ax.set_xticks((a, b)) # tick positions on the x-axis
ax.set_xticklabels(('$a$', '$b$'))
ax.set_yticks([func(a), func(b)]) # tick positions on the y-axis
ax.set_yticklabels(('$f(a)$', '$f(b)$'));
plt.savefig('H:/py4fi/images/ch07/mpl_19')
Polygon?
plt.text?
plt.figtext?
###Output
_____no_output_____
###Markdown
Static 3D Plotting
###Code
strike = np.linspace(50, 150, 24)
strike
ttm = np.linspace(0.5, 2.5, 24)
ttm
strike, ttm = np.meshgrid(strike, ttm) # meshgrid spreads the two 1D arrays into a 2D grid of points
strike
ttm
strike.shape
np.meshgrid?
strike[:2].round(1)
iv = (strike - 100) ** 2 / (100 * strike) / ttm
iv.shape
iv[:5, :3]
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(10, 6))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(strike, ttm, iv, rstride=2, cstride=2, # row stride 2, column stride 2
                       cmap=plt.cm.coolwarm, linewidth=0.5, # cmap: colormap
                       antialiased=True) # anti-aliasing
ax.set_xlabel('strike')
ax.set_ylabel('time-to-maturity')
ax.set_zlabel('implied volatility')
fig.colorbar(surf, shrink=0.5, aspect=5); # colorbar properties
plt.savefig('H:/py4fi/images/ch07/mpl_20')
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(111, projection='3d')
# fig, ax = plt.subplot()
ax.view_init(30, 60) # viewing angles of the 3D plot
ax.scatter(strike, ttm, iv, zdir='z', s=25,
c='b', marker='^')
ax.set_xlabel('strike')
ax.set_ylabel('time-to-maturity')
ax.set_zlabel('implied volatility');
plt.savefig('H:/py4fi/images/ch07/mpl_21')
###Output
_____no_output_____
###Markdown
Interactive 2D Plotting* pip install cufflinks* pip install plotly* The section focuses on selected aspects only, in that Cufflinks is used exclusively to create interactive plots from data stored in DataFrame objects. Basic Plots
###Code
import pandas as pd
import cufflinks as cf
import plotly.offline as plyo
plyo.init_notebook_mode(connected=True)
a = np.random.standard_normal((250, 5)).cumsum(axis=0)
index = pd.date_range('2019-1-1',
freq='B',
periods=len(a))
df = pd.DataFrame(100 + 5 * a,
columns=list('abcde'),
index=index)
df.head()
plyo.iplot(
df.iplot(asFigure=True),
image='png',
filename='ply_01'
)
plyo.iplot(
df[['a', 'b']].iplot(asFigure=True,
theme='polar',
title='A Time Series Plot',
xTitle='date',
yTitle='value',
mode={'a': 'markers', 'b': 'lines+markers'},
symbol={'a': 'circle', 'b': 'diamond'},
size=3.5,
colors={'a': 'blue', 'b': 'magenta'},
),
image='png',
filename='ply_02'
)
plyo.iplot(
df.iplot(kind='hist',
subplots=True,
bins=15,
asFigure=True),
image='png',
filename='ply_03'
)
###Output
_____no_output_____
###Markdown
Financial Plotting
###Code
# from fxcmpy import fxcmpy_candles_data_reader as cdr
# data = cdr('EURUSD', start='2013-1-1', end='2017-12-31', period='D1', verbosity=True)
# data.get_data().to_csv('../../source/fxcm_eur_usd_eod_data.csv')
pd.read_csv?
# data from FXCM Forex Capital Markets Ltd.
raw = pd.read_csv('fxcm_eur_usd_eod_data.csv',
index_col=0, parse_dates=True)
raw.info()
quotes = raw[['AskOpen', 'AskHigh', 'AskLow', 'AskClose']]
quotes = quotes.iloc[-60:] # last 60 rows, selected by position
quotes.tail()
qf = cf.QuantFig(
quotes,
title='EUR/USD Exchange Rate',
legend='top',
name='EUR/USD'
)
cf.QuantFig?
plyo.iplot(
qf.iplot(asFigure=True),
image='png',
filename='qf_01'
)
qf.add_bollinger_bands(periods=15,
boll_std=2)
plyo.iplot(qf.iplot(asFigure=True),
image='png',
filename='qf_02')
qf.add_rsi(periods=14,
showbands=False)
plyo.iplot(
qf.iplot(asFigure=True),
image='png',
filename='qf_03'
)
###Output
_____no_output_____ |
HeroesOfPymoli/HeroesOfPymoli_starter_code-Copy1.ipynb | ###Markdown
Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import pandas as pd
# File to Load (Remember to Change These)
file_to_load = "Resources/purchase_data.csv"
# Read Purchasing File and store into Pandas data frame
purchase_data = pd.read_csv(file_to_load)
purchase_data
###Output
_____no_output_____ |
docs/load-wordvector.ipynb | ###Markdown
Word Vector This tutorial is available as an IPython notebook at [Malaya/example/wordvector](https://github.com/huseinzol05/Malaya/tree/master/example/wordvector). Pretrained word2vecYou can download Malaya pretrained without need to import malaya. word2vec from local news[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) word2vec from wikipedia[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) word2vec from local social media[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) But If you don't know what to do with malaya word2vec, Malaya provided some useful functions for you!
###Code
%%time
import malaya
%matplotlib inline
###Output
/Users/huseinzolkepli/Documents/Malaya/malaya/preprocessing.py:259: FutureWarning: Possible nested set at position 2289
self.tok = re.compile(r'({})'.format('|'.join(pipeline)))
###Markdown
Load malaya news word2vec```pythondef load_news(): """ Return malaya pretrained local malaysia news word2vec size 256. https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvector Returns ------- vocabulary: indices dictionary for `vector`. vector: np.array, 2D. """```
###Code
vocab_news, embedded_news = malaya.wordvector.load_news()
###Output
_____no_output_____
###Markdown
Load malaya wikipedia word2vec```pythondef load_wiki(): """ Return malaya pretrained wikipedia word2vec size 256. https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvector Returns ------- vocabulary: indices dictionary for `vector`. vector: np.array, 2D. """```
###Code
vocab_wiki, embedded_wiki = malaya.wordvector.load_wiki()
###Output
_____no_output_____
###Markdown
Load word vector interface```pythondef load(embed_matrix, dictionary): """ Return malaya.wordvector._wordvector object. Parameters ---------- embed_matrix: numpy array dictionary: dictionary Returns ------- _wordvector: malaya.wordvector._wordvector object """ ```1. `embed_matrix` must be a 2d,```pythonarray([[ 0.25 , -0.10816103, -0.19881412, ..., 0.40432587, 0.19388093, -0.07062137], [ 0.3231817 , -0.01318745, -0.17950962, ..., 0.25 , 0.08444146, -0.11705721], [ 0.29103908, -0.16274083, -0.20255531, ..., 0.25 , 0.06253044, -0.16404966], ..., [ 0.21346697, 0.12686132, -0.4029543 , ..., 0.43466234, 0.20910986, -0.32219803], [ 0.2372157 , 0.32420087, -0.28036436, ..., 0.2894639 , 0.20745888, -0.30600077], [ 0.27907744, 0.35755727, -0.34932107, ..., 0.37472805, 0.42045262, -0.21725406]], dtype=float32)```2. `dictionary`, a dictionary mapped `{'word': 0}`,```python{'mengembanfkan': 394623, 'dipujanya': 234554, 'comicolor': 182282, 'immaz': 538660, 'qabar': 585119, 'phidippus': 180802,}``` Load custom word vectorLike fast-text, example, I download from here, https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ms.vecWe need to parse the data to get `embed_matrix` and `dictionary`.
###Code
import io
import numpy as np
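# The .vec file starts with a header line "vocab_size dimension"; every following line is
# "word v1 v2 ... vd", so we build a word -> row-index dictionary and a matrix of vectors.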
fin = io.open('wiki.ms.vec', 'r', encoding='utf-8', newline='\n', errors='ignore')
n, d = map(int, fin.readline().split())
data, vectors = {}, []
for no, line in enumerate(fin):
tokens = line.rstrip().split(' ')
data[tokens[0]] = no
vectors.append(list(map(float, tokens[1:])))
vectors = np.array(vectors)
fast_text = malaya.wordvector.load(vectors, data)
word_vector_news = malaya.wordvector.load(embedded_news, vocab_news)
word_vector_wiki = malaya.wordvector.load(embedded_wiki, vocab_wiki)
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/wordvector.py:94: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/wordvector.py:105: The name tf.InteractiveSession is deprecated. Please use tf.compat.v1.InteractiveSession instead.
###Markdown
Check top-k similar semantics based on a word```pythondef n_closest( self, word: str, num_closest: int = 5, metric: str = 'cosine', return_similarity: bool = True,): """ find nearest words based on a word. Parameters ---------- word: str Eg, 'najib' num_closest: int, (default=5) number of words closest to the result. metric: str, (default='cosine') vector distance algorithm. return_similarity: bool, (default=True) if True, will return between 0-1 represents the distance. Returns ------- word_list: list of nearest words """```
###Code
word = 'anwar'
print("Embedding layer: 8 closest words to: '%s' using malaya news word2vec"%(word))
print(word_vector_news.n_closest(word=word, num_closest=8, metric='cosine'))
word = 'anwar'
print("Embedding layer: 8 closest words to: '%s' using malaya wiki word2vec"%(word))
print(word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine'))
###Output
Embedding layer: 8 closest words to: 'anwar' using malaya wiki word2vec
[['rasulullah', 0.6918460130691528], ['jamal', 0.6604709029197693], ['noraniza', 0.65153968334198], ['khalid', 0.6450133323669434], ['mahathir', 0.6447468400001526], ['sukarno', 0.641593337059021], ['wahid', 0.6359774470329285], ['pekin', 0.6262176036834717]]
###Markdown
Check batch top-k similar semantics based on a word```pythondef batch_n_closest( self, words: List[str], num_closest: int = 5, return_similarity: bool = False, soft: bool = True,): """ find nearest words based on a batch of words using Tensorflow. Parameters ---------- words: list Eg, ['najib','anwar'] num_closest: int, (default=5) number of words closest to the result. return_similarity: bool, (default=True) if True, will return between 0-1 represents the distance. soft: bool, (default=True) if True, a word not in the dictionary will be replaced with nearest JaroWinkler ratio. if False, it will throw an exception if a word not in the dictionary. Returns ------- word_list: list of nearest words """```
###Code
words = ['anwar', 'mahathir']
word_vector_news.batch_n_closest(words, num_closest=8,
return_similarity=False)
###Output
_____no_output_____
###Markdown
What happens if a word is not in the dictionary? You can set the parameter `soft` to `True` or `False`. The default is `True`. If `True`, a word not in the dictionary will be replaced with the word that has the nearest JaroWinkler ratio. If `False`, it will throw an exception when a word is not in the dictionary.
###Code
words = ['anwar', 'mahathir','husein-comel']
word_vector_wiki.batch_n_closest(words, num_closest=8,
return_similarity=False,soft=False)
words = ['anwar', 'mahathir','husein-comel']
word_vector_wiki.batch_n_closest(words, num_closest=8,
return_similarity=False,soft=True)
###Output
_____no_output_____
###Markdown
Word2vec calculatorYou can put any equation you wanted.```pythondef calculator( self, equation: str, num_closest: int = 5, metric: str = 'cosine', return_similarity: bool = True,): """ calculator parser for word2vec. Parameters ---------- equation: str Eg, '(mahathir + najib) - rosmah' num_closest: int, (default=5) number of words closest to the result. metric: str, (default='cosine') vector distance algorithm. return_similarity: bool, (default=True) if True, will return between 0-1 represents the distance. Returns ------- word_list: list of nearest words """```
###Code
word_vector_news.calculator('anwar + amerika + mahathir', num_closest=8, metric='cosine',
return_similarity=False)
word_vector_wiki.calculator('anwar + amerika + mahathir', num_closest=8, metric='cosine',
return_similarity=False)
###Output
_____no_output_____
###Markdown
Visualize scatter-plot```pythondef scatter_plot( self, labels, centre: str = None, figsize: Tuple[int, int] = (7, 7), plus_minus: int = 25, handoff: float = 5e-5,): """ plot a scatter plot based on output from calculator / n_closest / analogy. Parameters ---------- labels : list output from calculator / n_closest / analogy centre : str, (default=None) centre label, if a str, it will annotate in a red color. figsize : tuple, (default=(7, 7)) figure size for plot. Returns ------- tsne: np.array, 2D. """```
###Code
word = 'anwar'
result = word_vector_news.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_news.scatter_plot(result, centre = word)
word = 'anwar'
result = word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_wiki.scatter_plot(result, centre = word)
###Output
_____no_output_____
###Markdown
Visualize tree-plot```pythondef tree_plot( self, labels, figsize: Tuple[int, int] = (7, 7), annotate: bool = True): """ plot a tree plot based on output from calculator / n_closest / analogy. Parameters ---------- labels : list output from calculator / n_closest / analogy. visualize : bool if True, it will render plt.show, else return data. figsize : tuple, (default=(7, 7)) figure size for plot. Returns ------- embed: np.array, 2D. labelled: labels for X / Y axis. """```
###Code
word = 'anwar'
result = word_vector_news.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_news.tree_plot(result)
word = 'anwar'
result = word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_wiki.tree_plot(result)
###Output
_____no_output_____
###Markdown
Visualize social-network```pythondef network( self, word, num_closest = 8, depth = 4, min_distance = 0.5, iteration = 300, figsize = (15, 15), node_color = '72bbd0', node_factor = 50,): """ plot a social network based on word given Parameters ---------- word : str centre of social network. num_closest: int, (default=8) number of words closest to the node. depth: int, (default=4) depth of social network. More deeper more expensive to calculate, big^O(num_closest ** depth). min_distance: float, (default=0.5) minimum distance among nodes. Increase the value to increase the distance among nodes. iteration: int, (default=300) number of loops to train the social network to fit min_distace. figsize: tuple, (default=(15, 15)) figure size for plot. node_color: str, (default='72bbd0') color for nodes. node_factor: int, (default=10) size factor for depth nodes. Increase this value will increase nodes sizes based on depth. ```
###Code
g = word_vector_news.network('mahathir', figsize = (10, 10), node_factor = 50, depth = 3)
g = word_vector_wiki.network('mahathir', figsize = (10, 10), node_factor = 50, depth = 3)
###Output
_____no_output_____
###Markdown
Get embedding from a word```pythondef get_vector_by_name( self, word: str, soft: bool = False, topn_soft: int = 5): """ get vector based on string. Parameters ---------- word: str soft: bool, (default=True) if True, a word not in the dictionary will be replaced with nearest JaroWinkler ratio. if False, it will throw an exception if a word not in the dictionary. topn_soft: int, (default=5) if word not found in dictionary, will returned `topn_soft` size of similar size using jarowinkler. Returns ------- vector: np.array, 1D """```
###Code
word_vector_wiki.get_vector_by_name('najib').shape
###Output
_____no_output_____
###Markdown
If a word is not found in the vocabulary, it will throw an exception listing the top-5 nearest words
###Code
word_vector_wiki.get_vector_by_name('husein-comel')
###Output
_____no_output_____
###Markdown
Word Vector This tutorial is available as an IPython notebook at [Malaya/example/wordvector](https://github.com/huseinzol05/Malaya/tree/master/example/wordvector). Pretrained word2vecYou can download Malaya pretrained without need to import malaya. word2vec from local news[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) word2vec from wikipedia[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) word2vec from local social media[size-256](https://github.com/huseinzol05/Malaya/tree/master/pretrained-model/wordvectordownload) But If you don't know what to do with malaya word2vec, Malaya provided some useful functions for you!
###Code
%%time
import malaya
%matplotlib inline
###Output
CPU times: user 4.21 s, sys: 793 ms, total: 5 s
Wall time: 4.11 s
###Markdown
Load malaya news word2vec
###Code
vocab_news, embedded_news = malaya.wordvector.load_news()
###Output
_____no_output_____
###Markdown
Load malaya wikipedia word2vec
###Code
vocab_wiki, embedded_wiki = malaya.wordvector.load_wiki()
###Output
_____no_output_____
###Markdown
Load word vector interface```pythondef load(embed_matrix, dictionary): """ Return malaya.wordvector._wordvector object. Parameters ---------- embed_matrix: numpy array dictionary: dictionary Returns ------- _wordvector: malaya.wordvector._wordvector object """ ```1. `embed_matrix` must be a 2d,```pythonarray([[ 0.25 , -0.10816103, -0.19881412, ..., 0.40432587, 0.19388093, -0.07062137], [ 0.3231817 , -0.01318745, -0.17950962, ..., 0.25 , 0.08444146, -0.11705721], [ 0.29103908, -0.16274083, -0.20255531, ..., 0.25 , 0.06253044, -0.16404966], ..., [ 0.21346697, 0.12686132, -0.4029543 , ..., 0.43466234, 0.20910986, -0.32219803], [ 0.2372157 , 0.32420087, -0.28036436, ..., 0.2894639 , 0.20745888, -0.30600077], [ 0.27907744, 0.35755727, -0.34932107, ..., 0.37472805, 0.42045262, -0.21725406]], dtype=float32)```2. `dictionary`, a dictionary mapped `{'word': 0}`,```python{'mengembanfkan': 394623, 'dipujanya': 234554, 'comicolor': 182282, 'immaz': 538660, 'qabar': 585119, 'phidippus': 180802,}``` Load custom word vectorLike fast-text, example, I download from here, https://dl.fbaipublicfiles.com/fasttext/vectors-wiki/wiki.ms.vecWe need to parse the data to get `embed_matrix` and `dictionary`.
###Code
import io
import numpy as np
fin = io.open('wiki.ms.vec', 'r', encoding='utf-8', newline='\n', errors='ignore')
n, d = map(int, fin.readline().split())
data, vectors = {}, []
for no, line in enumerate(fin):
tokens = line.rstrip().split(' ')
data[tokens[0]] = no
vectors.append(list(map(float, tokens[1:])))
vectors = np.array(vectors)
fast_text = malaya.wordvector.load(vectors, data)
word_vector_news = malaya.wordvector.load(embedded_news, vocab_news)
word_vector_wiki = malaya.wordvector.load(embedded_wiki, vocab_wiki)
###Output
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/wordvector.py:94: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /Users/huseinzolkepli/Documents/Malaya/malaya/wordvector.py:105: The name tf.InteractiveSession is deprecated. Please use tf.compat.v1.InteractiveSession instead.
###Markdown
Check top-k similar semantics based on a word
###Code
word = 'anwar'
print("Embedding layer: 8 closest words to: '%s' using malaya news word2vec"%(word))
print(word_vector_news.n_closest(word=word, num_closest=8, metric='cosine'))
word = 'anwar'
print("Embedding layer: 8 closest words to: '%s' using malaya wiki word2vec"%(word))
print(word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine'))
###Output
Embedding layer: 8 closest words to: 'anwar' using malaya wiki word2vec
[['rasulullah', 0.6918460130691528], ['jamal', 0.6604709029197693], ['noraniza', 0.65153968334198], ['khalid', 0.6450133323669434], ['mahathir', 0.6447468400001526], ['sukarno', 0.641593337059021], ['wahid', 0.6359774470329285], ['pekin', 0.6262176036834717]]
###Markdown
Check batch top-k similar semantics based on a word
###Code
words = ['anwar', 'mahathir']
word_vector_news.batch_n_closest(words, num_closest=8,
return_similarity=False)
###Output
_____no_output_____
###Markdown
What happens if a word is not in the dictionary? You can set the parameter `soft` to `True` or `False`. The default is `True`. If `True`, a word not in the dictionary will be replaced with the word that has the nearest JaroWinkler ratio. If `False`, it will throw an exception when a word is not in the dictionary.
###Code
words = ['anwar', 'mahathir','husein-comel']
word_vector_wiki.batch_n_closest(words, num_closest=8,
return_similarity=False,soft=False)
words = ['anwar', 'mahathir','husein-comel']
word_vector_wiki.batch_n_closest(words, num_closest=8,
return_similarity=False,soft=True)
###Output
_____no_output_____
###Markdown
Word2vec calculatorYou can put any equation you wanted.
###Code
word_vector_news.calculator('anwar + amerika + mahathir', num_closest=8, metric='cosine',
return_similarity=False)
word_vector_wiki.calculator('anwar + amerika + mahathir', num_closest=8, metric='cosine',
return_similarity=False)
###Output
_____no_output_____
###Markdown
Visualize scatter-plot
###Code
word = 'anwar'
result = word_vector_news.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_news.scatter_plot(result, centre = word)
word = 'anwar'
result = word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_wiki.scatter_plot(result, centre = word)
###Output
_____no_output_____
###Markdown
Visualize tree-plot
###Code
word = 'anwar'
result = word_vector_news.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_news.tree_plot(result)
word = 'anwar'
result = word_vector_wiki.n_closest(word=word, num_closest=8, metric='cosine')
data = word_vector_wiki.tree_plot(result)
###Output
_____no_output_____
###Markdown
Visualize social-network```pythondef network( self, word, num_closest = 8, depth = 4, min_distance = 0.5, iteration = 300, figsize = (15, 15), node_color = '72bbd0', node_factor = 50,): """ plot a social network based on word given Parameters ---------- word : str centre of social network. num_closest: int, (default=8) number of words closest to the node. depth: int, (default=4) depth of social network. More deeper more expensive to calculate, big^O(num_closest ** depth). min_distance: float, (default=0.5) minimum distance among nodes. Increase the value to increase the distance among nodes. iteration: int, (default=300) number of loops to train the social network to fit min_distace. figsize: tuple, (default=(15, 15)) figure size for plot. node_color: str, (default='72bbd0') color for nodes. node_factor: int, (default=10) size factor for depth nodes. Increase this value will increase nodes sizes based on depth. ```
###Code
g = word_vector_news.network('mahathir', figsize = (10, 10), node_factor = 50, depth = 3)
g = word_vector_wiki.network('mahathir', figsize = (10, 10), node_factor = 50, depth = 3)
###Output
_____no_output_____
###Markdown
Get embedding from a word
###Code
word_vector_wiki.get_vector_by_name('najib').shape
###Output
_____no_output_____
###Markdown
If a word is not found in the vocabulary, it will throw an exception listing the top-5 nearest words
###Code
word_vector_wiki.get_vector_by_name('husein-comel')
###Output
_____no_output_____ |
Data_Science/Regressao/Linear/Regressao_Linear_Stats.ipynb | ###Markdown
Linear Regression - Stats Model * Linear regression using Python's statsmodels library
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
dados = pd.read_csv('Regresao_Linear.csv')
dados.head()
X = dados['X'].values
Y = dados['Y'].values
plt.scatter(X,Y,label='Y(X)');
plt.xlabel('X');
plt.ylabel('Y');
plt.legend();
###Output
_____no_output_____
###Markdown

###Code
import statsmodels.api as sm
modelo = sm.OLS(Y, X)
resultado = modelo.fit()
print(resultado.summary())
###Output
OLS Regression Results
=======================================================================================
Dep. Variable: y R-squared (uncentered): 0.756
Model: OLS Adj. R-squared (uncentered): 0.754
Method: Least Squares F-statistic: 307.0
Date: Thu, 29 Oct 2020 Prob (F-statistic): 4.23e-32
Time: 17:27:54 Log-Likelihood: -322.75
No. Observations: 100 AIC: 647.5
Df Residuals: 99 BIC: 650.1
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
x1 1.8563 0.106 17.520 0.000 1.646 2.067
==============================================================================
Omnibus: 2.224 Durbin-Watson: 0.394
Prob(Omnibus): 0.329 Jarque-Bera (JB): 1.543
Skew: -0.042 Prob(JB): 0.462
Kurtosis: 2.397 Cond. No. 1.00
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
The model needs an intercept
###Code
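# statsmodels' OLS does not add an intercept automatically;
# sm.add_constant prepends a column of ones so that a constant term is fitted.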
X = sm.add_constant(X)
modelo2 = sm.OLS(Y, X)
resultado2 = modelo2.fit()
print(resultado2.summary())
coef_linear, coef_angular = resultado2.params
reta = coef_angular*X+coef_linear
X = X[:,1]
reta = reta[:,1]
plt.scatter(X,Y,label='Y(X)');
plt.plot(X,reta,label='Ajuste linear',color='red');
plt.xlabel('X');
plt.ylabel('Y');
plt.legend();
from sklearn.metrics import mean_absolute_error,mean_squared_error
MAE = mean_absolute_error(Y,reta)
RMSE = np.sqrt(mean_squared_error(Y,reta))
print("MAE = {:0.2f}".format(MAE))
print("RMSE = {:0.2f}".format(RMSE))
###Output
MAE = 1.89
RMSE = 2.43
|
doc/jupyter_execute/examples/models/disruption_budgets/pdbs_example.ipynb | ###Markdown
Defining Disruption Budgets for Seldon Deployments Prerequisites * A kubernetes cluster with kubectl configured* pygmentize Setup Seldon CoreUse the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.htmlSetup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.htmlAmbassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.htmlInstall-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html).
###Code
!kubectl create namespace seldon
!kubectl config set-context $(kubectl config current-context) --namespace=seldon
###Output
_____no_output_____
###Markdown
Create model with Pod Disruption Budget. To create a model with a Pod Disruption Budget, it is first important to understand how you would like your application to respond to [voluntary disruptions](https://kubernetes.io/docs/concepts/workloads/pods/disruptions/voluntary-and-involuntary-disruptions). Depending on the type of disruption budgeting your application needs, you will define either of the following:* `minAvailable`, which is a description of the number of pods from that set that must still be available after the eviction, even in the absence of the evicted pod. `minAvailable` can be either an absolute number or a percentage.* `maxUnavailable`, which is a description of the number of pods from that set that can be unavailable after the eviction. It can be either an absolute number or a percentage.The full SeldonDeployment spec is shown below.
###Code
!pygmentize model_with_pdb.yaml
!kubectl apply -f model_with_pdb.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model -o jsonpath='{.items[0].metadata.name}')
###Output
_____no_output_____
###Markdown
Validate Disruption Budget Configuration
###Code
import json
def getPdbConfig():
dp = !kubectl get pdb seldon-model-example-0-classifier -o json
dp = json.loads("".join(dp))
return dp["spec"]["maxUnavailable"]
assert getPdbConfig() == 2
!kubectl get pods,deployments,pdb
###Output
_____no_output_____
###Markdown
Update Disruption Budget and Validate ChangeNext, we'll update the maximum number of unavailable pods and check that the PDB is properly updated to match.
###Code
!pygmentize model_with_patched_pdb.yaml
!kubectl apply -f model_with_patched_pdb.yaml
!kubectl rollout status deploy/$(kubectl get deploy -l seldon-deployment-id=seldon-model -o jsonpath='{.items[0].metadata.name}')
assert getPdbConfig() == 1
###Output
_____no_output_____
###Markdown
Clean Up
###Code
!kubectl get pods,deployments,pdb
!kubectl delete -f model_with_patched_pdb.yaml
###Output
_____no_output_____ |
demos/Optimizing almost-Clifford circuits.ipynb | ###Markdown
Optimizing almost-Clifford circuits. In this notebook we produce some comparisons of how the PyZX Clifford simplification procedure fares against more naive approaches. First we import all the necessary libraries.
###Code
import sys; sys.path.append('..')
import random, math, os
import pyzx as zx
from fractions import Fraction
from functools import reduce
import numpy as np
###Output
_____no_output_____
###Markdown
Now we define some useful functions for generating our data and optimizing circuits
###Code
def generate_clifford_circuit(qubits, depth, p_cnot=0.3, p_t=0):
p_s = 0.5*(1.0-p_cnot-p_t)
p_had = 0.5*(1.0-p_cnot-p_t)
c = zx.Circuit(qubits)
for _ in range(depth):
r = random.random()
if r > 1-p_had:
c.add_gate("HAD",random.randrange(qubits))
elif r > 1-p_had-p_s:
c.add_gate("S",random.randrange(qubits))
elif r > 1-p_had-p_s-p_t:
c.add_gate("T",random.randrange(qubits))
else:
tgt = random.randrange(qubits)
while True:
ctrl = random.randrange(qubits)
if ctrl!=tgt: break
c.add_gate("CNOT",tgt,ctrl)
return c
###Output
_____no_output_____
###Markdown
For the naive approach, we introduce a function that splits the circuit when a T gate is encountered and a function for merging circuits back together.
###Code
def split_circ_on_T(c):
"""Produces a list of circuits whose odd elements are T-free circuits and
whose even elements are circuits consisting only of T gates. Note it expects
the circuit to only have basic gates (i.e. at most 1 control)."""
q = c.qubits
# keep lists of T-free and T-only circuits
cs0 = [zx.Circuit(q)]
cs1 = [zx.Circuit(q)]
after = zx.Circuit(q)
blocked = set()
blocked2 = set()
for g in c.gates:
if g.name == 'T':
if g.target in blocked2:
cs0.append(after)
after = zx.Circuit(q)
cs1.append(zx.Circuit(q))
blocked.clear()
blocked2.clear()
cs1[-1].gates.append(g)
blocked.add(g.target)
else:
cs1[-1].gates.append(g)
blocked.add(g.target)
else:
if g.name in ('CNOT','HAD') and g.target in blocked:
after.gates.append(g)
blocked2.add(g.target)
if g.name == 'CNOT':
blocked.add(g.control)
elif g.target in blocked2 or (g.name == 'CNOT' and g.control in blocked2):
after.gates.append(g)
if g.name == 'CNOT':
blocked.add(g.target)
blocked2.add(g.target)
else:
cs0[-1].gates.append(g)
cs = []
for i,c0 in enumerate(cs0):
cs.append(c0)
cs.append(cs1[i])
cs.append(after)
return cs
def merge_circ(cs):
c0 = zx.Circuit(cs[0].qubits)
for c in cs:
c0.add_circuit(c)
return c0
###Output
_____no_output_____
###Markdown
Testing circuit generation and split/merge code.
###Code
c = generate_clifford_circuit(6,100,p_t=0.1)
zx.draw(c)
cs = split_circ_on_T(c)
print(len(cs))
#zx.d3.draw(cs[4])
c2 = merge_circ(cs)
print(zx.compare_tensors(c,c2))
g = c2.to_graph(False)
zx.draw(g)
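# opt_circuit: the full PyZX pipeline (interior Clifford simplification on the ZX-diagram,
# circuit extraction, then basic gate-level optimization).
# part_opt_circuit (below): the naive approach, which optimizes only the T-free Clifford
# blocks produced by split_circ_on_T and merges them back together.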
def opt_circuit(c):
g = c.to_graph()
zx.simplify.interior_clifford_simp(g,quiet=True)
c2 = zx.extract_circuit(g)
return zx.optimize.basic_optimization(c2.to_basic_gates()).to_basic_gates()
def part_opt_circuit(c):
cs = split_circ_on_T(c)
for i in range(len(cs)):
if i % 2 == 0:
cs[i] = opt_circuit(cs[i])
return zx.optimize.basic_optimization(merge_circ(cs)).to_basic_gates()
def generate_dataset(qubits,depth,cnot_prob,t_prob,reps=50,two_q=False):
"""Generates a set of `reps` circuits consisting of `layers` amount of Clifford circuits,
interspersed with T gates that appear with probability `t_prob` on every qubit.
Each Clifford layer has `depth` amount of gates."""
stats = []
count = [0,0,0,0]
count2 = [0,0,0,0]
for _ in range(reps):
c = generate_clifford_circuit(qubits,depth,p_cnot=cnot_prob,p_t=t_prob)
c0 = c.copy()
c0 = zx.optimize.basic_optimization(c0).to_basic_gates()
c1 = part_opt_circuit(c)
c2 = opt_circuit(c)
count[0] += len(c.gates)
count[1] += len(c0.gates)
count[2] += len(c1.gates)
count[3] += len(c2.gates)
count2[0] += c.twoqubitcount()
count2[1] += c0.twoqubitcount()
count2[2] += c1.twoqubitcount()
count2[3] += c2.twoqubitcount()
for i in range(4):
count[i] /= reps
count2[i] /= reps
return (count, count2)
###Output
_____no_output_____
###Markdown
Now we generate some data comparing the different optimization methods as we vary the probability of T gates in the circuits
###Code
random.seed(42)
xs = [0.015*i for i in range(11)]
yys = [[],[],[],[]]
zzs = [[],[],[],[]]
qubits = 8
reps = 20
for t_prob in xs:
print(t_prob, end=';')
depth = 800
ys,zs = generate_dataset(qubits,depth,cnot_prob=0.3,t_prob=t_prob,reps=reps)
for i,y in enumerate(ys): yys[i].append(y)
for i,z in enumerate(zs): zzs[i].append(z)
###Output
0.0;0.015;0.03;0.045;0.06;0.075;0.09;0.105;0.12;0.135;0.15;
###Markdown
And now we plot the resulting data
###Code
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
plt.style.use('seaborn-whitegrid')
colors = ['#53257f', '#bc1b73', '#f8534a', '#ffa600']
names = ['original','original+','naive', 'pyzx']
styles = ['-','--','-.',':']
fig = plt.figure()
ax1 = fig.add_subplot(111)
for i, ys in enumerate(yys):
ax1.plot(xs, ys, c=colors[i], marker="o",markersize=3, linestyle=styles[i], label=names[i])
ax1.set_ylabel("total gate count")
ax1.set_xlabel("$p_t$")
plt.legend(loc='upper left');
plt.grid(color='#EEEEEE')
plt.show()
fig = plt.figure()
ax1 = fig.add_subplot(111)
for i, zs in enumerate(zzs):
ax1.plot(xs, zs, c=colors[i], marker="o",markersize=3, linestyle=styles[i], label=names[i])
ax1.set_ylabel("2-qubit gate count")
ax1.set_xlabel("$p_t$")
plt.legend(loc='upper left');
plt.grid(color='#EEEEEE')
plt.show()
###Output
_____no_output_____
###Markdown
As can be seen, both extraction methods, the one that only acts on the Clifford blocks and the full extraction, saturate in the number of 2-qubit gates, but the full extraction saturates at a much lower total gate count, showing that it indeed performs better than naive Clifford optimization.
###Code
fig.savefig(r'/home/aleks/git/papers/cliff-simp/graphics/gatecount-2q.pdf',bbox_inches='tight')
###Output
_____no_output_____ |
pittsburgh-bridges-data-set-analysis/models-analyses/cross_validation_analyses/Data Space Report (Official) - Two-Dimensional Analyses-v1.0.1.ipynb | ###Markdown
Data Space Report Pittsburgh Bridges Data Set Andy Warhol Bridge - Pittsburgh. Report created by Student Francesco Maria Chiarlo s253666, for A.A 2019/2020.**Abstract**: The aim of this report is to evaluate the effectiveness of distinct statistical learning approaches, in particular focusing on their characteristics as well as on their advantages and drawbacks when applied to a relatively small dataset such as the one employed within this report, that is, the Pittsburgh Bridges dataset.**Key words**: Statistical Learning, Machine Learning, Bridge Design. TOC:* [Imports Section](imports-section)* [Dataset's Attributes Description](attributes-description)* [Data Preparation and Investigation](data-preparation)* [Learning Models](learning-models)* [Improvements and Conclusions](improvements-and-conclusions)* [References](references)
###Code
# =========================================================================== #
# STANDARD IMPORTS
# =========================================================================== #
print(__doc__)
from pprint import pprint
import warnings
warnings.filterwarnings('ignore')
import copy
import os
import sys
import time
import pandas as pd
import numpy as np
%matplotlib inline
# Matplotlib pyplot provides plotting API
import matplotlib as mpl
from matplotlib import pyplot as plt
import chart_studio.plotly.plotly as py
import seaborn as sns; sns.set()
# =========================================================================== #
# UTILS IMPORTS (Done by myself)
# =========================================================================== #
from utils.display_utils import *
from utils.preprocessing_utils import *
from utils.training_utils import *
from utils.training_utils_v2 import fit_by_n_components, fit_all_by_n_components
from itertools import islice
# =========================================================================== #
# sklearn IMPORT
# =========================================================================== #
from sklearn.decomposition import PCA, KernelPCA
# Import scikit-learn classes: models (Estimators).
from sklearn.naive_bayes import GaussianNB # Non-parametric Generative Model
from sklearn.naive_bayes import MultinomialNB # Non-parametric Generative Model
from sklearn.linear_model import LinearRegression # Parametric Linear Discriminative Model
from sklearn.linear_model import LogisticRegression # Parametric Linear Discriminative Model
from sklearn.linear_model import Ridge, Lasso
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC # Parametric Linear Discriminative "Support Vector Classifier"
from sklearn.tree import DecisionTreeClassifier # Non-parametric Model
from sklearn.ensemble import BaggingClassifier # Non-parametric Model (Meta-Estimator, that is, an Ensemble Method)
from sklearn.ensemble import RandomForestClassifier # Non-parametric Model (Meta-Estimator, that is, an Ensemble Method)
###Output
_____no_output_____
###Markdown
Dataset's Attributes Description The analyses that I aim at accomplishing while using as means the methods or approaches provided by both Statistical Learning and Machine Learning fields, concern the dataset Pittsburgh Bridges, and what follows is a overview and brief description of the main characteristics, as well as, basic information about this precise dataset.The Pittsburgh Bridges dataset is a dataset available from the web site called mainly *"UCI Machine Learing Repository"*, which is one of the well known web site that let a large amount of different datasets, from different domains or fields, to be used for machine-learning research and which have been cited in peer-reviewed academic journals.In particular, the dataset I'm going to treat and analyze, which is Pittsburgh Bridges dataset, has been made freely available from the Western Pennsylvania Regional Data Center (WPRDC), which is a project led by the University Center of Social and Urban Research (UCSUR) at the University of Pittsburgh ("University") in collaboration with City of Pittsburgh and The County of Allegheny in Pennsylvania. The WPRDC and the WPRDC Project is supported by a grant from the Richard King Mellon Foundation.In order to be more precise, from the official and dedicated web page, within UCI Machine Learning cite, Pittsburgh Bridges dataset is a dataset that has been created after the works of some co-authors which are:- Yoram Reich & Steven J. Fenves from Department of Civil Engineering and Engineering Design Research Center Carnegie Mellon University Pittsburgh, PA 15213The Pittsburgh Bridges dataset is made of up to 108 distinct observations and each of that data sample is made of 12 attributes or features where some of them are considered to be continuous properties and other to be categorical or nominal properties. Those variables are the following:- **RIVER**: which is a nominal type variable that can assume the subsequent possible discrete values which are: A, M, O. Where A stands for Allegheny river, while M stands for Monongahela river and lastly O stands for Ohio river.- **LOCATION**: which represents a nominal type variable too, and assume a positive integer value from 1 up to 52 used as categorical attribute.- **ERECTED**: which might be either a numerical or categorical variable, depending on the fact that we want to aggregate a bunch of value under a categorical quantity. What this means is that, basically such attribute is made of date starting from 1818 up to 1986, but we may imagine to aggregate somehow these data within a given category among those suggested, that are CRAFTS, EMERGENING, MATURE, MODERN.- **PURPOSE**: which is a categorical attribute and represents the reason why a particular bridge has been built, which means that this attribute represents what kind of vehicle can cross the bridge or if the bridge has been made just for people. For this reasons the allowd values for this attributes are the following: WALK, AQUEDUCT, RR, HIGHWAY. 
Three out of four are self explained values, while RR value that might be tricky at first glance, it just stands for railroad.- **LENGTH**: which represents the bridge's length, is a numerical attribute if we just look at the real number values that go from 804 up to 4558, but we can again decide to handle or arrange such values so that they can be grouped into range of values mapped into SHORT, MEDIUM, LONG so that we can refer to a bridge's length by means of these new categorical values.- **LANES**: which is a categorical variable which is represented by numerical values, that are 1, 2, 4, 6 which indicate the number of distinct lanes that a bridge in Pittsburgh city may have. The larger the value the wider the bridge.- **CLEAR-G**: specifies whether a vertical navigation clearance requirement was enforced in the design or not.- **T-OR-D**: which is a nominal attribute, in other words, a categorical attribute that can assume THROUGH, DECK values. In order to be more precise, this samples attribute deals with structural elements of a bridge. In fact, a deck is the surface of a bridge and this structural element, of bridge's superstructure, may be constructed of concrete, steel, open grating, or wood. On the other hand, a through arch bridge, also known as a half-through arch bridge or a through-type arch bridge, is a bridge that is made from materials such as steel or reinforced concrete, in which the base of an arch structure is below the deck but the top rises above it.- **MATERIAL**: which is a categorical or nominal variable and is used to describe the bridge telling which is the main or core material used to build it. This attribute can assume one of the possible, following values which are: WOOD, IRON, STEEL. Furthermore, we expect to see somehow a bit of correlation between the values assumed by the pairs represented by T-OR-D and MATERIAL columns, when looking just to them.- **SPAN**: which is a categorical or nominal value and has been recorded by means of three possible values for each sample, that are SHORT, MEDIUM, LONG. This attribute, within the field of Structural Engineering, is the distance between two intermediate supports for a structure, e.g. a beam or a bridge. A span can be closed by a solid beam or by a rope. The first kind is used for bridges, the second one for power lines, overhead telecommunication lines, some type of antennas or for aerial tramways. - **REL-L**: which is a categorical or nominal variable and stands for relative length of the main span of the bridge to the total crossing length, it can assume three possible values that are S, S-F, F.- Lastly, **TYPE** which indicates as a categorical or nominal attributes what type of bridge each record represents, among the possible 6 distinct classes or types of bridges that are: WOOD, SUSPEN, SIMPLE-T, ARCH, CANTILEV, CONT-T. Data Preparation and Investigation The aim of this chapter is to get in the data, that are available within Pittsburgh Bridge Dataset, in order to investigate a bit more in to detail and generally speaking deeper the main or high level statistics quantities, such as mean, median, standard deviation of each attribute, as well as displaying somehow data distribution for each attribute by means of histogram plots. 
This phase allows or enables us to decide which should be the best feature to be selected as the target variable, in other word the attribute that will represent the dependent variable with respect to the remaining attributes that instead will play the role of predictors and independent variables, as well.In order to investigate and explore our data we make usage of *Pandas library*. We recall mainly that, in computer programming, Pandas is a software library written for the Python programming language* for *data manipulation and analysis*. In particular, it offers data structures and operations for manipulating numerical tables and time series. It is free software and a interesting and funny things about such tool is that the name is derived from the term "panel data", an econometrics term for data sets that include observations over multiple time periods for the same individuals.We also note that as the analysis proceeds we will introduce other computer programming as well as programming libraries that allow or enable us to fulfill our goals. Initially, once I have downloaded from the provided web page the dataset with the data samples about Pittsburgh Bridge we load the data by means of functions available using python library's pandas. We notice that the overall set of data points is large up to 108 records or rows, which are sorted by Erected attributes, so this means that are sorted in decreasing order from the oldest bridge which has been built in 1818 up to the most modern bridge that has been erected in 1986. Then we display the first 5 rows to get an overview and have a first idea about what is inside the overall dataset, and the result we obtain by means of head() function applied onto the fetched dataset is equals to what follows:
###Code
# =========================================================================== #
# READ INPUT DATASET
# =========================================================================== #
dataset_path = 'C:\\Users\\Francesco\Documents\\datasets\\pittsburgh_dataset'
dataset_name = 'bridges.data.csv'
# column_names = ['IDENTIF', 'RIVER', 'LOCATION', 'ERECTED', 'PURPOSE', 'LENGTH', 'LANES', 'CLEAR-G', 'T-OR-D', 'MATERIAL', 'SPAN', 'REL-L', 'TYPE']
column_names = ['RIVER', 'LOCATION', 'ERECTED', 'PURPOSE', 'LENGTH', 'LANES', 'CLEAR-G', 'T-OR-D', 'MATERIAL', 'SPAN', 'REL-L', 'TYPE']
dataset = pd.read_csv(os.path.join(dataset_path, dataset_name), names=column_names, index_col=0)
# SHOW SOME STANDARD DATASET INFOS
# --------------------------------------------------------------------------- #
print('Dataset shape: {}'.format(dataset.shape))
print(dataset.info())
# SHOWING FIRSTS N-ROWS AS THEY ARE STORED WITHIN DATASET
# --------------------------------------------------------------------------- #
dataset.head(5)
###Output
_____no_output_____
###Markdown
What we can notice from just the table above is that there are some attributes that are characterized by a special character that is '?' which stands for a missing value, so by chance there was not possibility to get the value for this attribute, such as for LENGTH and SPAN attributes. Analyzing in more details the dataset we discover that there are up to 6 different attributes, in the majority attributes with categorical or nominal nature such as CLEAR-G, T-OR-D, MATERIAL, SPAN, REL-L, and TYPE that contain at list one row characterized by the fact that one of its attributes is set to assuming '?' value that stands, as we already know for a missing value.Here, we can follow different strategies that depends onto the level of complexity as well as accuracy we want to obtain or achieve for models we are going to fit to the data after having correctly pre-processed them, speaking about what we could do with missing values. In fact one can follow the simplest way and can decide to simply discard those rows that contain at least one attribute with a missing value represented by the '?' symbol. Otherwise one may alos decide to follow a different strategy that aims at keeping also those rows that have some missing values by means of some kind of technique that allows to establish a potential substituting value for the missing one.So, in this setting, that is our analyses, we start by just leaving out those rows that at least contain one attribute that has a missing value, this choice leads us to reduce the size of our dataset from 108 records to 70 remaining samples, with a drop of 38 data examples, which may affect the final results, since we left out more or less the 46\% of the data because of missing values.
###Code
# INVESTIGATING DATASET IN ORDER TO DETECT NULL VALUES
# --------------------------------------------------------------------------- #
print('Before preprocessing dataset and handling null values')
result = dataset.isnull().values.any()
print('There are any null values ? Response: {}'.format(result))
result = dataset.isnull().sum()
print('Number of null values for each predictor:\n{}'.format(result))
# DISCOVERING VALUES WITHIN EACH PREDICTOR DOMAIN
# --------------------------------------------------------------------------- #
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION', 'LANES']
# columns_2_avoid = None
list_columns_2_fix = show_categorical_predictor_values(dataset, columns_2_avoid)
# FIXING, UPDATING NULL VALUES CODED AS '?' SYMBOL
# WITHIN EACH CATEGORICAL VARIABLE, IF DETECTED ANY
# --------------------------------------------------------------------------- #
print('"Before" removing \'?\' rows, Dataset dim:', dataset.shape)
for _, predictor in enumerate(list_columns_2_fix):
dataset = dataset[dataset[predictor] != '?']
print('"After" removing \'?\' rows, Dataset dim: ', dataset.shape)
print('-' * 50)
_ = show_categorical_predictor_values(dataset, columns_2_avoid)
# INTERMEDIATE RESULT FOUND
# --------------------------------------------------------------------------- #
preprocess_categorical_variables(dataset, columns_2_avoid)
print(dataset.info())
dataset.head(5)
###Output
_____no_output_____
###Markdown
The next step is represented by the effort of mapping categorical variables into numerical variables, so that them are comparable with the already existing numerical or continuous variables, and also by mapping the categorical variables into numerical variables we allow or enable us to perform some kind of normalization or just transformation onto the entire dataset in order to let some machine learning algorithm to work better or to take advantage of normalized data within our pre-processed dataset. Furthermore, by transforming first the categorical attributes into a continuous version we are also able to calculate the \textit{heatmap}, which is a very useful way of representing a correlation matrix calculated on the whole dataset. Moreover we have displayed data distribution for each attribute by means of histogram representation to take some useful information about the number of occurrences for each possible value, in particular for those attributes that have a categorical nature.
###Code
# MAP NUMERICAL VALUES TO INTEGER VALUES
# --------------------------------------------------------------------------- #
print('Before', dataset.shape)
columns_2_map = ['ERECTED', 'LANES']
for _, predictor in enumerate(columns_2_map):
dataset = dataset[dataset[predictor] != '?']
dataset[predictor] = np.array(list(map(lambda x: int(x), dataset[predictor].values)))
print('After', dataset.shape)
print(dataset.info())
# print(dataset.head(5))
# MAP NUMERICAL VALUES TO FLOAT VALUES
# --------------------------------------------------------------------------- #
# print('Before', dataset.shape)
columns_2_map = ['LOCATION', 'LANES', 'LENGTH']
for _, predictor in enumerate(columns_2_map):
dataset = dataset[dataset[predictor] != '?']
dataset[predictor] = np.array(list(map(lambda x: float(x), dataset[predictor].values)))
# print('After', dataset.shape)
# print(dataset.info())
# print(dataset.head(5))
# columns_2_avoid = None
# list_columns_2_fix = show_categorical_predictor_values(dataset, None)
result = dataset.isnull().values.any()
# print('After handling null values\nThere are any null values ? Response: {}'.format(result))
result = dataset.isnull().sum()
# print('Number of null values for each predictor:\n{}'.format(result))
dataset.head(5)
dataset.describe(include='all')
# sns.pairplot(dataset, hue='T-OR-D', size=1.5)
columns_2_avoid = ['ERECTED', 'LENGTH', 'LOCATION']
target_col = 'T-OR-D'
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='RIVER', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='RIVER', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='T-OR-D', columns_2_avoid=columns_2_avoid)
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='CLEAR-G', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='CLEAR-G', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='SPAN', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='SPAN', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='MATERIAL', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='MATERIAL', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='REL-L', columns_2_avoid=columns_2_avoid)
# show_frequency_distribution_predictors(dataset, columns_2_avoid)
# show_frequency_distribution_predictor(dataset, predictor_name='TYPE', columns_2_avoid=columns_2_avoid)
# build_boxplot(dataset, predictor_name='TYPE', columns_2_avoid=columns_2_avoid, target_col='T-OR-D')
corr_result = dataset.corr()
# corr_result.head(corr_result.shape[0])
display_heatmap(corr_result)
# show_histograms_from_heatmap_corr_matrix(corr_result, row_names=dataset.columns)
# Make distinction between Target Variable and Predictors
# --------------------------------------------------------------------------- #
columns = dataset.columns # List of all attribute names
target_col = 'T-OR-D' # Target variable name
# Get Target values and map to 0s and 1s
y = np.array(list(map(lambda x: 0 if x == 1 else 1, dataset[target_col].values)))
print('Summary about Target Variable {target_col}')
print('-' * 50)
print(dataset['T-OR-D'].value_counts())
# Get Predictors
X = dataset.loc[:, dataset.columns != target_col].values
# Standardizing the features
# --------------------------------------------------------------------------- #
scaler_methods = ['minmax', 'standard', 'norm']
scaler_method = 'standard'
rescaledX = preprocessing_data_rescaling(scaler_method, X)
###Output
shape features matrix X, after normalizing: (70, 11)
###Markdown
Pricipal Component AnalysisAfter having investigate the data points inside the dataset, I move one to another section of my report where I decide to explore examples that made up the entire dataset using a particular technique in the field of statistical analysis that corresponds, precisely, to so called Principal Component Analysis. In fact, the major objective of this section is understand whether it is possible to transform, by means of some kind of linear transformation given by a mathematical calculation, the original data examples into reprojected representation that allows me to retrieve most useful information to be later exploited at training time. So, lets dive a bit whitin what is and which are main concepts, pros and cons about Principal Component Analysis.Firstly, we know that **Principal Component Analysis**, more shortly PCA, is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called *principal components*. This transformation is defined in such a way that:- the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible),- and each succeeding component in turn has the highest variance possible under the constraint that it is orthogonal to the preceding components.The resulting vectors, each being a linear combination of the variables and containing n observations, are an uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.PCA is mostly used as a tool in *exploratory data analysis* and for making predictive models, for that reasons I used such a technique here, before going through the different learning technique for producing my models. Several Different ImplementationFrom the theory and the filed of research in statistics, we know that out there, there are several different implementation and way of computing principal component analysis, and each adopted technique has different performance as well as numerical stability. The three major derivations are:- PCA by means of an iterative based procedure of extraing pricipal components one after the other selecting each time the one that account for the most of variance along its own axis, within the remainig subspace to be derived.- The second possible way of performing PCA is done via calculation of *Covariance Matrix* applied to attributes, that are our independent predictive variables, used to represent data points.- Lastly, it is used the technique known as *Singular Valued Decomposition* applied to the overall data points within our dataset.Reading scikit-learn documentation, I discovered that PCA's derivation uses the *LAPACK implementation* of the *full SVD* or a *randomized truncated SVD* by the method of *Halko et al. 2009*, depending on the shape of the input data and the number of components to extract. Therefore I will descrive mainly that way of deriving the method with respect to the others that, instead, will be described more briefly and roughly. 
PCA's Iterative based MethodGoing in order, as depicted briefly above, I start describing PCA obtained by means of iterative based procedure to extract one at a time a new principal componet explointing the data points at hand.We begin, recalling that, PCA is defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by some scalar projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on.We suppose to deal with a data matrix X, with column-wise zero empirical mean, where each of the n rows represents a different repetition of the experiment, and each of the p columns gives a particular kind of feature.From a math poitn of view, the transformation is defined by a set of p-dimensional vectors of weights or coefficients $\mathbf {w} _{(k)}=(w_{1},\dots ,w_{p})_{(k)}$ that map each row vector $\mathbf{x}_{(i)}$ of X to a new vector of principal component scores ${\displaystyle \mathbf {t} _{(i)}=(t_{1},\dots ,t_{l})_{(i)}}$, given by: ${\displaystyle {t_{k}}_{(i)}=\mathbf {x} _{(i)}\cdot \mathbf {w} _{(k)}\qquad \mathrm {for} \qquad i=1,\dots ,n\qquad k=1,\dots ,l}$.In this way all the individual variables ${\displaystyle t_{1},\dots ,t_{l}}$ of t considered over the data set successively inherit the maximum possible variance from X, with each coefficient vector w constrained to be a unit vector.More precisely, the first component In order to maximize variance has to satisfy the following expression:${\displaystyle \mathbf {w} _{(1)}={\underset {\Vert \mathbf {w} \Vert =1}{\operatorname {\arg \,max} }}\,\left\{\sum _{i}\left(t_{1}\right)_{(i)}^{2}\right\}={\underset {\Vert \mathbf {w} \Vert =1}{\operatorname {\arg \,max} }}\,\left\{\sum _{i}\left(\mathbf {x} _{(i)}\cdot \mathbf {w} \right)^{2}\right\}}$So, with $w_{1}$ found, the first principal component of a data vector $x_{1}$ can then be given as a score $t_{1(i)} = x_{1} ⋅ w_{1}$ in the transformed co-ordinates, or as the corresponding vector in the original variables, $(x_{1} ⋅ w_{1})w_{1}$.The others remainig components are computed as folloes. The kth component can be found by subtracting the first k − 1 principal components from X, as in the following expression:- ${\displaystyle \mathbf {\hat {X}} _{k}=\mathbf {X} -\sum _{s=1}^{k-1}\mathbf {X} \mathbf {w} _{(s)}\mathbf {w} _{(s)}^{\rm {T}}}$- and then finding the weight vector which extracts the maximum variance from this new data matrix ${\mathbf {w}}_{{(k)}}={\underset {\Vert {\mathbf {w}}\Vert =1}{\operatorname {arg\,max}}}\left\{\Vert {\mathbf {{\hat {X}}}}_{{k}}{\mathbf {w}}\Vert ^{2}\right\}={\operatorname {\arg \,max}}\,\left\{{\tfrac {{\mathbf {w}}^{T}{\mathbf {{\hat {X}}}}_{{k}}^{T}{\mathbf {{\hat {X}}}}_{{k}}{\mathbf {w}}}{{\mathbf {w}}^{T}{\mathbf {w}}}}\right\}$It turns out that:- from the formulas depicted above me get the remaining eigenvectors of $X^{T}X$, with the maximum values for the quantity in brackets given by their corresponding eigenvalues. 
Thus the weight vectors are eigenvectors of $X^{T}X$.- The kth principal component of a data vector $x_(i)$ can therefore be given as a score $t_{k(i)} = x_{(i)} ⋅ w_(k)$ in the transformed co-ordinates, or as the corresponding vector in the space of the original variables, $(x_{(i)} ⋅ w_{(k)}) w_{(k)}$, where $w_{(k)}$ is the kth eigenvector of $X^{T}X$.- The full principal components decomposition of X can therefore be given as: ${\displaystyle \mathbf {T} =\mathbf {X} \mathbf {W}}$, where W is a p-by-p matrix of weights whose columns are the eigenvectors of $X^{T}X$. Covariance Matrix for PCA analysisPCA made from covarian matrix computation requires the calculation of sample covariance matrix of the dataset as follows: $\mathbf{Q} \propto \mathbf{X}^T \mathbf{X} = \mathbf{W} \mathbf{\Lambda} \mathbf{W}^T$.The empirical covariance matrix between the principal components becomes ${\displaystyle \mathbf {W} ^{T}\mathbf {Q} \mathbf {W} \propto \mathbf {W} ^{T}\mathbf {W} \,\mathbf {\Lambda } \,\mathbf {W} ^{T}\mathbf {W} =\mathbf {\Lambda } }$. Singular Value Decomposition for PCA analysisFinally, the principal components transformation can also be associated with another matrix factorization, the singular value decomposition (SVD) of X, ${\displaystyle \mathbf {X} =\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{T}}$, where more precisely:- Σ is an n-by-p rectangular diagonal matrix of positive numbers $σ_{(k)}$, called the singular values of X;- instead U is an n-by-n matrix, the columns of which are orthogonal unit vectors of length n called the left singular vectors of X;- Then, W is a p-by-p whose columns are orthogonal unit vectors of length p and called the right singular vectors of X.factorizingn the matrix ${X^{T}X}$, it can be written as:${\begin{aligned}\mathbf {X} ^{T}\mathbf {X} &=\mathbf {W} \mathbf {\Sigma } ^{T}\mathbf {U} ^{T}\mathbf {U} \mathbf {\Sigma } \mathbf {W} ^{T}\\&=\mathbf {W} \mathbf {\Sigma } ^{T}\mathbf {\Sigma } \mathbf {W} ^{T}\\&=\mathbf {W} \mathbf {\hat {\Sigma }} ^{2}\mathbf {W} ^{T}\end{aligned}}$Where we recall that ${\displaystyle \mathbf {\hat {\Sigma }} }$ is the square diagonal matrix with the singular values of X and the excess zeros chopped off that satisfies ${\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{T}\mathbf {\Sigma } } {\displaystyle \mathbf {{\hat {\Sigma }}^{2}} =\mathbf {\Sigma } ^{T}\mathbf {\Sigma } }$. Comparison with the eigenvector factorization of $X^{T}X$ establishes that the right singular vectors W of X are equivalent to the eigenvectors of $X^{T}X$ , while the singular values $σ_{(k)}$ of X are equal to the square-root of the eigenvalues $λ_{(k)}$ of $X^{T}X$ . At this point we understand that using the singular value decomposition the score matrix T can be written as:$\begin{align} \mathbf{T} & = \mathbf{X} \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma}\mathbf{W}^T \mathbf{W} \\ & = \mathbf{U}\mathbf{\Sigma} \end{align}$so each column of T is given by one of the left singular vectors of X multiplied by the corresponding singular value. This form is also the polar decomposition of T.Efficient algorithms exist to calculate the SVD, as in scikit-learn package, of X without having to form the matrix $X^{T}X$, so computing the SVD is now the standard way to calculate a principal components analysis from a data matrix
###Code
n_components = rescaledX.shape[1]
pca = PCA(n_components=n_components)
# pca = PCA(n_components=2)
# X_pca = pca.fit_transform(X)
pca = pca.fit(rescaledX)
X_pca = pca.transform(rescaledX)
print(f"Cumulative varation explained(percentage) up to given number of pcs:")
tmp_data = []
principal_components = [pc for pc in '2,5,6,7,8,9,10'.split(',')]
for _, pc in enumerate(principal_components):
n_components = int(pc)
cum_var_exp_up_to_n_pcs = np.cumsum(pca.explained_variance_ratio_)[n_components-1]
# print(f"Cumulative varation explained up to {n_components} pcs = {cum_var_exp_up_to_n_pcs}")
# print(f"# pcs {n_components}: {cum_var_exp_up_to_n_pcs*100:.2f}%")
tmp_data.append([n_components, cum_var_exp_up_to_n_pcs * 100])
tmp_df = pd.DataFrame(data=tmp_data, columns=['# PCS', 'Cumulative Varation Explained (percentage)'])
tmp_df.head(len(tmp_data))
n_components = rescaledX.shape[1]
pca = PCA(n_components=n_components)
# pca = PCA(n_components=2)
#X_pca = pca.fit_transform(X)
pca = pca.fit(rescaledX)
X_pca = pca.transform(rescaledX)
fig = show_cum_variance_vs_components(pca, n_components)
# py.sign_in('franec94', 'QbLNKpC0EZB0kol0aL2Z')
# py.iplot(fig, filename='selecting-principal-components {}'.format(scaler_method))
###Output
_____no_output_____
###Markdown
Major Pros & Cons of PCA Learning Models
###Code
# Parameters to be tested for Cross-Validation Approach
estimators_list = [GaussianNB(), LogisticRegression(), KNeighborsClassifier(), SVC(), DecisionTreeClassifier(), RandomForestClassifier()]
estimators_names = ['GaussianNB', 'LogisticRegression', 'KNeighborsClassifier', 'SVC', 'DecisionTreeClassifier', 'RandomForestClassifier']
plots_names = list(map(lambda xi: f"{xi}_learning_curve.png", estimators_names))
pca_kernels_list = ['linear', 'poly', 'rbf', 'cosine',]
cv_list = [10, 9, 8, 7, 6, 5, 4, 3, 2]
parameters_sgd_classifier = {
'clf__loss': ('hinge', 'log', 'modified_huber', 'squared_hinge', 'perceptron'),
'clf__penalty': ('l2', 'l1', 'elasticnet'),
'clf__alpha': (1e-1, 1e-2, 1e-3, 1e-4),
'clf__max_iter': (50, 100, 150, 200, 500, 1000, 1500, 2000, 2500),
'clf__learning_rate': ('optimal',),
'clf__tol': (None, 1e-2, 1e-4, 1e-5, 1e-6)
}
kernel_type = 'svm-rbf-kernel'
parameters_svm = {
'clf__gamma': (0.003, 0.03, 0.05, 0.5, 0.7, 1.0, 1.5),
'clf__max_iter':(1e+2, 1e+3, 2 * 1e+3, 5 * 1e+3, 1e+4, 1.5 * 1e+3),
'clf__C': (1e-4, 1e-3, 1e-2, 0.1, 1.0, 10, 1e+2, 1e+3),
}
parmas_decision_tree = {
'clf__splitter': ('random', 'best'),
'clf__criterion':('gini', 'entropy'),
'clf__max_features': (None, 'auto', 'sqrt', 'log2')
}
parmas_random_forest = {
'clf__n_estimators': (3, 5, 7, 10, 30, 50, 70, 100, 150, 200),
'clf__criterion':('gini', 'entropy'),
'clf__bootstrap': (True, False)
}
model = PCA(n_components=2)
model.fit(X)
X_2D = model.transform(X)
df = pd.DataFrame()
df['PCA1'] = X_2D[:, 0]
df['PCA2'] = X_2D[:, 1]
df[target_col] = dataset[target_col].values
sns.lmplot("PCA1", "PCA2", hue=target_col, data=df, fit_reg=False)
# show_pca_1_vs_pca_2_pcaKernel(X, pca_kernels_list, target_col, dataset)
# show_scatter_plots_pcaKernel(X, pca_kernels_list, target_col, dataset, n_components=12)
###Output
_____no_output_____
###Markdown
PCA = 2
###Code
plot_dest = os.path.join("figures", "n_comp_2_analysis")
N_CV, N_KERNEL = 9, 4
assert len(cv_list) >= N_CV, f"Error: N_CV={N_CV} > len(cv_list)={len(cv_list)}"
assert len(pca_kernels_list) >= N_KERNEL, f"Error: N_KERNEL={N_KERNEL} > len(pca_kernels_list)={len(pca_kernels_list)}"
X = rescaledX
n = len(estimators_list) # len(estimators_list)
dfs_list, df_strfd = fit_all_by_n_components(
estimators_list=estimators_list[:n], \
estimators_names=estimators_names[:n], \
X=X, \
y=y, \
n_components=2, \
show_plots=False, \
cv_list=cv_list[:N_CV], \
# pca_kernels_list=['linear'],
pca_kernels_list=pca_kernels_list[:N_KERNEL],
verbose=0 # 0=silent, 1=show informations
)
df_strfd.head(df_strfd.shape[0])
# GaussianNB
# -----------------------------------
dfs_list[0].head(dfs_list[0].shape[0])
pos = 0
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# LogisticRegression
# -----------------------------------
dfs_list[1].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# SVC
# -----------------------------------
dfs_list[2].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# DecisionTreeClassifier
# -----------------------------------
dfs_list[3].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# RandomForestClassifier
# -----------------------------------
dfs_list[4].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
###Output
_____no_output_____
###Markdown
PCA = 9
###Code
plot_dest = os.path.join("figures", "n_comp_9_analysis")
n = len(estimators_list) # len(estimators_list)
pos = 0
dfs_list, df_strfd = fit_all_by_n_components(
estimators_list=estimators_list[:n], \
estimators_names=estimators_names[:n], \
X=X, \
y=y, \
n_components=9, \
show_plots=False, \
cv_list=cv_list[:N_CV], \
# pca_kernels_list=['linear'],
pca_kernels_list=pca_kernels_list[:N_KERNEL],
verbose=0 # 0=silent, 1=show informations
)
df_strfd.head(df_strfd.shape[0])
# GaussianNB
# -----------------------------------
dfs_list[0].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# LogisticRegression
# -----------------------------------
dfs_list[1].head(dfs_list[0].shape[0])
ppos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# SVC
# -----------------------------------
dfs_list[2].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# DecisionTreeClassifier
# -----------------------------------
dfs_list[3].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# RandomForestClassifier
# -----------------------------------
dfs_list[4].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
###Output
_____no_output_____
###Markdown
PCA = 12
###Code
plot_dest = os.path.join("figures", "n_comp_12_analysis")
n = len(estimators_list) # len(estimators_list)
pos = 0
dfs_list, df_strfd = fit_all_by_n_components(
estimators_list=estimators_list[:n], \
estimators_names=estimators_names[:n], \
X=X, \
y=y, \
n_components=12, \
show_plots=False, \
cv_list=cv_list[:N_CV], \
# pca_kernels_list=['linear'],
pca_kernels_list=pca_kernels_list[:N_KERNEL],
verbose=0 # 0=silent, 1=show informations
)
df_strfd.head(df_strfd.shape[0])
# GaussianNB
# -----------------------------------
dfs_list[0].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# LogisticRegression
# -----------------------------------
dfs_list[1].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# SVC
# -----------------------------------
dfs_list[2].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# DecisionTreeClassifier
# -----------------------------------
dfs_list[3].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
# RandomForestClassifier
# -----------------------------------
dfs_list[4].head(dfs_list[0].shape[0])
pos = pos + 1
plot_name = plots_names[pos]
show_learning_curve(dfs_list[pos], n=len(cv_list[:N_CV]), plot_dest=plot_dest, grid_size=[2, 2], plot_name=plot_name)
from sklearn.metrics import f1_score
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 2, 1, 0, 0, 1]
f1_score(y_true, y_pred, average='macro')
###Output
_____no_output_____ |
GAN/simple-GAN.ipynb | ###Markdown
https://towardsdatascience.com/build-a-super-simple-gan-in-pytorch-54ba349920e4
###Code
import math
import numpy as np
import torch
from torch import nn
def create_binary_list_from_int(number):
return [int(x) for x in list(bin(number))[2:]]
def generate_even_data(max_int, batch_size):
max_length = int(math.log(max_int, 2))
sampled_integers = np.random.randint(0, int(max_int / 2), batch_size)
labels = [1] * batch_size
data = [create_binary_list_from_int(int(x * 2)) for x in sampled_integers]
data = [([0] * (max_length - len(x))) + x for x in data]
return labels, data
class Generator(nn.Module):
def __init__(self, input_length):
super(Generator, self).__init__()
self.dense_layer = nn.Linear(int(input_length), int(input_length))
self.activation = nn.Sigmoid
def forward(self, x):
return self.activation(self.dense_layer(x))
class Generator(nn.Module):
def __init__(self, input_length: int):
super(Generator, self).__init__()
self.dense_layer = nn.Linear(int(input_length), int(input_length))
self.activation = nn.Sigmoid()
def forward(self, x):
return self.activation(self.dense_layer(x))
def getInteger(x):
x = torch.round(x)
numbers = list()
for i in range(x.shape[0]):
r = 0
for j in range(x.shape[1]):
r += pow(2,x.shape[1]-j-1)*x[i][j].item()
numbers.append(int(r))
return numbers
def train(max_int: int = 128, batch_size: int = 16, training_steps: int = 500):
input_length = int(math.log(max_int, 2))
generator = Generator(input_length)
discriminator = Discriminator(input_length)
generator_optimizer = torch.optim.Adam(generator.parameters(), lr=0.001)
discriminator_optimizer = torch.optim.Adam(discriminator.parameters(), lr=0.001)
loss = nn.BCELoss()
for i in range(training_steps):
generator_optimizer.zero_grad()
noise = torch.randint(0, 2, size=(batch_size, input_length)).float()
generated_data = generator(noise)
if i%100 == 0:
print(getInteger(generated_data))
true_labels, true_data = generate_even_data(max_int, batch_size=batch_size)
true_labels = torch.tensor(true_labels).float()
true_data = torch.tensor(true_data).float()
generator_discriminator_out = discriminator(generated_data)
generator_loss = loss(generator_discriminator_out, true_labels.reshape(1,-1).t())
generator_loss.backward()
generator_optimizer.step()
discriminator_optimizer.zero_grad()
true_discriminator_out = discriminator(true_data)
true_discriminator_loss = loss(true_discriminator_out, true_labels.reshape(1,-1).t())
generator_discriminator_out = discriminator(generated_data.detach())
generator_discriminator_loss = loss(generator_discriminator_out, torch.zeros(batch_size).reshape(1,-1).t())
discriminator_loss = (true_discriminator_loss + generator_discriminator_loss) / 2
discriminator_loss.backward()
discriminator_optimizer.step()
train(training_steps=1000)
###Output
[19, 58, 97, 100, 116, 56, 50, 11, 0, 2, 38, 0, 50, 16, 52, 27]
[108, 95, 117, 68, 22, 64, 68, 68, 64, 68, 100, 117, 68, 100, 68, 64]
[68, 76, 76, 76, 108, 68, 76, 108, 108, 76, 76, 108, 76, 76, 72, 76]
[108, 108, 108, 108, 108, 76, 76, 76, 76, 76, 108, 108, 76, 108, 108, 76]
[100, 100, 108, 4, 100, 100, 108, 100, 100, 100, 108, 100, 100, 8, 108, 100]
[96, 96, 96, 100, 96, 96, 0, 36, 32, 32, 0, 100, 100, 32, 100, 96]
[0, 32, 24, 0, 32, 26, 0, 32, 32, 10, 32, 8, 0, 0, 24, 0]
[26, 26, 8, 26, 8, 26, 24, 10, 8, 26, 26, 26, 26, 10, 26, 26]
[26, 58, 26, 26, 18, 26, 18, 26, 26, 26, 26, 26, 26, 26, 18, 26]
[50, 118, 54, 54, 50, 118, 54, 18, 50, 118, 18, 50, 118, 50, 114, 50]
|
notebooks/char_rnn_sample_tutorial.ipynb | ###Markdown
Now, we are ready to make our RNN model with seq2seq This network is for sampling, so we don't need batches for sequenes nor optimizers
###Code
# Important RNN parameters
rnn_size = 128
num_layers = 2
batch_size = 1 # <= In the training phase, these were both 50
seq_length = 1
# Construct RNN model
unitcell = rnn_cell.BasicLSTMCell(rnn_size)
cell = rnn_cell.MultiRNNCell([unitcell] * num_layers)
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weigths
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(1, seq_length, tf.nn.embedding_lookup(embedding, input_data))
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
outputs, final_state = seq2seq.rnn_decoder(inputs, istate, cell
, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
print ("Network Ready")
# Restore RNN
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(load_dir)
print (ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
###Output
/tmp/tf_logs/char_rnn_tutorial/model.ckpt-8000
###Markdown
Finally, show what RNN has generated!
###Code
# Sampling function
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
# Sample using RNN and prime characters
prime = "/* "
state = sess.run(cell.zero_state(1, tf.float32))
for char in prime[:-1]:
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
state = sess.run(final_state, feed_dict={input_data: x, istate:state})
# Sample 'num' characters
ret = prime
char = prime[-1] # <= This goes IN!
num = 1000
for n in range(num):
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
[probsval, state] = sess.run([probs, final_state]
, feed_dict={input_data: x, istate:state})
p = probsval[0]
sample = weighted_pick(p)
# sample = np.argmax(p)
pred = chars[sample]
ret = ret + pred
char = pred
print ("Sampling Done. \n___________________________________________\n")
print (ret)
###Output
Sampling Done.
___________________________________________
/* *syscblanim types when the formathing)
*/
void console_dir(tmp_cgroup_shorts);
/*
* This already (proces6or
* @pid_struct(__kthread_lock
unlatelf if (subskick_map shoulds (the softwach then as up is
boot fields up posix are HRPS5/
int set_futex_intend_head_unintvel_ap,
.max_acquire(void, test_cpu(&rq->load, sizeof(struct file *g),
(singlec_ns);
int EAVECLERPE
}6
rlim_polic_entires;
}
static void audit_unlock_irq(struct helds, void cpu_stattr_namespace(__kprobess) ISusyoffreings void)
{
if (!res->panicalings)
goto root = ATOST_MEM:
p >= d = __delayed_poffset;
*strt' = delta_event_fs_allowed;
BUG_ON(!oops->list, new_audit_lock);
}
freepstart;
long d;
atomic_expid(table, flags);
subbufs_pid_t now;
idle = &pi_state->flags; maxRetect_call(aggm_natch);
return printk(*fa->tv_softirqs_entry) {
address,
.cquient(i)
goto ops_type = seq_rese,
};
static struct head_stats {
struct trans_ctly *next;
/* Ax or with unbless and just even notifier restart= r
###Markdown
Now, we are ready to make our RNN model with seq2seq This network is for sampling, so we don't need batches for sequenes nor optimizers
###Code
# Important RNN parameters
rnn_size = 128
num_layers = 2
batch_size = 1 # <= In the training phase, these were both 50
seq_length = 1
# Construct RNN model
unitcell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
cell = tf.nn.rnn_cell.MultiRNNCell([unitcell] * num_layers)
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weigths
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(1, seq_length, tf.nn.embedding_lookup(embedding, input_data))
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
outputs, final_state = seq2seq.rnn_decoder(inputs, istate, cell
, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
print ("Network Ready")
# Restore RNN
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(load_dir)
print (ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
###Output
data/linux_kernel/model.ckpt-8000
###Markdown
Finally, show what RNN has generated!
###Code
# Sampling function
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
# Sample using RNN and prime characters
prime = "/* "
state = sess.run(cell.zero_state(1, tf.float32))
for char in prime[:-1]:
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
state = sess.run(final_state, feed_dict={input_data: x, istate:state})
# Sample 'num' characters
ret = prime
char = prime[-1] # <= This goes IN!
num = 1000
for n in range(num):
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
[probsval, state] = sess.run([probs, final_state]
, feed_dict={input_data: x, istate:state})
p = probsval[0]
sample = weighted_pick(p)
# sample = np.argmax(p)
pred = chars[sample]
ret = ret + pred
char = pred
print ("Sampling Done. \n___________________________________________\n")
print (ret)
###Output
Sampling Done.
___________________________________________
/* : A C. Fruemptly etweennars must be serversed */
static int __cgroup_hash_power(struct rt_mutex_d *uaddr, int watab, long
-XIT_PYS__AUTIMER_PAT(seed_class_table_watch, v1->curr);
}
static void down_cpusets(struct pid;
static int pid_thread(voids_mm)
{
if (ps->cpumainte_to_cgroup_grp <= NULL)
return 0;
}
conset sched_VRICE_SOFTIRQ_DISU{
softirq_signal(this_css_set_bytes));
}
void private = {
.mode = CPUCLOCK_BALANCE,
.process = optime)
/*
* The are
* en
* @buf' - for so allows the condext it of it regions)
* massessiging that Sto be stime in the expoxes
*/
void __fsix;
struct audit_chunk *tsk;
key_utvec_oper(struct *read_ns, struct futex_ckernel);
int atomic_attime = res->init_switch(void),
-+signal->state = 0;
tmr = tmp;
printk("%s\n", signal, &max_huts_string, 1, look_t *)(modemask++);
up_sem(cft, &(max))) {
if (probes)
set_cpu(name == 0)
goto out;
}
pposs_unlock(*pefmask_plocks);
audit_log_lock_fuces(rq);
}
static void again;
int
con
###Markdown
Now, we are ready to make our RNN model with seq2seq This network is for sampling, so we don't need batches for sequenes nor optimizers
###Code
'"\''
# Important RNN parameters
rnn_size = 128
num_layers = 2
batch_size = 1 # <= In the training phase, these were both 50
seq_length = 1
# Construct RNN model
unitcell = tf.nn.rnn_cell.BasicLSTMCell(rnn_size)
cell = tf.nn.rnn_cell.MultiRNNCell([unitcell] * num_layers)
input_data = tf.placeholder(tf.int32, [batch_size, seq_length])
istate = cell.zero_state(batch_size, tf.float32)
# Weigths
with tf.variable_scope('rnnlm'):
softmax_w = tf.get_variable("softmax_w", [rnn_size, vocab_size])
softmax_b = tf.get_variable("softmax_b", [vocab_size])
with tf.device("/cpu:0"):
embedding = tf.get_variable("embedding", [vocab_size, rnn_size])
inputs = tf.split(1, seq_length, tf.nn.embedding_lookup(embedding, input_data))
inputs = [tf.squeeze(_input, [1]) for _input in inputs]
# Output
def loop(prev, _):
prev = tf.nn.xw_plus_b(prev, softmax_w, softmax_b)
prev_symbol = tf.stop_gradient(tf.argmax(prev, 1))
return tf.nn.embedding_lookup(embedding, prev_symbol)
outputs, final_state = seq2seq.rnn_decoder(inputs, istate, cell
, loop_function=None, scope='rnnlm')
output = tf.reshape(tf.concat(1, outputs), [-1, rnn_size])
logits = tf.nn.xw_plus_b(output, softmax_w, softmax_b)
probs = tf.nn.softmax(logits)
print ("Network Ready")
# Restore RNN
sess = tf.Session()
sess.run(tf.initialize_all_variables())
saver = tf.train.Saver(tf.all_variables())
ckpt = tf.train.get_checkpoint_state(load_dir)
print (ckpt.model_checkpoint_path)
saver.restore(sess, ckpt.model_checkpoint_path)
###Output
data/linux_kernel/model.ckpt-8000
###Markdown
Finally, show what RNN has generated!
###Code
# Sampling function
def weighted_pick(weights):
t = np.cumsum(weights)
s = np.sum(weights)
return(int(np.searchsorted(t, np.random.rand(1)*s)))
# Sample using RNN and prime characters
prime = "/* "
state = sess.run(cell.zero_state(1, tf.float32))
for char in prime[:-1]:
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
state = sess.run(final_state, feed_dict={input_data: x, istate:state})
# Sample 'num' characters
ret = prime
char = prime[-1] # <= This goes IN!
num = 1000
for n in range(num):
x = np.zeros((1, 1))
x[0, 0] = vocab[char]
[probsval, state] = sess.run([probs, final_state]
, feed_dict={input_data: x, istate:state})
p = probsval[0]
sample = weighted_pick(p)
# sample = np.argmax(p)
pred = chars[sample]
ret = ret + pred
char = pred
print ("Sampling Done. \n___________________________________________\n")
print (ret)
###Output
Sampling Done.
___________________________________________
/* : A C. Fruemptly etweennars must be serversed */
static int __cgroup_hash_power(struct rt_mutex_d *uaddr, int watab, long
-XIT_PYS__AUTIMER_PAT(seed_class_table_watch, v1->curr);
}
static void down_cpusets(struct pid;
static int pid_thread(voids_mm)
{
if (ps->cpumainte_to_cgroup_grp <= NULL)
return 0;
}
conset sched_VRICE_SOFTIRQ_DISU{
softirq_signal(this_css_set_bytes));
}
void private = {
.mode = CPUCLOCK_BALANCE,
.process = optime)
/*
* The are
* en
* @buf' - for so allows the condext it of it regions)
* massessiging that Sto be stime in the expoxes
*/
void __fsix;
struct audit_chunk *tsk;
key_utvec_oper(struct *read_ns, struct futex_ckernel);
int atomic_attime = res->init_switch(void),
-+signal->state = 0;
tmr = tmp;
printk("%s\n", signal, &max_huts_string, 1, look_t *)(modemask++);
up_sem(cft, &(max))) {
if (probes)
set_cpu(name == 0)
goto out;
}
pposs_unlock(*pefmask_plocks);
audit_log_lock_fuces(rq);
}
static void again;
int
con
|
concepts/Python Ternary.ipynb | ###Markdown
A ternary expression allows for a concise way to test a condition on a single line of codeIn C, the syntax would be:```// ternary operator in Cc = (a < b) ? a : b;```Python differs from the C/Java/JavaScript syntax, as we will look at below.
###Code
job_1 = {'title': 'Python Developer', 'salary': 80_000}
job_2 = {'title': 'Store Manager', 'salary': 70_000}
choice = job_1 if job_1['salary'] > job_2['salary'] else job_2
choice
###Output
_____no_output_____ |
Notebooks/Covid_Biopython_analysis.ipynb | ###Markdown
Biopython Basics Applications : - **Sequence Analysis** (DNA/RNA/Protein) - **Transcription** & **translation studies** - Quering & accessing **Bioinformatics Databases**a. **Entrez**b. **PDB**c. **Genbank**- 3D **structure** analysis 1. Install modules & packages
###Code
# !pip install pandas
# !pip install nglview
# !pip install biopython
# !pip install matplotlib
# !conda install -c rmg py3dmol -y
# !pip install dna_features_viewer
import Bio
import heapq
import pylab
import urllib
import py3Dmol
import pandas as pd
import nglview as nv
from Bio.Seq import Seq
from Bio.Blast import NCBIWWW
from Bio.Alphabet import IUPAC
from collections import Counter
from Bio.Data import CodonTable
from Bio import SeqIO, SearchIO, Entrez
from Bio.PDB import PDBParser,MMCIFParser
from Bio.SeqUtils import GC,molecular_weight
from dna_features_viewer import GraphicFeature, GraphicRecord
from Bio.Alphabet import generic_dna,generic_rna,generic_protein
# Attributes of Biopython
dir(Bio)
###Output
_____no_output_____
###Markdown
2. Sequence analysis
###Code
# dir(Seq)
# DNA sequence
dna = Seq('ATATATATAGCGCGCGCGCTCTCTCGGAGAGAGAGAGGCGCGGCGCGCGCGCTTCTCTGAGA')
dna
# identify the type
type(dna)
# converting sequence to string
type(str(dna))
# converting sequence to alphabet
type(dna.alphabet)
###Output
_____no_output_____
###Markdown
2.1. Alphabet Types--- generic_dna/rna- generic_proteins- IUPACUnambiguousDNA (provides basic letters)- IUPACAmbiguousDNA (provides for ambiguity letters for every possible situation) Use cases of Alphabets--- To identify the type of information contained by within a sequence object- provides a mean of constraining the information- facilitates sequence checking.
###Code
seq1 = Seq('atgagtcagcagacatcagacgacg', generic_dna)
seq2 = Seq('auauagcgccucgcgcggcgcauau', generic_rna)
seq3 = Seq('atattatagcacacagacaggatct', IUPAC.unambiguous_dna)
seq1.alphabet
seq2.alphabet
seq3.alphabet
###Output
_____no_output_____
###Markdown
3. Sequence Manipulation- indexing/slicing- concatination- codon search- GC content- complement- transcription- translation
###Code
dna_seq = Seq('ATATATATAGCGCGCGCGCTCTCTCGGAGAGAGAGAGGCGCGGCGCGCGCGCTTCTCTGAGA',generic_dna)
# Indexing / slicing
dna_seq[0:2]
# concatination
dna_seq2 = Seq('cgcgcgtatattagaccagagcaca',generic_dna)
dna_seq[0:4] + dna_seq2[0:4]
# codon search
dna_seq.find('G')
dna_seq.find('AGA')
# codon count
dna_seq.count('T')
# GC content
(dna_seq.count('G') + dna_seq.count('C'))/(len(dna_seq)) * 100
###Output
_____no_output_____
###Markdown
OR
###Code
GC(dna_seq)
# complement & reverse complement
comp1 = dna_seq[0:10]
comp2 = dna_seq[0:10].complement()
comp3 = dna_seq[0:10].reverse_complement()
print(f" \
sequence = {comp1}\n \
complement = {comp2}\n \
reverse complement = {comp3}")
# Calculating molecular weight of the sequence
molecular_weight(dna_seq)
###Output
_____no_output_____
###Markdown
3.1 Transcription & Translation- DNA > mRNA = transcription- mRNA > amino acid = translation
###Code
mRNA = dna_seq.transcribe()
mRNA[:10]
protein = mRNA.translate()
protein[-10:]
# change symbol for stop codon
mRNA.translate(stop_symbol = '$')[-10:]
# reverse transcription
mRNA.back_transcribe()[:10]
###Output
_____no_output_____
###Markdown
Can protein sequences be reverse translated ?Note : there is no function called `back_translate` so we'll make use of `back_transcribe`.
###Code
protein.back_transcribe()
###Output
_____no_output_____
###Markdown
This error is true for all the biological life on earth too...- we can't perform an exact "reverse translation" of course, since several amino acids are produced by the same codon. Note that if instead we started with the nucleotide sequence, then we could use Biopython's .transcribe() and .translate() functions to convert sequences from DNA to RNA and DNA to protein respectively. 3.1.1. Custom translation
###Code
# function to translate any input sequence of any length
translation_table = {
'ATA':'I', 'ATC':'I', 'ATT':'I', 'ATG':'M',
'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACT':'T',
'AAC':'N', 'AAT':'N', 'AAA':'K', 'AAG':'K',
'AGC':'S', 'AGT':'S', 'AGA':'R', 'AGG':'R',
'CTA':'L', 'CTC':'L', 'CTG':'L', 'CTT':'L',
'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCT':'P',
'CAC':'H', 'CAT':'H', 'CAA':'Q', 'CAG':'Q',
'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGT':'R',
'GTA':'V', 'GTC':'V', 'GTG':'V', 'GTT':'V',
'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCT':'A',
'GAC':'D', 'GAT':'D', 'GAA':'E', 'GAG':'E',
'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGT':'G',
'TCA':'S', 'TCC':'S', 'TCG':'S', 'TCT':'S',
'TTC':'F', 'TTT':'F', 'TTA':'L', 'TTG':'L',
'TAC':'Y', 'TAT':'Y', 'TAA':'_', 'TAG':'_',
'TGC':'C', 'TGT':'C', 'TGA':'_', 'TGG':'W',
}
def translate(seq):
'''
translates sequence using the `translation_table`
'''
protein = ''
if len(seq)%3 == 0:
for i in range(0,len(seq),3):
codon = seq[i:i+3]
protein += translation_table[codon]
return protein
translate('ATCGATCTCTGA')
###Output
_____no_output_____
###Markdown
3.1.2. Builtin codon table Unambiguous DNA
###Code
print(CodonTable.unambiguous_dna_by_name['Standard'])
###Output
Table 1 Standard, SGC0
| T | C | A | G |
--+---------+---------+---------+---------+--
T | TTT F | TCT S | TAT Y | TGT C | T
T | TTC F | TCC S | TAC Y | TGC C | C
T | TTA L | TCA S | TAA Stop| TGA Stop| A
T | TTG L(s)| TCG S | TAG Stop| TGG W | G
--+---------+---------+---------+---------+--
C | CTT L | CCT P | CAT H | CGT R | T
C | CTC L | CCC P | CAC H | CGC R | C
C | CTA L | CCA P | CAA Q | CGA R | A
C | CTG L(s)| CCG P | CAG Q | CGG R | G
--+---------+---------+---------+---------+--
A | ATT I | ACT T | AAT N | AGT S | T
A | ATC I | ACC T | AAC N | AGC S | C
A | ATA I | ACA T | AAA K | AGA R | A
A | ATG M(s)| ACG T | AAG K | AGG R | G
--+---------+---------+---------+---------+--
G | GTT V | GCT A | GAT D | GGT G | T
G | GTC V | GCC A | GAC D | GGC G | C
G | GTA V | GCA A | GAA E | GGA G | A
G | GTG V | GCG A | GAG E | GGG G | G
--+---------+---------+---------+---------+--
###Markdown
Unambiguous RNA
###Code
print(CodonTable.unambiguous_rna_by_name['Standard'])
# dir(CodonTable)
###Output
_____no_output_____
###Markdown
4. Handling Sequence data (FASTA File)
###Code
# Loading FASTA file
seq_file = SeqIO.read("Data/sequence.fasta", "fasta")
###Output
_____no_output_____
###Markdown
4.1. Sequence details
###Code
type(seq_file)
# list sequence details
for record in SeqIO.parse("Data/sequence.fasta","fasta"):
print(record)
# list individula features
for record in SeqIO.parse("Data/sequence.fasta","fasta"):
print(record.id)
print(record.description)
# store sequence for later analysis
seqfromfile = seq_file.seq
seqfromfile
###Output
_____no_output_____
###Markdown
We can now perform `transcription` , `translation` or GC content calculation with this sequence as shown above.
###Code
len(seqfromfile)
protein_seq = seqfromfile.translate()
len(protein_seq)
# Listing the most common amino acids
common_amino = Counter(protein_seq)
common_amino.most_common(10)
del common_amino['*']
pylab.bar(common_amino.keys(),common_amino.values())
pylab.title("%i protein sequences\nLengths %i to %i"
% (len(common_amino.values()),
min(common_amino.values()),
max(common_amino.values())))
pylab.xlabel("Amino acid")
pylab.ylabel("frequency")
pylab.show()
###Output
_____no_output_____
###Markdown
Since stop codon * signifies end of a protein we can split the sequence using ( * )
###Code
protein_list = [str(i) for i in protein_seq.split('*')]
protein_list[:10]
# listing proteins greater than a given length
large_proteins = [x for x in protein_list if len(x)> 10]
len(large_proteins)
# convert sequences to dataframe
df = pd.DataFrame({'protein_seq':large_proteins})
df.head()
# add a new column with length
df['length'] = df['protein_seq'].apply(len)
df.head()
# # plot to visualise protein sequences based on length
# pylab.hist(df.length, bins=20)
# pylab.title("%i protein sequences\nLengths %i to %i" \
# % (len(df.length),
# min(df.length),
# max(df.length)))
# pylab.xlabel("Sequence length (bp)")
# pylab.ylabel("Count")
# pylab.show()
#sort based on legth
df.sort_values(by = ['length'], ascending = False)[:10]
###Output
_____no_output_____
###Markdown
OR
###Code
df.nlargest(10,'length')
###Output
_____no_output_____
###Markdown
5. Basic local alignment using NCBI-BLAST
###Code
# let's take a single protein from the table
one_large_protein = df.nlargest(1,'length')
single_prot = one_large_protein.iloc[0,0]
# write to a file
with open("Data/single_seq.fasta","w") as file:
file.write(">unknown \n"+single_prot)
from Bio import SeqIO
read = SeqIO.read("single_seq.fasta", "fasta")
read.seq
%%time
# based on the internet speed this query might take 2-5 minutes to run
result_handle = NCBIWWW.qblast("blastp","pdb",read.seq)
blast_qresult = SearchIO.read(result_handle, "blast-xml")
print(blast_qresult)
#fetch the id, description, evalue, bitscore & alignment of first hit
seqid = blast_qresult[0]
details = seqid[0]
print(f"\
Sequence ID:{seqid.id}\n\
description:{seqid.description}\n\
E value: {details.evalue} \n\
Bit Score: {details.bitscore}\n\
")
print(f"alignment:\n{details.aln}")
pdbid = seqid.id.split('|')[1]
pdbid
###Output
_____no_output_____
###Markdown
Optional 5.1. Entrez
###Code
Entrez.email = "[email protected]"
entrez_record = Entrez.efetch(db="protein", id=seqid.id,
retmode="txt", rettype="gb")
genbank_record = SeqIO.read(entrez_record,"genbank")
with open("Data/genbank_record.txt","w") as gb:
gb.write(str(genbank_record))
###Output
_____no_output_____
###Markdown
There's a lot of information in the genbank record if you know where to find it... 0. Is it single or double stranded and a DNA or RNA ? In case of DNA
###Code
# IN CASE OF DNA
# genbank_record.annotations["molecule"])
###Output
_____no_output_____
###Markdown
1. What is the full NCBI taxonomy of this virus?
###Code
genbank_record.annotations["taxonomy"]
###Output
_____no_output_____
###Markdown
2. What are the relevant references/labs who generated the data?
###Code
for reference in genbank_record.annotations["references"]:
print(reference)
###Output
location: [0:935]
authors: Hillen,H.S., Kokic,G., Farnung,L., Dienemann,C., Tegunov,D. and Cramer,P.
title: Structure of replicating SARS-CoV-2 polymerase
journal: Nature (2020) In press
medline id:
pubmed id: 32438371
comment: Publication Status: Available-Online prior to print
location: [0:935]
authors: Hillen,H.S., Kokic,G., Farnung,L., Dienemann,C., Tegunov,D. and Cramer,P.
title: Direct Submission
journal: Submitted (06-MAY-2020)
medline id:
pubmed id:
comment:
###Markdown
3. Retrieve the protein coding sequences (CDSs) from the Genbank record (in case of DNA) OR3. Retrive the features of the protein(in case of Protein)
###Code
# number of features
len(genbank_record.features)
#list features
{feature.type for feature in genbank_record.features}
# finding the CDS
# CDSs = [feature for feature in genbank_record.features if feature.type == "CDS"]
# len(CDSs)
# listing the gene
# CDSs[0].qualifiers["gene"]
# hunting for it's protein
# protein_seq = Seq(CDSs[0].qualifiers["translation"][0])
###Output
_____no_output_____
###Markdown
4. Does the protein sequence start with a "start codon" ?
###Code
genbank_record.seq.startswith("M")
print(CodonTable.unambiguous_dna_by_id[1])
###Output
Table 1 Standard, SGC0
| T | C | A | G |
--+---------+---------+---------+---------+--
T | TTT F | TCT S | TAT Y | TGT C | T
T | TTC F | TCC S | TAC Y | TGC C | C
T | TTA L | TCA S | TAA Stop| TGA Stop| A
T | TTG L(s)| TCG S | TAG Stop| TGG W | G
--+---------+---------+---------+---------+--
C | CTT L | CCT P | CAT H | CGT R | T
C | CTC L | CCC P | CAC H | CGC R | C
C | CTA L | CCA P | CAA Q | CGA R | A
C | CTG L(s)| CCG P | CAG Q | CGG R | G
--+---------+---------+---------+---------+--
A | ATT I | ACT T | AAT N | AGT S | T
A | ATC I | ACC T | AAC N | AGC S | C
A | ATA I | ACA T | AAA K | AGA R | A
A | ATG M(s)| ACG T | AAG K | AGG R | G
--+---------+---------+---------+---------+--
G | GTT V | GCT A | GAT D | GGT G | T
G | GTC V | GCC A | GAC D | GGC G | C
G | GTA V | GCA A | GAA E | GGA G | A
G | GTG V | GCG A | GAG E | GGG G | G
--+---------+---------+---------+---------+--
###Markdown
5.2. Sequence visualisation- [DNA features viewer](https://github.com/Edinburgh-Genome-Foundry/DnaFeaturesViewer) allows to plot nucleotide or amino acid sequences under the record plot:
###Code
from dna_features_viewer import BiopythonTranslator
graphic_record = BiopythonTranslator().translate_record(genbank_record)
plot = graphic_record.plot(figure_width=15,
strand_in_label_threshold=5)
# plot
###Output
_____no_output_____
###Markdown
This enables for instance to plot an overview of a sequence along with a detailed detail of a sequence subsegment
###Code
# Incase of DNA
# from Bio.SeqRecord import SeqRecord
# import matplotlib.pyplot as plt
# from Bio import SeqIO
# import numpy as np
from dna_features_viewer import BiopythonTranslator
fig, (ax1, ax2) = plt.subplots(
2, 1, figsize=(20, 10), sharex=True, gridspec_kw={"height_ratios": [4, 1]}
)
# PLOT THE RECORD MAP
# record = SeqIO.read(entrez_record,"genbank")
record = genbank_record
graphic_record = BiopythonTranslator().translate_record(record)
graphic_record.plot(ax=ax1, with_ruler=False,
strand_in_label_threshold=4)
# PLOT THE LOCAL GC CONTENT (we use 50bp windows)
gc = lambda s: 100.0 * len([c for c in s if c in "GC"]) / 50
xx = np.arange(len(record.seq) - 50)
yy = [gc(record.seq[x : x + 50]) for x in xx]
ax2.fill_between(xx + 25, yy, alpha=0.3)
ax2.set_ylim(bottom=0)
ax2.set_ylabel("GC(%)")
###Output
_____no_output_____
###Markdown
6. 3D structure visualisation of proteins Inorder to visualise the protein we need to fetch the pdb file from pdb database We'll use `PDBParser` & `MMCIFParser` for this purpose 6.1. retreiving PDB structure from RCSB-PDB
###Code
# link format https://files.rcsb.org/download/6YYT.pdb
urllib.request.urlretrieve('https://files.rcsb.org/download/6YYT.pdb',
'Data/6YYT.pdb')
###Output
_____no_output_____
###Markdown
6.2. Reading the PDB structure
###Code
parser = PDBParser()
structure = parser.get_structure("6YYT","Data/6YYT.pdb")
structure
###Output
_____no_output_____
###Markdown
6.2.1. Identifying the number of chains & atoms
###Code
for chain in structure[0]:
print(f"chain: {chain}, chainid: {chain.id}")
# Check the atoms
for model in structure:
print(model)
for chain in model:
print(chain)
# for residue in chain:
# for atom in residue:
# print(atom)
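# Quick totals (sketch; Bio.PDB entities are iterable):
# print("chains:", len(list(structure[0])), "atoms:", len(list(structure.get_atoms())))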
###Output
<Model id=0>
<Chain id=A>
<Chain id=B>
<Chain id=C>
<Chain id=D>
<Chain id=P>
<Chain id=Q>
<Chain id=T>
<Chain id=U>
###Markdown
6.3. Visualising the protein structure We'll make use of `nglview` & `py3Dmol` 6.3.1. `nglview`
###Code
nv.demo()
view1 = nv.show_biopython(structure)
view1
###Output
_____no_output_____
###Markdown
6.3.1.2 Capturing the current pose
###Code
view1.render_image()
###Output
_____no_output_____
###Markdown
6.3.2. `py3Dmol`
###Code
view2 = py3Dmol.view(query='pdb:6YYT')
view2.setStyle({
'cartoon':{'color':'spectrum'}
})
view2.display_image()
###Output
_____no_output_____
###Markdown
BONUS- listing modules in the current jupyter notebook- exporting the list of modules used in the current notebook to .txt file
###Code
# Listing currently used packages
import types
def imports():
for name, val in globals().items():
if isinstance(val, types.ModuleType):
yield val.__name__
list(imports())
# writing package names to a file
with open("requirements.txt","w") as req:
req.write(str(list(imports())))
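# Note: pip expects one requirement per line, so a pip-compatible variant
# (module names only, versions omitted) could be:
# with open("requirements.txt", "w") as req:
#     req.write("\n".join(sorted(set(imports()))))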
###Output
_____no_output_____ |
notebooks/train/shapes.ipynb | ###Markdown
Config
###Code
config = {
"lr": 1e-5,
"epochs_num": 3000,
"batch_size": 64,
"log_each": 1,
"save_each": 2,
"device": "cuda:2",
"x_dim": 1024,
"z_dim": 8,
"disc_coef": 5,
"lambda": 5
}
###Output
_____no_output_____
###Markdown
Data
###Code
from generation.dataset.shapes_dataset import ShapesDataset
dataset = ShapesDataset(4, signal_dim=config['x_dim'])
idx = np.random.choice(range(len(dataset)))
signal = dataset[idx].numpy()
print("Signal size:", signal.shape)
plt.plot(signal)
plt.show()
###Output
Signal size: (1024,)
###Markdown
Models
###Code
from generation.nets.shapes import Generator, Discriminator
discriminator = Discriminator(config)
test_tensor = dataset[0].unsqueeze(0)
discriminator(test_tensor, debug=True)
generator = Generator(config)
test_z = torch.rand(1, config['z_dim'])
output = generator(test_z, debug=True)
assert(output.shape == test_tensor.shape)
###Output
torch.Size([1, 1, 1024])
torch.Size([1, 8, 1024])
torch.Size([1, 8, 340])
torch.Size([1, 32, 340])
torch.Size([1, 32, 112])
torch.Size([1, 8, 112])
torch.Size([1, 8, 36])
torch.Size([1, 288])
torch.Size([1, 1])
torch.Size([1, 1024])
torch.Size([1, 1, 1024])
torch.Size([1, 8, 1024])
torch.Size([1, 32, 1024])
torch.Size([1, 16, 1024])
torch.Size([1, 8, 1024])
torch.Size([1, 1, 1024])
###Markdown
Training
###Code
from generation.training.wgan_trainer import WganTrainer
g_optimizer = torch.optim.Adam(generator.parameters(), lr=config['lr'])
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=config['lr'])
trainer = WganTrainer(generator, discriminator, g_optimizer, \
d_optimizer, config)
trainer.run_train(dataset)
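# After training, samples could be drawn from the generator for a visual check
# (a sketch; assumes the generator stays on CPU and uses config['z_dim'] as above):
# with torch.no_grad():
#     fake = generator(torch.rand(4, config['z_dim']))
# plt.plot(fake[0].squeeze().cpu().numpy()); plt.show()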
###Output
_____no_output_____ |
Week5/PersonAttributes/notebooks/A5_lr_finder_exp.ipynb | ###Markdown
###Code
images[0].shape
len(images)
from models.custom_model_builder import get_custom_model
model = get_custom_model(input_shape=(224, 224, 3))
model
model.summary()
from keras.optimizers import SGD
losses = {
"gender_output": "binary_crossentropy",
"image_quality_output": "categorical_crossentropy",
"age_output": "categorical_crossentropy",
"weight_output": "categorical_crossentropy",
"bag_output": "categorical_crossentropy",
"footwear_output": "categorical_crossentropy",
"pose_output": "categorical_crossentropy",
"emotion_output": "categorical_crossentropy"
}
loss_weights = {
"gender_output": 1.0,
"image_quality_output": 1.0,
"age_output": 1.0,
"weight_output": 1.0,
"bag_output": 1.0,
"footwear_output": 1.0,
"pose_output": 1.0,
"emotion_output": 1.0
}
opt = SGD(lr=0.001, momentum=0.9)
model.compile(
optimizer = opt,
loss = losses,
loss_weights = loss_weights,
metrics=["accuracy"]
)
from feature_scripts.cyclic_lr import LRFinder, OneCycleLR
train_df.shape
lr_dir = Path.join(project_dir, 'models', 'lr_finder')
if not Path.exists(lr_dir):
os.makedirs(lr_dir)
print('Dir created')
lr_dir
lr_callback = LRFinder(num_samples=train_df.shape[0], batch_size=128,
minimum_lr=0.00002, maximum_lr=1.0,verbose=False,
lr_scale='exp', save_dir=lr_dir)
lr_history = model.fit_generator(train_gen,
steps_per_epoch = 10000,
epochs=1,
validation_data = valid_gen,
callbacks=[lr_callback],
verbose=1)
lr_callback.plot_schedule(clip_beginning=10)
lr_callback.plot_schedule(clip_beginning=20)
lr_callback.plot_schedule(clip_beginning=30)
lr_callback.plot_schedule(clip_beginning=40)
###Output
_____no_output_____
###Markdown
Max LR - 10^(-2) --> 0.01
###Code
results = model.evaluate_generator(valid_gen, verbose=1)
dict(zip(model.metrics_names, results))
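# With the max LR read off the plots (~1e-2), training could switch to the
# imported OneCycleLR callback. A sketch only: the argument names are assumed
# to mirror the LRFinder call above and may differ in feature_scripts.cyclic_lr.
# one_cycle = OneCycleLR(max_lr=0.01, num_samples=train_df.shape[0], batch_size=128)
# model.fit_generator(train_gen, steps_per_epoch=10000, epochs=5,
#                     validation_data=valid_gen, callbacks=[one_cycle])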
###Output
_____no_output_____ |
examples/2_animations_and_callbacks/1_surface_plot_animated.ipynb | ###Markdown
Animated surface plot y=f(x,z)This example shows how to: - create a surface plot - animate updates of the plot data - optimize compute load
###Code
# use "notebook" option to display figure between cells
# in the browser window - heaviest to the CPU
%matplotlib notebook
# use "qt" option to open figure outside the browser, this
# reduces CPU load (less interface layers and image copies
# between the ray tracer and GUI display)
#%matplotlib qt
# TkOptiX GUI instead of matplotlib+NpOptix gives the best
# performance plus all GUI actions (rotations, focus, etc.);
# change import below and raytracer constructor name
# (indicated in the code)
#from plotoptix import TkOptiX
from plotoptix import NpOptiX
from plotoptix.utils import map_to_colors, simplex
from plotoptix.materials import m_eye_normal_cos
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Make some data. The mesh size and simplex noise calculations are not very significant in this example. You can try much larger meshes.
###Code
class params():
rx = (-1, 16); nx = 180
rz = (0, 10); nz = 100
x = np.linspace(rx[0], rx[1], nx)
z = np.linspace(rz[0], rz[1], nz)
X, Z = np.meshgrid(x, z)
XZ = np.stack((X.flatten(), Z.flatten(), np.full(nx*nz, 1.0, dtype=np.float32))).T.reshape(nz, nx, 3)
XZ = np.ascontiguousarray(XZ, dtype=np.float32)
Y = simplex(XZ)
###Output
_____no_output_____
###Markdown
**Setup callback functions**
###Code
def init(rt): # configure scene and plot data at initialization
rt.set_param(
min_accumulation_step=16, # <- smooth out images, good for camera with depth
# of field simulation (DoF), affects GPU load
max_accumulation_frames=50 # <- max number of frames to compute when paused
)
rt.setup_material("cos", m_eye_normal_cos) # setup a very fast-shaded material
# (no secondary rays are calculated,
# saves lots of GPU time)
# standard gamma correction (2D postprocessing is almost for free on GPU)
rt.set_float("tonemap_exposure", 0.8)
rt.set_float("tonemap_gamma", 2.2)
rt.add_postproc("Gamma")
rt.set_background(0)
rt.set_ambient(0.25)
rt.set_data_2d("surface", params.Y,
range_x=params.rx, range_z=params.rz,
c=map_to_colors(params.Y, "OrRd"),
mat="cos", # comment out to use default, diffuse material
# (diffuse requires multiple secondary rays)
make_normals=True)
rt.setup_camera("cam1",
cam_type="DoF", # comment out to use default, pinhole camera
# (pinhole has no DoF and requires very few
# accumulaton frames, for anti-aliasing only)
eye=[7.5, 1.5, 18],
aperture_radius=0.2,
fov=20, focal_scale=0.62)
rt.setup_light("light1", pos=[2, 5, 20], color=5, radius=4) # not used with m_eye_normal_cos
def compute(rt, delta): # compute scene updates in parallel to the raytracing
params.XZ += 0.03 * delta * np.array([-0.2, 1, 0.4], dtype=np.float32)
params.Y = simplex(params.XZ, params.Y) # compute noise "in place"
def update_data(rt): # update plot data (raytracing is finished here)
rt.update_data_2d("surface", params.Y,
c=map_to_colors(params.Y, "OrRd"))
def update_image(rt): # update your image here (not used with TkOptiX)
imgplot.set_data(rt._img_rgba)
plt.draw()
###Output
_____no_output_____
###Markdown
Prepare the output figure:
###Code
width = 1500; height = 500 # width*height ~ rays_to_trace, directly affects GPU load!
plt.figure(1, figsize=(9.5, 3.5))
plt.tight_layout()
imgplot = plt.imshow(np.zeros((height, width, 4), dtype=np.uint8))
optix = NpOptiX( # change to TkOptiX for the lowest CPU load
on_initialization=init,
on_scene_compute=compute,
on_rt_completed=update_data,
on_launch_finished=update_image, # comment out if TkOptiX is used
width=width, height=height,
start_now=True)
###Output
_____no_output_____
###Markdown
The `on_scene_compute` - `on_rt_completed` callbacks can be paused/resumed. Raytracing is still running, until the `max_accumulation_frames` is reached. You can run the two following cells multiple times and see how the image is smoothed out during pause.
###Code
optix.pause_compute()
optix.resume_compute()
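# The accumulation limit chosen in init() can also be raised on the fly before
# pausing, for a smoother still image (sketch):
# optix.set_param(max_accumulation_frames=200)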
###Output
_____no_output_____
###Markdown
Stop all (raytracing cannot be restarted from that point):
###Code
optix.close()
###Output
_____no_output_____
###Markdown
Animated surface plot y=f(x,z)This example shows how to: - create a surface plot - animate updates of the plot data - optimize compute load
###Code
# use "notebook" option to display figure between cells
# in the browser window - heaviest to the CPU
%matplotlib notebook
# use "qt" option to open figure outside the browser, this
# reduces CPU load (less interface layers and image copies
# between the ray tracer and GUI display)
#%matplotlib qt
# TkOptiX GUI instead of matplotlib+NpOptix gives the best
# performance plus all GUI actions (rotations, focus, etc.);
# change import below and raytracer constructor name
# (indicated in the code)
#from plotoptix import TkOptiX
from plotoptix import NpOptiX
from plotoptix.utils import map_to_colors, simplex
from plotoptix.materials import m_eye_normal_cos
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Make some data. The mesh size and simplex noise calculations are not very significant in this example. You can try much larger meshes.
###Code
class params():
rx = (-1, 16); nx = 180
rz = (0, 10); nz = 100
x = np.linspace(rx[0], rx[1], nx)
z = np.linspace(rz[0], rz[1], nz)
X, Z = np.meshgrid(x, z)
XZ = np.stack((X.flatten(), Z.flatten(), np.full(nx*nz, 1.0, dtype=np.float32))).T.reshape(nz, nx, 3)
XZ = np.ascontiguousarray(XZ, dtype=np.float32)
Y = simplex(XZ)
###Output
_____no_output_____
###Markdown
**Setup callback functions**
###Code
def init(rt): # configure scene and plot data at initialization
rt.set_param(
min_accumulation_step=16, # <- smooth out images, good for camera with depth
# of field simulation (DoF), affects GPU load
max_accumulation_frames=50 # <- max number of frames to compute when paused
)
rt.setup_material("cos", m_eye_normal_cos) # setup a very fast-shaded material
# (no secondary rays are calculated,
# saves lots of GPU time)
# standard gamma correction (2D postprocessing is almost for free on GPU)
rt.set_float("tonemap_exposure", 0.8)
rt.set_float("tonemap_gamma", 2.2)
rt.add_postproc("Gamma")
rt.set_background(0)
rt.set_ambient(0.25)
rt.set_data_2d("surface", params.Y,
range_x=params.rx, range_z=params.rz,
c=map_to_colors(params.Y, "OrRd"),
mat="cos", # comment out to use default, diffuse material
# (diffuse requires multiple secondary rays)
make_normals=True)
rt.setup_camera("cam1",
cam_type="DoF", # comment out to use default, pinhole camera
# (pinhole has no DoF and requires very few
# accumulaton frames, for anti-aliasing only)
eye=[7.5, 1.5, 18],
aperture_radius=0.2,
fov=20, focal_scale=0.62)
rt.setup_light("light1", pos=[2, 5, 20], color=5, radius=4) # not used with m_eye_normal_cos
def compute(rt, delta): # compute scene updates in parallel to the raytracing
params.XZ += 0.03 * delta * np.array([-0.2, 1, 0.4], dtype=np.float32)
params.Y = simplex(params.XZ, params.Y) # compute noise "in place"
def update_data(rt): # update plot data (raytracing is finished here)
rt.update_data_2d("surface",
pos=params.Y,
c=map_to_colors(params.Y, "OrRd"))
def update_image(rt): # update your image here (not used with TkOptiX)
imgplot.set_data(rt._img_rgba)
plt.draw()
###Output
_____no_output_____
###Markdown
Prepare the output figure:
###Code
width = 1500; height = 500 # width*height ~ rays_to_trace, directly affects GPU load!
plt.figure(1, figsize=(5.5, 2))
plt.tight_layout()
imgplot = plt.imshow(np.zeros((height, width, 4), dtype=np.uint8))
optix = NpOptiX( # change to TkOptiX for the lowest CPU load
on_initialization=init,
on_scene_compute=compute,
on_rt_completed=update_data,
on_launch_finished=update_image, # comment out if TkOptiX is used
width=width, height=height,
start_now=True)
###Output
_____no_output_____
###Markdown
The `on_scene_compute` - `on_rt_completed` callbacks can be paused/resumed. Raytracing is still running, until the `max_accumulation_frames` is reached. You can run the two following cells multiple times and see how the image is smoothed out during pause.
###Code
optix.pause_compute()
optix.resume_compute()
###Output
_____no_output_____
###Markdown
Stop all (raytracing cannot be restarted from that point):
###Code
optix.close()
###Output
_____no_output_____ |
files/notebooks/Macro_Prediction_Models/addingHourlyTraffic.ipynb | ###Markdown
###Code
import pandas as pd
import tqdm
import datetime
import pickle
from ast import literal_eval
import numpy as np
import calendar
import os
#initialization
Vehicle_Type = 'Electric_Vehicles'
Vehicle_ID = [751]
dateFormat = '%Y-%m-%d'
datetimeFormat = '%Y-%m-%d %H:%M:%S:%f'
###Output
_____no_output_____
###Markdown
Loading data for mapping OSM segments to TMC IDs
###Code
# load the OSM_TMC_MAP
OSM_TMC_MAP_PATH = os.path.join(os.getcwd(), "data", "osm_tmc_matching_ids.pickle")
with open(OSM_TMC_MAP_PATH, 'rb') as handle:
OSM_TMC_MAP = pickle.load(handle)
###Output
_____no_output_____
###Markdown
Loading Hourly Traffic Data for Chattanooga
###Code
df_TMC = pd.read_csv(f'Chattanooga_TrafficData_August19_July20.csv')
print(df_TMC.columns)
df_TMC=df_TMC.dropna()
Columns = ['Speed_Real','Speed_FreeFlow','Speed_JF','Hour']
for col in df_TMC.columns:
if col in Columns:
df_TMC[col] = df_TMC[col].apply(literal_eval)
TMC_Id_for_Matching = list(df_TMC.TMC)
Day = list(df_TMC.Day)
Hour = list(df_TMC.Hour)
Date = list(df_TMC.Date)
Hourly_Speed_Real = list(df_TMC.Speed_Real)
Hourly_Speed_Freeflow = list(df_TMC.Speed_FreeFlow)
Hourly_Jam_Factor = list(df_TMC.Speed_JF)
def findDay(year, month, day):
dayNumber = calendar.weekday(year, month, day)
days = ["Monday", "Tuesday", "Wednesday", "Thursday",
"Friday", "Saturday", "Sunday"]
return (dayNumber)
###Output
_____no_output_____
###Markdown
Mapping OSM to TMC ID
###Code
Vehicle_Name = f'BYD_751'
print(f'Processing {Vehicle_Name}')
df = pd.read_csv(f'{Vehicle_Name}_with_Elevation_Weather.csv', low_memory=False)
print(len(df))
OSM_Feature = list(df.OSM_Feature)
TMC_Id = []
OSM = []
Found_OSM = []
Not_Found = 0
for i in OSM_Feature:
i = str(i)
temp = []
for key, value in OSM_TMC_MAP.items():
if i == key:
temp.append(value)
if len(temp) != 0:
for j in temp:
Found_OSM.append(i)
TMC_Id.append(j)
else:
Not_Found += 1
OSM.append(i)
TMC_Id.append(0)
print(f'Total Segments = {len(TMC_Id)}\n TMC-ID not found = {Not_Found}')
print(f'Total Unique OSM = {len(set(OSM_Feature))}\n Mapped to TMC = {len(set(Found_OSM))} \n')
TMC_Id = np.array(TMC_Id)
df['TMC_Id'] = TMC_Id
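# Note: OSM_TMC_MAP is a dict, so the inner scan over items() could be a direct
# lookup (sketch, ignoring the Found_OSM / Not_Found bookkeeping above):
# TMC_Id = [OSM_TMC_MAP.get(str(o), 0) for o in OSM_Feature]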
###Output
_____no_output_____
###Markdown
Breaking up TimeStamps into Time of Day and Day of Week
###Code
Time = df.TimeStart
Lat_ViriCity = df.Initial_recorded_Latitude
Long_ViriCity = df.Initial_recorded_Longitude
Date_from_ViriCity = []
Hour_from_ViriCity = []
Day_of_Week = []
Time_of_Day = []
for i in Time:
timestamp = i
TD = datetime.datetime.strptime(timestamp, datetimeFormat)
Date_from_ViriCity.append(TD.date())
year = TD.year
month = TD.month
day = TD.day
Day_of_Week.append(findDay(year, month, day))
Time_of_Day.append(TD.hour)
df['Day_of_Week'] = Day_of_Week
df['Time_of_Day'] = Time_of_Day
###Output
_____no_output_____
###Markdown
Matching with TMC and adding Hourly Traffic Data
###Code
TMC_Id = list(df.TMC_Id)
Day_of_Week = list(df.Day_of_Week)
Time_of_Day = list(df.Time_of_Day)
TimeStart = list(df.TimeStart)
Speed_Ratio = []
Jam_Factor = []
for count in tqdm.tqdm(range(len(TMC_Id))):
tmc = TMC_Id[count]
indi_tmc = []
timestamp = TimeStart[count]
TD = datetime.datetime.strptime(timestamp, datetimeFormat)
date_viricity = TD.date()
hour_viricity = TD.hour
if len(tmc) > 2:
tmc = tmc.replace("', '", ",")
tmc = tmc.replace("'", "")
tmc = tmc.replace("[", "")
tmc = tmc.replace("]", "")
tmc = tmc.split(',')
for j in tmc:
indi_tmc.append(j)
if len(indi_tmc)>0:
for indi in indi_tmc:
tag = 0
tmp_JF = []
tmp_SR = []
for i in range(len(TMC_Id_for_Matching)):
date_time_str = Date[i]
TD = datetime.datetime.strptime(date_time_str, '%Y-%m-%d')
Date_Traffic = TD.date()
hour_Traffic = TD.hour
if indi==TMC_Id_for_Matching[i] and date_viricity==Date_Traffic:
hour_list=Hour[i]
JF = Hourly_Jam_Factor[i]
RS = Hourly_Speed_Real[i]
FF = Hourly_Speed_Freeflow[i]
for h in hour_list:
if h==hour_viricity:
index = hour_list.index(h)
tmp_JF.append(JF[index])
ratio = RS[index]/FF[index]
tmp_SR.append(ratio)
tag = 1
if tag != 1:
tmp_SR.append(1)
tmp_JF.append(0)
Jam_Factor.append(sum(tmp_JF)/len(tmp_JF))
Speed_Ratio.append(sum(tmp_SR)/len(tmp_SR))
else:
Speed_Ratio.append(1)
Jam_Factor.append(0)
else:
Speed_Ratio.append(1)
Jam_Factor.append(0)
df['Speed_Ratio'] = Speed_Ratio
df['Jam_Factor'] = Jam_Factor
df.to_csv(f'{Vehicle_Name}_with_Elevation_Weather_Traffic_Day_Week.csv',index=False)
###Output
_____no_output_____ |
benchmarking/Convex_Function_1D_Parallel_5.ipynb | ###Markdown
Example of optimizing a convex function Goal is to test the objective values found by Mango Search space size: 10,000 Number of iterations to try: 40 Random domain size: 5000 Benchmarking Parallel Evaluation
###Code
from mango.tuner import Tuner
def get_param_dict():
param_dict = {
'x': range(-5000, 5000)
}
return param_dict
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
def get_conf():
conf = dict()
conf['batch_size'] = 5
conf['initial_random'] = 5
conf['num_iteration'] = 12
conf['domain_size'] = 5000
return conf
def get_optimal_x():
param_dict = get_param_dict()
conf = get_conf()
tuner = Tuner(param_dict, objfunc,conf)
results = tuner.maximize()
return results
optimal_X = []
Results = []
num_of_tries = 100
for i in range(num_of_tries):
results = get_optimal_x()
Results.append(results)
optimal_X.append(results['best_params']['x'])
print(i,":",results['best_params']['x'])
###Output
0 : 91
1 : 0
2 : 1261
3 : 0
4 : 0
5 : 0
6 : 0
7 : 0
8 : 0
9 : 0
10 : 0
11 : 0
12 : 0
13 : 0
14 : 0
15 : 0
16 : 292
17 : 0
18 : 0
19 : 0
20 : 0
21 : 0
22 : 0
23 : 0
24 : 0
25 : 0
26 : 0
27 : 0
28 : 0
29 : -529
30 : 0
31 : 0
32 : -1567
33 : 0
34 : 0
35 : 0
36 : 0
37 : 0
38 : 0
39 : 0
40 : 0
41 : 497
42 : 0
43 : 1
44 : 0
45 : 0
46 : 0
47 : 0
48 : 0
49 : 0
50 : 0
51 : 0
52 : 0
53 : 0
54 : 0
55 : -207
56 : 0
57 : 0
58 : 0
59 : 0
60 : -249
61 : 0
62 : 0
63 : 0
64 : 0
65 : 0
66 : 1337
67 : 0
68 : 0
69 : 0
70 : -1
71 : 0
72 : 1
73 : 0
74 : -190
75 : 0
76 : 0
77 : 0
78 : 0
79 : 0
80 : 0
81 : -7
82 : 0
83 : 405
84 : -1
85 : 0
86 : 0
87 : 0
88 : 0
89 : -119
90 : 295
91 : -213
92 : 715
93 : 1
94 : 0
95 : -641
96 : -905
97 : 0
98 : 0
99 : 0
###Markdown
Plotting the Parallel run results
###Code
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(optimal_X, 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurence',fontsize=25)
plt.title('Optimal Objective',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
###Output
_____no_output_____
###Markdown
Parallel runs with different execution batch sizes, with 10 iterations each
###Code
from mango.tuner import Tuner
def get_param_dict():
param_dict = {
'x': range(-5000, 5000)
}
return param_dict
def objfunc(args_list):
results = []
for hyper_par in args_list:
x = hyper_par['x']
result = -(x**2)
results.append(result)
return results
def get_conf_1():
conf = dict()
conf['batch_size'] = 1
conf['initial_random'] = 5
conf['num_iteration'] = 10
conf['domain_size'] = 5000
return conf
def get_conf_3():
conf = dict()
conf['batch_size'] = 3
conf['initial_random'] = 5
conf['num_iteration'] = 10
conf['domain_size'] = 5000
return conf
def get_conf_5():
conf = dict()
conf['batch_size'] = 5
conf['initial_random'] = 5
conf['num_iteration'] = 10
conf['domain_size'] = 5000
return conf
def get_conf_10():
conf = dict()
conf['batch_size'] = 10
conf['initial_random'] = 5
conf['num_iteration'] = 10
conf['domain_size'] = 5000
return conf
def get_optimal_x():
param_dict = get_param_dict()
conf_1 = get_conf_1()
tuner_1 = Tuner(param_dict, objfunc,conf_1)
conf_3 = get_conf_3()
tuner_3 = Tuner(param_dict, objfunc,conf_3)
conf_5 = get_conf_5()
tuner_5 = Tuner(param_dict, objfunc,conf_5)
conf_10 = get_conf_10()
tuner_10 = Tuner(param_dict, objfunc,conf_10)
results_1 = tuner_1.maximize()
results_3 = tuner_3.maximize()
results_5 = tuner_5.maximize()
results_10 = tuner_10.maximize()
return results_1, results_3, results_5 , results_10
Store_Optimal_X = []
Store_Results = []
num_of_tries = 100
for i in range(num_of_tries):
results_1, results_3, results_5 , results_10 = get_optimal_x()
Store_Results.append([results_1, results_3, results_5 , results_10])
Store_Optimal_X.append([results_1['best_params']['x'],results_3['best_params']['x'],results_5['best_params']['x'],results_10['best_params']['x']])
print(i,":",[results_1['best_params']['x'],results_3['best_params']['x'],results_5['best_params']['x'],results_10['best_params']['x']])
import numpy as np
Store_Optimal_X=np.array(Store_Optimal_X)
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,0], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurence',fontsize=25)
plt.title('Optimal Objective: Batch 1',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,1], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurence',fontsize=25)
plt.title('Optimal Objective: Batch 3',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,2], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurence',fontsize=25)
plt.title('Optimal Objective: Batch 5',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(10,10))
n, bins, patches = plt.hist(Store_Optimal_X[:,3], 20, facecolor='g', alpha=0.75)
def autolabel(rects):
"""
Attach a text label above each bar displaying its height
"""
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x() + rect.get_width()/2., 1.0*height,
'%d' % int(height),
ha='center', va='bottom',fontsize=15)
plt.xlabel('X-Value',fontsize=25)
plt.ylabel('Number of Occurence',fontsize=25)
plt.title('Optimal Objective: Batch 10',fontsize=20)
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
plt.grid(True)
autolabel(patches)
plt.show()
###Output
_____no_output_____ |
getBERT/getBERT_book.ipynb | ###Markdown
###Code
'''
For use on local runtime.
How to run:
- Download the appropriate BERT model from:
https://github.com/google-research/bert (here BERT-Base).
- Download the code to your system path from the same repository, by running the
following command in the command window:
git clone https://github.com/google-research/bert
- Create a virtual environment (e.g. using anaconda), using Python 3.5.
- Install tensorflow in your virtual environment (pip install tensorflow==1.15).
- Start a local runtime in your virtual environment using
https://research.google.com/colaboratory/local-runtimes.html
- Make sure the BERT model is in your system path (here named
'uncased_L-12_H-768_A-12').
- Make sure all data is available and update the paths at the end of this code.
Adapted from Trusca, Wassenberg, Frasincar and Dekker (2020) for use on a local
runtime
Truşcǎ M.M., Wassenberg D., Frasincar F., Dekker R. (2020) A Hybrid Approach for
Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and
Hierarchical Attention. In: Bielikova M., Mikkonen T., Pautasso C. (eds) Web
Engineering. ICWE 2020. Lecture Notes in Computer Science, vol 12128. Springer,
Cham. https://doi.org/10.1007/978-3-030-50578-3_25
https://github.com/mtrusca/HAABSA_PLUS_PLUS
'''
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
sys.path.append('bert/')
import codecs
import collections
import json
import re
import os
import pprint
import numpy as np
import tensorflow as tf
import modeling
import tokenization
BERT_PRETRAINED_DIR = 'uncased_L-12_H-768_A-12'
LAYERS = [-1, -2, -3, -4]
NUM_TPU_CORES = 8
MAX_SEQ_LENGTH = 87
BERT_CONFIG = BERT_PRETRAINED_DIR + '/bert_config.json'
CHKPT_DIR = BERT_PRETRAINED_DIR + '/bert_model.ckpt'
VOCAB_FILE = BERT_PRETRAINED_DIR + '/vocab.txt'
INIT_CHECKPOINT = BERT_PRETRAINED_DIR + '/bert_model.ckpt'
BATCH_SIZE = 128
class InputExample(object):
def __init__(self, unique_id, text_a, text_b=None):
self.unique_id = unique_id
self.text_a = text_a
self.text_b = text_b
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
self.unique_id = unique_id
self.tokens = tokens
self.input_ids = input_ids
self.input_mask = input_mask
self.input_type_ids = input_type_ids
def input_fn_builder(features, seq_length):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_unique_ids = []
all_input_ids = []
all_input_mask = []
all_input_type_ids = []
for feature in features:
all_unique_ids.append(feature.unique_id)
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_input_type_ids.append(feature.input_type_ids)
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
num_examples = len(features)
# This is for demo purposes and does NOT scale to large data sets. We do
# not use Dataset.from_generator() because that uses tf.py_func which is
# not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({
"unique_ids":
tf.constant(all_unique_ids, shape=[num_examples], dtype=tf.int32),
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"input_type_ids":
tf.constant(
all_input_type_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
})
d = d.batch(batch_size=batch_size, drop_remainder=False)
return d
return input_fn
def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
input_type_ids = features["input_type_ids"]
model = modeling.BertModel(
config=bert_config,
is_training=False,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=input_type_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
if mode != tf.estimator.ModeKeys.PREDICT:
raise ValueError("Only PREDICT modes are supported: %s" % (mode))
tvars = tf.trainable_variables()
scaffold_fn = None
(assignment_map,
initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
all_layers = model.get_all_encoder_layers()
predictions = {
"unique_id": unique_ids,
}
for (i, layer_index) in enumerate(layer_indexes):
predictions["layer_output_%d" % i] = all_layers[layer_index]
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
return output_spec
return model_fn
def convert_examples_to_features(examples, seq_length, tokenizer):
"""Loads a data file into a list of `InputBatch`s."""
features = []
for (ex_index, example) in enumerate(examples):
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
_truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
if len(tokens_a) > seq_length - 2:
tokens_a = tokens_a[0:(seq_length - 2)]
tokens = []
input_type_ids = []
tokens.append("[CLS]")
input_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
input_type_ids.append(0)
tokens.append("[SEP]")
input_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
input_type_ids.append(1)
tokens.append("[SEP]")
input_type_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
while len(input_ids) < seq_length:
input_ids.append(0)
input_mask.append(0)
input_type_ids.append(0)
assert len(input_ids) == seq_length
assert len(input_mask) == seq_length
assert len(input_type_ids) == seq_length
if ex_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("unique_id: %s" % (example.unique_id))
tf.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.logging.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
tf.logging.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
tf.logging.info(
"input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
features.append(
InputFeatures(
unique_id=example.unique_id,
tokens=tokens,
input_ids=input_ids,
input_mask=input_mask,
input_type_ids=input_type_ids))
return features
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
"""Truncates a sequence pair in place to the maximum length."""
# This is a simple heuristic which will always truncate the longer sequence
# one token at a time. This makes more sense than truncating an equal percent
# of tokens from each, since if one sequence is very short then each token
# that's truncated likely contains more information than a longer sequence.
while True:
total_length = len(tokens_a) + len(tokens_b)
if total_length <= max_length:
break
if len(tokens_a) > len(tokens_b):
tokens_a.pop()
else:
tokens_b.pop()
def read_sequence(input_sentences):
examples = []
unique_id = 0
for sentence in input_sentences:
line = tokenization.convert_to_unicode(sentence)
examples.append(InputExample(unique_id=unique_id, text_a=line))
unique_id += 1
return examples
def get_features(input_text, dim=768):
tf.logging.set_verbosity(tf.logging.ERROR)
layer_indexes = LAYERS
bert_config = modeling.BertConfig.from_json_file(BERT_CONFIG)
tokenizer = tokenization.FullTokenizer(
vocab_file=VOCAB_FILE, do_lower_case=True)
examples = read_sequence(input_text)
features = convert_examples_to_features(
examples=examples, seq_length=MAX_SEQ_LENGTH, tokenizer=tokenizer)
unique_id_to_feature = {}
for feature in features:
unique_id_to_feature[feature.unique_id] = feature
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=INIT_CHECKPOINT,
layer_indexes=layer_indexes,
use_tpu=False,
use_one_hot_embeddings=True)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=False,
model_fn=model_fn,
config=tf.contrib.tpu.RunConfig(),
predict_batch_size=BATCH_SIZE,
train_batch_size=BATCH_SIZE)
input_fn = input_fn_builder(
features=features, seq_length=MAX_SEQ_LENGTH)
# Get features
for result in estimator.predict(input_fn, yield_single_examples=True):
unique_id = int(result["unique_id"])
feature = unique_id_to_feature[unique_id]
output = collections.OrderedDict()
for (i, token) in enumerate(feature.tokens):
layers = []
for (j, layer_index) in enumerate(layer_indexes):
layer_output = result["layer_output_%d" % j]
layer_output_flat = np.array([x for x in layer_output[i:(i + 1)].flat])
layers.append(layer_output_flat)
output[token] = sum(layers)[:dim]
return output
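# Example usage (sentence is illustrative only): get_features returns an
# OrderedDict mapping each WordPiece token to a `dim`-sized numpy vector.
# emb = get_features(["the battery life is great"])
# list(emb.keys())[:3], emb["[CLS]"].shape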
# When it takes too long, data can be split in multiple subfiles such as in
# lines 5-30
lines = open('dataBERT/raw_data_book_2019.txt', errors='replace').readlines()
'''
for j in range(0, len(lines), 150):
with open("dataBERT/BERT_base_laptop_2014_" + str(round(j/150)) + ".txt", 'w') as f:
for i in range(j, j + 150, 3): # Was 0*3, 2530*3, 3
print("sentence: " + str(i / 3) + " out of " + str(len(lines) / 3) + " in " + "raw_data;")
target = lines[i + 1].lower().split()
words = lines[i].lower().split()
words_l, words_r = [], []
flag = True
for word in words:
if word == '$t$':
flag = False
continue
if flag:
words_l.append(word)
else:
words_r.append(word)
sentence = " ".join(words_l + target + words_r)
print(sentence)
embeddings = get_features([sentence])
for key, value in embeddings.items():
f.write('\n%s ' % key)
for v in value:
f.write('%s ' % v)
'''
with open("dataBERT/BERT_base_book_2019.txt", 'w') as f:
for i in range(0, len(lines), 3): # Was 0*3, 2530*3, 3
print("sentence: " + str(i / 3) + " out of " + str(len(lines) / 3) + " in " + "raw_data;")
target = lines[i + 1].lower().split()
words = lines[i].lower().split()
words_l, words_r = [], []
flag = True
for word in words:
if word == '$t$':
flag = False
continue
if flag:
words_l.append(word)
else:
words_r.append(word)
sentence = " ".join(words_l + target + words_r)
print(sentence)
embeddings = get_features([sentence])
for key, value in embeddings.items():
f.write('\n%s ' % key)
for v in value:
f.write('%s ' % v)
###Output
_____no_output_____ |
hacks/IPython Parallel and R.ipynb | ###Markdown
IPy Parallel and R In this notebook, we'll use IPython.parallel (IPP) and rpy2 as a quick-and-dirty way of parallelizing work in R. We'll use a cluster of IPP engines running on the same VM as the notebook server to demonstrate. We'll also need to install [rpy2](http://rpy.sourceforge.net/) before we can start.`!pip install rpy2` Start Local IPP Engines First we must start a cluster of IPP engines. We can do this using the *Cluster* tab of the Jupyter dashboard. Or we can do it programmatically in the notebook.
###Code
from IPython.html.services.clusters.clustermanager import ClusterManager
cm = ClusterManager()
###Output
_____no_output_____
###Markdown
We have to list the profiles before we can start anything, even if we know the profile name.
###Code
cm.list_profiles()
###Output
_____no_output_____
###Markdown
For our demo purposes, we'll just use the default profile which starts a cluster on the local machine for us.
###Code
cm.start_cluster('default')
###Output
_____no_output_____
###Markdown
After running the command above, we need to pause for a few moments to let all the workers come up. (Breathe and count 10 ... 9 ... 8 ...) Now we can continue to create a DirectView that can talk to all of the workers. (If you get an error, breathe, count some more, and try again in a few.)
###Code
import IPython.parallel
client = IPython.parallel.Client()
dv = client[:]
###Output
_____no_output_____
###Markdown
In my case, I have 8 CPUs so I get 8 workers by default. Your number will likely differ.
###Code
len(dv)
###Output
_____no_output_____
###Markdown
To ensure the workers are functioning, we can ask each one to run the bash command `echo $$` to print a PID.
###Code
%%px
!echo $$
###Output
[stdout:0] 12973
[stdout:1] 12974
[stdout:2] 12978
[stdout:3] 12980
[stdout:4] 12977
[stdout:5] 12975
[stdout:6] 12976
[stdout:7] 12979
###Markdown
Use R on the Engines Next, we'll tell each engine to load the `rpy2.ipython` extension. In our local cluster, this step is easy because all of the workers are running in the same environment as the notebook server. If the engines were remote, we'd have many more installs to do.
###Code
%%px
%load_ext rpy2.ipython
###Output
_____no_output_____
###Markdown
Now we can tell every engine to run R code using the `%%R` (or `%R`) magic. Let's sample 50 random numbers from a normal distribution.
###Code
%%px
%%R
x <- rnorm(50)
summary(x)
###Output
_____no_output_____
###Markdown
Pull it Back to Python With our hack, we can't simply pull the R vectors back to the local notebook. (IPP can't pickle them.) But we can convert them to Python and pull the resulting objects back.
###Code
%%px
%Rpull x
x = list(x)
x = dv.gather('x', block=True)
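# Sanity check (sketch): gather concatenates the per-engine lists in engine
# order, so with 8 engines of 50 samples each we expect 400 values.
# print(len(x), len(x) // len(dv))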
###Output
_____no_output_____
###Markdown
We should get 50 elements per engine.
###Code
assert len(x) == 50 * len(dv)
###Output
_____no_output_____
###Markdown
Clean Up the Engines When we're done, we can clean up any engines started using the code at the top of this notebook with the following call.
###Code
cm.stop_cluster('default')
###Output
_____no_output_____ |
MachineLearning_9/08_xgboost_lightgbm/rossmann-store-sales/.ipynb_checkpoints/Rossmann_Store_Sales_competition_mine-checkpoint.ipynb | ###Markdown
Import the required libraries
###Code
import pandas as pd
import datetime
import numpy as np
import scipy as sp
import csv
import os
import xgboost as xgb
import itertools
import operator
import warnings
warnings.filterwarnings("ignore")
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.base import TransformerMixin
from sklearn.model_selection import cross_validate
from matplotlib import pylab as plt
plot = True
goal = 'Sales'
myid = 'Id'
###Output
_____no_output_____
###Markdown
Define some transformations and evaluation metrics
###Code
def ToWeight(y):
w = np.zeros(y.shape,dtype=float)
ind = y !=0
w[ind] = 1./(y[ind]**2)
return w
def rmspe(yhat, y):
    w = ToWeight(y)
    rmspe = np.sqrt(np.mean(w * (y - yhat)**2))
    return rmspe
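# Quick sanity check with illustrative values (requires rmspe to return the
# computed value): yhat=[110, 90] vs y=[100, 100] gives RMSPE ≈ 0.1.
# rmspe(np.array([110., 90.]), np.array([100., 100.]))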
def rmspe_xg(yhat,y):
# y = y.values
y = y.get_label()
y = np.exp(y) - 1
yhat = np.exp(yhat) - 1
w = ToWeight(y)
rmspe = np.sqrt(np.mean(w * (y - yhat)**2))
return "rmspe",rmspe
store = pd.read_csv('store.csv')
store.head()
train_df = pd.read_csv('train.csv')
train_df.head()
test_df = pd.read_csv('test.csv')
test_df.head()
###Output
_____no_output_____
###Markdown
Load the data
###Code
def load_data():
"""
    Load the data and mark numeric vs. non-numeric features
"""
store = pd.read_csv('store.csv')
train_org = pd.read_csv('train.csv',dtype={'StateHoliday':pd.np.string_})
test_org = pd.read_csv('test.csv',dtype={'StateHoliday':pd.np.string_})
train = pd.merge(train_org,store,on='Store',how='left')
test = pd.merge(test_org,store,on='Store',how='left')
features = test.columns.tolist()
numerics = ['int16','int32','int64','float16','float32','float64']
    features_numeric = test.select_dtypes(include=numerics).columns.tolist()
features_non_numeric = [f for f in features if f not in features_numeric]
return(train,test,features,features_non_numeric)
###Output
_____no_output_____
###Markdown
Data and feature processing
###Code
def process_data(train,test,features,features_non_numeric):
"""
Feature engineering and selection
"""
## Feature engineering
train = train[train['Sales'] > 0]
for data in [train,test]:
# year month day
data['year'] = data.Date.apply(lambda x: x.split('-')[0])
data['year'] = data['year'].astype(float)
data['month'] = data.Date.apply(lambda x: x.split('-')[1])
        data['month'] = data['month'].astype(float)
data['day'] = data.Date.apply(lambda x: x.split('-')[2])
        data['day'] = data['day'].astype(float)
# promo interval "Jan,APr,Jul,Oct"
data['promojan'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x, float) else 1 if "Jan" in x else 0)
        # NaN PromoInterval entries are floats, hence the isinstance check (avoids a TypeError)
data['promofed'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Feb" in x else 0)
data['promomar'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Mar" in x else 0)
data['promomapr'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Apr" in x else 0)
data['promomay'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "May" in x else 0)
data['promomjun'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Jun" in x else 0)
data['promojul'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Jul" in x else 0)
        data['promoaug'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Aug" in x else 0)
data['promosep'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Sep" in x else 0)
data['promooct'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Oct" in x else 0)
data['promonov'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Nov" in x else 0)
        data['promodec'] = data.PromoInterval.apply(lambda x: 0 if isinstance(x,float) else 1 if "Dec" in x else 0)
# Features set
noisy_features = [myid,'Date']
features = [c for c in features if c not in noisy_features]
features_non_numeric = [c for c in features_non_numeric if c not in noisy_features]
features.extend(['year','month','day'])
# Fill NA
    class DataFrameImputer(TransformerMixin):
        def __init__(self):
            pass  # the checkpoint ends here; the imputation logic is not yet written
###Output
_____no_output_____ |
Capitulo_1/.ipynb_checkpoints/Capitulo1-checkpoint.ipynb | ###Markdown
1.1 Reference frames and coordinate systems
###Code
# Definimos marcos de referencia con sympy.physics.mechanics
a=ReferenceFrame('A')
b=ReferenceFrame('B')
c=ReferenceFrame('C')
# Cada marco de referencia automaticamente define un sistema de coordenadas.
# Observe que la notación en sympy es x,y,z en vez de 1,2,3.
# Podemos verificar la definición de los vectores unitarios con el producto cruz (pag 2.)
c.z.cross(c.x)
###Output
_____no_output_____
###Markdown
1.2 Motion variables
###Code
#Para el ejemplo de la atracción de parque (Figura 3) en vez de definir los marcos de referencia
# por separado podemos utilizar cada marco de referencia para definir el siguiente y vincularlos
# a través de una rotación relativa, definida por las variables de movimiento (q1,q2,q3).
# Definimos primero el marco A (base de la atracción)
a=ReferenceFrame('A')
# Ahora los symbolos para las variables de movimiento
q1,q2=symbols('q1,q2')
# Definimos b rotando q1 respecto a a.x
b=a.orientnew('B','Axis',(q1,a.x))
# Definimos c rotando q2 respecto a b.z
c=b.orientnew('C','Axis',(q2,b.z))
# De esta manera quedan definidas las coordenadas como propone el ejemplo. Esto lo podemos verificar
# facilmente:
print(a.x==b.x) #True
print(c.z==c.z) #True
print(a.x==b.y) #False
###Output
True
True
False
###Markdown
1.3 Derivatives of vectors
###Code
# Implemente el robot SCARA de la Figura 4:
# Defina los símbolos para las variables de movimiento
q1,q2,q3=symbols('q1,q2,q3')
# Defina los marcos de referencia para cada parte A B C A
a=ReferenceFrame('A')
# Defina b rotando q1 respecto a a.z
b=a.orientnew('B','Axis',(q1,a.z))
# Defina c rotando q2 respecto a b.z
c=b.orientnew('C','Axis',(q2,b.z))
# No necesita definir D, ya que tiene la misma orientación de C
# Luega defina cada punto usando los sistemas de coordenadas
# Primero el Origen O
O=Point('O')
# El Punto P en la base del hombro esta a una distancia l1 en el eje a.z
l1,l2,l3=symbols('l1,l2,l3')
P=O.locatenew('P',l1*a.z)
Q=P.locatenew('Q',l2*b.x)
R=Q.locatenew('R',l3*c.x)
S=R.locatenew('S',q3*c.z)
#Ahora encuentre la posición del punto O al punto S
v=S.pos_from(O)
v
# Y la posición del punto Q al punto S
w=S.pos_from(Q)
w
# Si se calcula la derivada en el marco de referencia B
w.diff(q1,b)
# Si se calcula la derivada en el marco de referencia A
w.diff(q1,a)
###Output
_____no_output_____
###Markdown
1.4 Partial derivatives
###Code
# Ejemplo modelo simplificado de una pierna
# Defina los símbolos para las variables de movimiento
q1,q2,q3,q4=symbols('q1,q2,q3,q4')
# Se definen marcos de referencia para cada parte A B C D
a=ReferenceFrame('A')
# Aqui un comentario para aclarar esta seccion:
# Segun el ejemplo del libro q1 es rotacion en a1(ax) y luego q2 es rotacion en y.
# La figura 5 puede no ser muy clara por si sola hasta que no se revisa el eje intermedio e (figura 6) y se entiende
# que las rotaciones q1 y q2 son flexión/extensión (q1) y abducción/adducción (q2) de la cadera.
# Teniendo en cuenta esto, se define también el marco de referencia intermedio "E" para mayor claridad.
# Aunque se podría realizar directamente la definición del marco b asi:
# b=a.orientnew('B','Body',(q1,q2,0),'XYZ')
# Defina e rotando q1 en a.x
e=a.orientnew('E','Axis',(q1,a.x))
# Defina b rotando q2 en e.y
b=e.orientnew('B','Axis',(q2,e.y))
# Defina c rotando q3 en b.z
c=b.orientnew('C','Axis',(q3,b.x))
# Defina D rotando q4 en c.x
d=c.orientnew('D','Axis',(q4,-c.x))
# Luega defina cada punto usando los sistemas de coordenadas
# Primero el Origen O en la pelvis
O=Point('O')
l1,l2,l3,l4,l5,l6,l7,l8=symbols('l1,l2,l3,l4,l5,l6,l7,l8') # variables de distancia
origen_b=O.locatenew('P',-l1*a.x+l2*a.y-l3*b.x-l4*b.y)
rodilla=origen_b.locatenew('Q',-l5*b.z)
origen_c=rodilla.locatenew('R',-l6*c.z)
origen_d=origen_c.locatenew('S',-l7*d.z-l8*d.y)
# Ahora calcule los vectores
u=origen_d.pos_from(rodilla) # rodilla hasta punta del pie
v=origen_c.pos_from(origen_b) # cadera hasta tobillo
w=rodilla.pos_from(O) # pelvis hasta rodilla
# Tabla 1. Cambio en los vectores en los marcos de referencia
tables=dict()
for frame in [a,b,c,d]:
tbl=np.zeros((4,3),dtype=bool)
for j,vec in enumerate([u,v,w]):
for i,coord in enumerate([q1,q2,q3,q4]):
tbl[i,j]=not(vec.diff(coord,frame)==0)
tables[frame.name]=tbl
# Confirmando los resultados de la tabla 1.
tables
# Puede utilizar el metodo express para expresar cualquier vector en el
# marco de referencia deseado y verificar si contiene terminos qi.
w.express(b)
# Tambien puede utilizar el metodo express para establecer las relaciones
# entre vectores unitarios de la figura 6.
d.y.express(c)
# Y por supuesto calcular derivadas parciales de estos vectores
d.y.diff(q4,c).simplify()
# Puede verificar todas las derivadas parciales del ejemplo
b.x.diff(q2,a).simplify()
#b.y.diff(q2,a).simplify()
#b.z.diff(q2,a).simplify()
#a.x.diff(q1,b).simplify()
#a.y.diff(q1,b).simplify()
#a.z.diff(q1,b).simplify()
###Output
_____no_output_____
###Markdown
1.5 Total derivative
###Code
# Se utiliza el modelo simplificado de una pierna con una simplificación
# adicional: espesor 0.
# Se definen las variables de movimiento.
# Como se quiere encontrar la derivada total (derivada respecto al tiempo), en este caso
# se utilizan simbolos dinamicos para las variables de movimiento. (q1(t), q2(t), etc.)
q1,q2,q3,q4=dynamicsymbols('q1,q2,q3,q4')
# Los marcos se mantienen igual, pero podemos simplificar la definción y eliminar
# el marco intermedio E, al usar b orientado con rotaciones sucesivas ('Body').
# Se definen marcos de referencia para cada parte A B C D
a=ReferenceFrame('A')
# Defina b rotando sucesivamente en x (ax) y luego en y (ey).
b=a.orientnew('B','Body',(q1,q2,0),'XYZ')
# Defina c rotando q3 en b.z
c=b.orientnew('C','Axis',(q3,b.x))
# Defina D rotando q4 en c.x
d=c.orientnew('D','Axis',(q4,-c.x))
# Luega defina cada punto usando los sistemas de coordenadas
# Primero el Origen O en la pelvis
O=Point('O')
la,lb,lc,ld=symbols('la,lb,lc,ld') # variables de distancia
origen_b=O.locatenew('P',-la*a.x)
rodilla=origen_b.locatenew('Q',-lb*b.z)
origen_c=rodilla.locatenew('R',-lc*c.z)
origen_d=origen_c.locatenew('S',-ld*d.y)
# Ahora calcule los vectores
u=origen_d.pos_from(rodilla) # rodilla hasta punta del pie
v=origen_c.pos_from(origen_b) # cadera hasta tobillo
w=rodilla.pos_from(O) # pelvis hasta rodilla
# Puede verificar los vectores
u # rodilla hasta punta del pie
# v # cadera hasta tobillo
# w # pelvis hasta rodilla
# Puede calcular las derivadas totales
u.dt(d) # derivada du/dt en el marco D
#v.dt(d).express(c) # derivada dv/dt en el marco D expresadas en coordenadas c
#w.dt(a) # derivada dw/dt en el marco A
###Output
_____no_output_____
###Markdown
1.6 Rotation matrices
###Code
# Defina un marco de referencia A
a=ReferenceFrame('A')
# Defina el símbolo theta para el angulo de rotación
theta=symbols('theta')
# Defina un marco de referencia B aplicando una rotación en ax.
b=a.orientnew('B','Axis',(theta,a.x))
# Puede expresar vectores definidos con componentes en b. Por ejemplo:
b1,b2,b3=symbols('b1,b2,b3')
vec1=b1*b.x+b2*b.y+b3*b.z # Vector vec1 definido por componentes b1 en bx, b2 en by, b3, en bz.
# Observe que pasa cuando expresa el vector en el sistema de coordenadas de A
vec1.express(a)
#vec1.express(b)
###Output
_____no_output_____
###Markdown
1.7 Direction cosines
###Code
# Defina un marco de referencia A
a=ReferenceFrame('A')
# Defina el símbolo theta para el ángulo de rotación
theta=symbols('theta')
# Ahora defina un marco de referencia B con con diferentes rotaciones.
b=a.orientnew('B','Axis',(theta,a.x))
print('en x:')
pprint(a.dcm(b))
b=a.orientnew('B','Axis',(theta,a.y))
print('en y:')
pprint(a.dcm(b))
b=a.orientnew('B','Axis',(theta,a.z))
print('en z:')
pprint(a.dcm(b))
###Output
en x:
⎡1 0 0 ⎤
⎢ ⎥
⎢0 cos(θ) -sin(θ)⎥
⎢ ⎥
⎣0 sin(θ) cos(θ) ⎦
en y:
⎡cos(θ) 0 sin(θ)⎤
⎢ ⎥
⎢ 0 1 0 ⎥
⎢ ⎥
⎣-sin(θ) 0 cos(θ)⎦
en z:
⎡cos(θ) -sin(θ) 0⎤
⎢ ⎥
⎢sin(θ) cos(θ) 0⎥
⎢ ⎥
⎣ 0 0 1⎦
###Markdown
1.8 Dyadics of vectors
###Code
# Puede construir diádicas utilizando el método outer (producto diádico)
from sympy.physics.mechanics import outer,dot
A=outer(a.x,a.x)+outer(a.y,a.y)+outer(a.z,a.z)
A # Diádica A
#Calule el producto A.b
A.dot(b1*b.x+b2*b.y+b3*b.z)
###Output
_____no_output_____ |
Payslip.ipynb | ###Markdown
We need a program to print out a payslip for sales people. Consider 'Ram', who has a salary of \$25000. They have sold goods worth \$20000 and earn 2% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
salary = 25000
sales = 20000
commission = 0.02 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Ram')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
Payslip of Ram
Salary 25000 Commission 400.0 Tax 2540.0
Total pay 22860.0
###Markdown
Consider 'Radha', who has a salary of \$30000. They have sold goods worth \$40000 and earn 2.5% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
salary = 30000
sales = 40000
commission = 0.025 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Radha')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
Payslip of Radha
Salary 30000 Commission 1000.0 Tax 3100.0
Total pay 27900.0
###Markdown
What did we change from Ram to Radha?
###Code
# We changed the values of salary, sales, commission, and the name.
###Output
_____no_output_____
###Markdown
Turn what we changed into inputs (parameters) to a function
###Code
def pay_slip (name,salary,sales,rate):
commission = rate * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of', name)
print('Salary:', salary,',Commission:',commission,',Tax:', tax)
print('Total pay:',pay)
pay_slip('Parshav',20000,50000,0.30)
pay_slip('Brooks',50000,60000,0.26)
###Output
_____no_output_____
###Markdown
We need a program to print out a payslip for sales people. Consider 'Ram', who has a salary of \$25000. They have sold goods worth \$20000 and earn 2% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
salary = 30000 #25000
sales = 40000 #20000
commission = 0.02 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Ram') #ram
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
Payslip of Ram
Salary 30000 Commission 800.0 Tax 3080.0
Total pay 27720.0
###Markdown
Consider 'Radha', who has a salary of \$30000. They have sold goods worth \$40000 and earn 2.5% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
def printPaySlip(salary,sales, rate,name):
#salary = 30000 #25000
#sales = 40000 #20000
commission = rate *sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of ' + name) #ram
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
_____no_output_____
###Markdown
What did we change from Ram to Radha?
###Code
printPaySlip(30000, 40000, 0.025, 'Radha')
###Output
Payslip of Radha
Salary 30000 Commission 1000.0 Tax 3100.0
Total pay 27900.0
###Markdown
Turn what we changed into inputs (parameters) to a function
###Code
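# One possible call, reusing printPaySlip defined above (values illustrative):
# printPaySlip(25000, 20000, 0.02, 'Ram')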
###Output
_____no_output_____
###Markdown
We need a program to print out a payslip for sales people. Consider 'Ram', who has a salary of \$25000. They have sold goods worth \$20000 and earn 2% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
salary = 25000
sales = 20000
commission = 0.02 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Ram')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
Payslip of Ram
Salary 25000 Commission 400.0 Tax 2540.0
Total pay 22860.0
###Markdown
Consider 'Radha', who has a salary of \$30000. They have sold goods worth \$40000 and earn 2.5% commission on the sales. They fall into the 10% tax bracket, so before making payments we need to deduct 10% from their total payout.
###Code
salary = 30000
sales = 40000
commission = 0.025 * sales
tax = (salary + commission) * 0.10
pay = salary + commission - tax
print('Payslip of Radha')
print('Salary', salary,'Commission',commission,'Tax', tax)
print('Total pay',pay)
###Output
Payslip of Radha
Salary 30000 Commission 1000.0 Tax 3100.0
Total pay 27900.0
###Markdown
What did we change from Ram to Radha?
###Code
###Output
_____no_output_____
###Markdown
Turn what we changed into inputs (parameters) to a function
###Code
def payslip (name, salary, sales, rate, tax_rate):
commission = rate * sales
tax = (salary + commission) * tax_rate
pay = salary + commission - tax
print('Payslip of {}'.format(name))
print('Salary:', salary,'Commission:',commission,'Tax:', tax)
print('Total pay:',pay)
payslip('Radha', 30000, 40000, 0.025, 0.1)
###Output
Payslip of Radha
Salary: 30000 Commission: 1000.0 Tax: 3100.0
Total pay: 27900.0
|
15-ParseProductionBrowser.ipynb | ###Markdown
User Agent representation User Agent as tuple From Udger `UserAgent = {ua_family_code, ua_version, ua_class_code, device_class_code, os_family_code, os_code}` Load data (if needed)
###Code
main_data = np.load('df/main_prod_data.npy').tolist()
values_data = np.load('df/values_prod_data.npy').tolist()
order_data = np.load('df/order_prod_data.npy').tolist()
main_df = pd.DataFrame(main_data)
main_df
list_device_class_code = pd.DataFrame(main_data).device_class_code.value_counts().index.tolist()
list_os_family_code = pd.DataFrame(main_data).os_family_code.value_counts().index.tolist()
list_os_code = pd.DataFrame(main_data).os_code.value_counts().index.tolist()
list_ua_class_code = pd.DataFrame(main_data).ua_class_code.value_counts().index.tolist()
list_ua_family_code = pd.DataFrame(main_data).ua_family_code.value_counts().index.tolist()
list_ua_version = pd.DataFrame(main_data).ua_version.value_counts().index.tolist()
print("Device count: {}".format(len(list_device_class_code)))
print("Device platform family count: {}".format(len(list_os_family_code)))
print("Device platform count: {}".format(len(list_os_code)))
print("Device browser class count: {}".format(len(list_ua_class_code)))
print("Device browser family count: {}".format(len(list_ua_family_code)))
print("Device browser version count: {}".format(len(list_ua_version)))
###Output
Device count: 5
Device platform family count: 29
Device platform count: 98
Device browser class count: 5
Device browser family count: 129
Device browser version count: 2585
###Markdown
Train Part
###Code
important_orders_keys_set = {
'Upgrade-Insecure-Requests',
'Accept',
'If-Modified-Since',
'Host',
'Connection',
'User-Agent',
'From',
'Accept-Encoding'
}
important_values_keys_set = {
'Accept',
'Accept-Charset',
'Accept-Encoding'
}
orders_vectorizer = sklearn.feature_extraction.DictVectorizer(sparse=True, dtype=float)
values_vectorizer = sklearn.feature_extraction.DictVectorizer(sparse=True, dtype=float)
from lib.parsers.logParser import LogParser
l_parser = LogParser(log_folder='Logs/')
#from sklearn import preprocessing
#y = pd.DataFrame(main_data).User_Agent.fillna('NaN')
#print("UA count: {}".format(len(list_ua)))
l_parser.reassign_orders_values(order_data, values_data)
full_sparce_dummy = l_parser.prepare_data(orders_vectorizer, values_vectorizer, important_orders_keys_set, important_values_keys_set, fit_dict=True)
import os
from sklearn.externals import joblib
filename_order = 'cls/prod_orders_vectorizer.joblib.pkl'
_ = joblib.dump(orders_vectorizer, filename_order, compress=9)
filename_values = 'cls/prod_values_vectorizer.joblib.pkl'
_ = joblib.dump(values_vectorizer, filename_values, compress=9)
from lib.helpers.fileSplitter import split_file
files_count = split_file(filename_order, 'parted-cls/prod_orders_vectorizer.joblib.pkl')
files_count = split_file(filename_values, 'parted-cls/prod_values_vectorizer.joblib.pkl')
###Output
_____no_output_____
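###Markdown
The vectorizers are fitted here on the training order/value dictionaries (fit_dict=True), while the test part below only transforms with the already-fitted vectorizers (fit_dict=False). prepare_data's internals are not shown in this notebook, so the following is only a sketch of the underlying sklearn fit/transform behaviour under that assumption.
###Code
# Hedged sketch, not wired into the pipeline above: a DictVectorizer fitted on
# training dictionaries and then applied to new dictionaries; keys unseen
# during fitting are silently dropped at transform time.
from sklearn.feature_extraction import DictVectorizer

dv = DictVectorizer(sparse=True, dtype=float)
X_fit_demo = dv.fit_transform([{'Accept': 1.0, 'Host': 2.0}, {'Accept': 1.0}])
X_new_demo = dv.transform([{'Accept': 1.0, 'X-Unknown': 5.0}])
print(dv.feature_names_, X_fit_demo.shape, X_new_demo.shape)
###Output
_____no_output_____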
###Markdown
Warning: sometimes, if the dataset has over 150K rows and n_jobs=-1, we get `OSError: [Errno 28] No space left on device` in `sklearn/externals/joblib/pool.py`. See https://github.com/scikit-learn/scikit-learn/issues/3313 and https://stackoverflow.com/questions/24406937/scikit-learn-joblib-bug-multiprocessing-pool-self-value-out-of-range-for-i-fo. A possible fix is described in https://stackoverflow.com/questions/40115043/no-space-left-on-device-error-while-fitting-sklearn-model: `It seems that you are running out of shared memory (/dev/shm when you run df -h). Try setting the JOBLIB_TEMP_FOLDER environment variable to something different: e.g., to /tmp. In my case it has solved the problem.`
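A minimal sketch of that workaround, assuming /tmp is a writable location; the variable has to be set before the parallel fit starts.
###Code
# Hedged sketch of the workaround quoted above: point joblib's temporary
# folder away from /dev/shm. '/tmp' is an assumed writable path.
import os
os.environ['JOBLIB_TEMP_FOLDER'] = '/tmp'
###Output
_____no_output_____
###Markdown
OS_family_code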
###Code
%%time
from sklearn.linear_model import LogisticRegression
clf_os_family_code = LogisticRegression(random_state=42, C=100)
clf_os_family_code.fit(full_sparce_dummy, main_df.os_family_code.fillna('NaN'))
import os
from sklearn.externals import joblib
filename = 'cls/prod_os_family_code_logreg_cls.joblib.pkl'
_ = joblib.dump(clf_os_family_code, filename, compress=9)
print("Model saved with size(Bytes): {}".format(os.stat(filename).st_size))
files_count = split_file(filename, 'parted-cls/prod_os_family_code_logreg_cls.joblib.pkl')
print('Splitted in {} files'.format(files_count))
###Output
Model saved with size(Bytes): 59470
Splitted in 0 files
###Markdown
OS_code
###Code
%%time
clf_os_code = LogisticRegression(random_state=42, C=100)
clf_os_code.fit(full_sparce_dummy, main_df.os_code.fillna('NaN'))
filename = 'cls/prod_os_code_logreg_cls.joblib.pkl'
_ = joblib.dump(clf_os_code, filename, compress=9)
print("Model saved with size(Bytes): {}".format(os.stat(filename).st_size))
files_count = split_file(filename, 'parted-cls/prod_os_code_logreg_cls.joblib.pkl')
print('Splitted in {} files'.format(files_count))
###Output
Model saved with size(Bytes): 203087
Splitted in 0 files
###Markdown
Browser family_code
###Code
%%time
clf_ua_family_code = LogisticRegression(random_state=42, C=100)
clf_ua_family_code.fit(full_sparce_dummy, main_df.ua_family_code.fillna('NaN'))
filename = 'cls/prod_ua_family_code_logreg_cls.joblib.pkl'
_ = joblib.dump(clf_ua_family_code, filename, compress=9)
print("Model saved with size(Bytes): {}".format(os.stat(filename).st_size))
files_count = split_file(filename, 'parted-cls/prod_ua_family_code_logreg_cls.joblib.pkl')
print('Splitted in {} files'.format(files_count))
###Output
Model saved with size(Bytes): 257819
Splitted in 0 files
###Markdown
Browser version
###Code
%%time
clf_ua_version = LogisticRegression(random_state=42, C=100)
clf_ua_version.fit(full_sparce_dummy, main_df.ua_version.fillna('NaN'))
filename = 'cls/prod_ua_version_logreg_cls.joblib.pkl'
_ = joblib.dump(clf_ua_version, filename, compress=9)
print("Model saved with size(Bytes): {}".format(os.stat(filename).st_size))
files_count = split_file(filename, 'parted-cls/prod_ua_version_logreg_cls.joblib.pkl')
print('Splitted in {} files'.format(files_count))
###Output
Model saved with size(Bytes): 4852875
Splitted in 0 files
###Markdown
Test part
###Code
import pandas as pd
import numpy as np
import scipy.sparse
import sklearn.feature_extraction
import matplotlib.pylab as plt
%matplotlib inline
from tqdm import tqdm
import platform
pd.set_option("display.max_rows", 10)
pd.set_option('display.max_columns', 1100)
import os
import warnings  # needed for the filterwarnings call below
%pylab inline
warnings.filterwarnings('ignore')
important_orders_keys_set = {
'Upgrade-Insecure-Requests',
'Accept',
'If-Modified-Since',
'Host',
'Connection',
'User-Agent',
'From',
'Accept-Encoding'
}
important_values_keys_set = {
'Accept',
'Accept-Charset',
'Accept-Encoding'
}
import os
from sklearn.externals import joblib
from lib.helpers.fileSplitter import cat_files
orders_vectorizer = joblib.load('cls/prod_orders_vectorizer.joblib.pkl')
values_vectorizer = joblib.load("cls/prod_values_vectorizer.joblib.pkl")
clf_os_family_code = joblib.load('cls/prod_os_family_code_logreg_cls.joblib.pkl')
clf_os_code = joblib.load('cls/prod_os_code_logreg_cls.joblib.pkl')
clf_ua_family_code = joblib.load('cls/prod_ua_family_code_logreg_cls.joblib.pkl')
clf_ua_version = joblib.load('cls/prod_ua_version_logreg_cls.joblib.pkl')
###Output
_____no_output_____
###Markdown
Load test data
###Code
main_data = np.load('df/main_prodtest_data1.npy').tolist()[200000:250000]
values_data = np.load('df/values_prodtest_data1.npy').tolist()[200000:250000]
order_data = np.load('df/order_prodtest_data1.npy').tolist()[200000:250000]
main_df = pd.DataFrame(main_data)
main_df
important_values_keys_set = {
'Accept',
'Accept-Charset',
'Accept-Encoding'
}
important_orders_keys_set = {
'Upgrade-Insecure-Requests',
'Accept',
'If-Modified-Since',
'Host',
'Connection',
'User-Agent',
'From',
'Accept-Encoding'
}
from lib.parsers.logParser import LogParser
l_parser = LogParser(log_folder='Logs/')
l_parser.reassign_orders_values(order_data, values_data)
X_test = l_parser.prepare_data(orders_vectorizer, values_vectorizer, important_orders_keys_set, important_values_keys_set, fit_dict=False)
###Output
100%|██████████| 50000/50000 [00:00<00:00, 540769.30it/s]
100%|██████████| 50000/50000 [00:00<00:00, 442357.81it/s]
100%|██████████| 50000/50000 [00:00<00:00, 699685.05it/s]
###Markdown
Calculate scores. Note: for the Decision Tree, cross_val_score uses 'Accuracy' by default. For the linear (logistic regression) models we will not compute 'Accuracy' on 3 or 5 folds (that takes long); we simply take the 'Accuracy' on the training sample.
###Code
thres = 0.00001  # probability threshold handed to ThresholdPredictions.bot_predict below
###Output
_____no_output_____
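###Markdown
Since the note above settles for plain accuracy instead of cross-validation, here is a minimal sketch, for illustration only, of how such a score can be read off directly for one of the loaded classifiers, using objects already defined in this test part.
###Code
# Hedged sketch: mean accuracy of the os_family_code classifier on the
# loaded test slice, via LogisticRegression.score.
acc = clf_os_family_code.score(X_test, main_df.os_family_code.fillna('NaN'))
print('os_family_code accuracy on this slice: {:.4f}'.format(acc))
###Output
_____no_output_____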
###Markdown
**Browser (clf_ua_family_code)**
###Code
from lib.thresholdPredictions import ThresholdPredictions
pred = ThresholdPredictions(user_agent_list=clf_ua_family_code.classes_.tolist(), clf=clf_ua_family_code)
y_test_names, y_predicted, compare_answers, is_bot, answers_count = pred.bot_predict(X_test, main_df.ua_family_code.fillna('NaN'), thres, sparce_y=False, mark_new_labels_None=True, single_labels=True)
compare_frame = pd.concat(
[
pd.DataFrame(y_test_names),
y_predicted,
pd.DataFrame(compare_answers),
pd.DataFrame(is_bot),
pd.DataFrame(answers_count)
], keys=['browser_name', 'predicted_browser_name', 'browser_name_correctness', 'browser_name_bot', 'browser_name_count'], axis=1, join='inner')
compare_frame
###Output
_____no_output_____
###Markdown
Accuracy: $ACC = \frac{TP + TN}{P + N},\ \ \mathrm{where}\ \ P + N = length,\ \ TP = sum(True), \ \ TN = 0$
###Code
compare_frame.browser_name_bot[0].value_counts()
print('Confirmed bot: {}'.format(sum(compare_frame.browser_name_bot[0])/50000))
###Output
Confirmed bot: 0.11904
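###Markdown
To connect this with the accuracy formula above, a minimal sketch that instantiates $ACC = \frac{TP + TN}{P + N}$ with $TN = 0$ from the correctness column built into compare_frame; it is illustrative only.
###Code
# Hedged sketch: TP = number of True entries in browser_name_correctness,
# P + N = number of rows in the slice.
acc_browser_name = compare_frame['browser_name_correctness'][0].sum() / len(compare_frame)
print('Browser family accuracy on this slice: {:.5f}'.format(acc_browser_name))
###Output
_____no_output_____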
###Markdown
**Browser + Browser version (clf_ua_family_code + clf_ua_version)**
###Code
pred = ThresholdPredictions(user_agent_list=clf_ua_version.classes_.tolist(), clf=clf_ua_version)
y_test_names, y_predicted, compare_answers, is_bot, answers_count = pred.bot_predict(X_test, main_df.ua_version.fillna('NaN'), thres, sparce_y=False, mark_new_labels_None=True, single_labels=True)
compare_frame['browser_version'] = pd.DataFrame(y_test_names)
compare_frame['predicted_browser_version'] = y_predicted
compare_frame['browser_version_correctness'] = pd.DataFrame(compare_answers)
compare_frame['browser_version_bot'] = pd.DataFrame(is_bot)
compare_frame['browser_version_count'] = pd.DataFrame(answers_count)
compare_frame
print('Confirmed bot: {}'.format(sum(compare_frame.browser_version_bot)/50000))
print('Conditional Confirmed bot: {}'.format(sum(compare_frame.browser_name_bot[0] | compare_frame.browser_version_bot)/50000))
###Output
Confirmed bot: 0.27672
Conditional Confirmed bot: 0.33966
###Markdown
**Browser + Browser version + Platform (clf_ua_family_code + clf_ua_version + clf_os_family_code)**
###Code
pred = ThresholdPredictions(user_agent_list=clf_os_family_code.classes_.tolist(), clf=clf_os_family_code)
y_test_names, y_predicted, compare_answers, is_bot, answers_count = pred.bot_predict(X_test, main_df.os_family_code.fillna('NaN'), thres, sparce_y=False, mark_new_labels_None=True, single_labels=True)
compare_frame['platform'] = pd.DataFrame(y_test_names)
compare_frame['predicted_platform'] = y_predicted
compare_frame['platform_correctness'] = pd.DataFrame(compare_answers)
compare_frame['platform_bot'] = pd.DataFrame(is_bot)
compare_frame['platform_count'] = pd.DataFrame(answers_count)
compare_frame
print('Confirmed bot: {}'.format(sum(compare_frame.platform_bot)/50000))
print('Conditional Confirmed bot: {}'.format(sum(compare_frame.browser_name_bot[0] | compare_frame.browser_version_bot | compare_frame.platform_bot)/50000))
###Output
Confirmed bot: 0.0
Conditional Confirmed bot: 0.33966
###Markdown
**Browser + Browser version + Platform + Platform version (clf_ua_family_code + clf_ua_version + clf_os_family_code + clf_os_code)**
###Code
pred = ThresholdPredictions(user_agent_list=clf_os_code.classes_.tolist(), clf=clf_os_code)
y_test_names, y_predicted, compare_answers, is_bot, answers_count = pred.bot_predict(X_test, main_df.os_code.fillna('NaN'), thres, sparce_y=False, mark_new_labels_None=True, single_labels=True)
compare_frame['platform_version'] = pd.DataFrame(y_test_names)
compare_frame['predicted_platform_version'] = y_predicted
compare_frame['platform_version_correctness'] = pd.DataFrame(compare_answers)
compare_frame['platform_version_bot'] = pd.DataFrame(is_bot)
compare_frame['platform_version_count'] = pd.DataFrame(answers_count)
compare_frame
print('Confirmed bot: {}'.format(sum(compare_frame.platform_version_bot)/50000))
print('Conditional Confirmed bot: {}'.format(sum(compare_frame.browser_name_bot[0] | compare_frame.browser_version_bot | compare_frame.platform_bot | compare_frame.platform_version_bot)/50000))
###Output
Confirmed bot: 0.06036
Conditional Confirmed bot: 0.35916
|