path | concatenated_notebook |
---|---|
docs/howto/grouped_data.ipynb | ###Markdown
How to work with grouped data. One property that often appears in data-science problems is a natural grouping of the data. You could, for instance, have multiple samples for the same customer. In such a case, you need to make sure that all samples from a given group land in the same fold, e.g. in cross-validation. Let's prepare a dataset with groups.
###Code
from sklearn.datasets import make_classification
X, y = make_classification(n_samples=100, n_features=10, random_state=42)
groups = [i % 5 for i in range(100)]
groups[:10]
###Output
_____no_output_____
###Markdown
The integers in the `groups` variable indicate the group id to which a given sample belongs. One of the easiest ways to ensure that the data is split using the group information is `from sklearn.model_selection import GroupKFold`. You can also read about other ways of splitting grouped data in sklearn [here](https://scikit-learn.org/stable/modules/cross_validation.html#cross-validation-iterators-for-grouped-data); one alternative splitter is sketched after the next cell.
###Code
from sklearn.model_selection import GroupKFold
cv = list(GroupKFold(n_splits=5).split(X, y, groups=groups))
###Output
_____no_output_____
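###Markdown
The linked page above lists several group-aware splitters. As a small aside (a hedged sketch, not part of the original how-to), `GroupShuffleSplit` from the same module produces randomized train/test splits that keep every group intact; it reuses the `X`, `y` and `groups` defined above.
###Code
from sklearn.model_selection import GroupShuffleSplit
# Each split keeps all samples of a group on the same side of the train/test boundary.
gss = GroupShuffleSplit(n_splits=3, test_size=0.2, random_state=42)
cv_shuffle = list(gss.split(X, y, groups=groups))
###Output
_____no_output_____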
###Markdown
Such a list of splits can be passed to the `cv` parameter in `probatus`, as well as to hyperparameter-optimization classes such as `RandomizedSearchCV`.
###Code
from probatus.feature_elimination import ShapRFECV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV
clf = RandomForestClassifier(random_state=42)
param_grid = {
'n_estimators': [5, 7, 10],
'max_leaf_nodes': [3, 5, 7, 10],
}
search = RandomizedSearchCV(clf, param_grid, cv=cv, n_iter=1, random_state=42)
shap_elimination = ShapRFECV(
clf=search, step=0.2, cv=cv, scoring='roc_auc', n_jobs=3, random_state=42)
report = shap_elimination.fit_compute(X, y)
report
###Output
_____no_output_____ |
Tarea_8.ipynb | ###Markdown
###Code
def busqB(id, lista, izquierda, derecha):
    # Recursive binary search over a list of dicts sorted by 'id'.
    # `derecha` is an exclusive upper bound, hence the initial call uses len(lista).
    if izquierda >= derecha:
        return "No existe el id"
    mitad = (izquierda + derecha)//2
    if lista[mitad].get('id') == id:
        return mitad
    elif lista[mitad].get('id') > id:
        # keep searching the left half [izquierda, mitad)
        return busqB(id, lista, izquierda, mitad)
    else:
        # keep searching the right half [mitad+1, derecha)
        return busqB(id, lista, (mitad+1), derecha)
alumno1={'id':2, 'nombre':"Juan" , 'carrera':"ICO", 'promedio':7.67}
alumno2={'id':4, 'nombre':"Rocio" , 'carrera':"ICI", 'promedio':8.67}
alumno3={'id':5, 'nombre':"Diego" , 'carrera':"DER", 'promedio':8.98}
alumno4={'id':7, 'nombre':"May" , 'carrera':"ICI", 'promedio':9.87}
alumno5={'id':9, 'nombre':"Rob" , 'carrera':"IME", 'promedio':10.00}
alumno6={'id':10, 'nombre':"Santi" , 'carrera':"ICO", 'promedio':5.37}
alumno7={'id':14, 'nombre':"Moy" , 'carrera':"IME", 'promedio':6.85}
alumno8={'id':16, 'nombre':"Diana" , 'carrera':"DER", 'promedio':9.99}
alumno9={'id':19, 'nombre':"Zoila" , 'carrera':"ICO", 'promedio':8.22}
alumno10={'id':22, 'nombre':"Armando" , 'carrera':"ICO", 'promedio':7.32}
lista = []
lista.append(alumno1)
lista.append(alumno2)
lista.append(alumno3)
lista.append(alumno4)
lista.append(alumno5)
lista.append(alumno6)
lista.append(alumno7)
lista.append(alumno8)
lista.append(alumno9)
lista.append(alumno10)
id = int(input("ID del estudiante: "))
alumno = busqB(id, lista, 0, len(lista))
if isinstance(alumno, int):
    print(f"{lista[alumno].get('nombre')} estudia la carrera de {lista[alumno].get('carrera')} y tiene un promedio de {lista[alumno].get('promedio')}")
else:
    print(alumno)
###Output
_____no_output_____
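###Markdown
As an aside (a sketch, not part of the original assignment), the same lookup can be done with the standard-library `bisect` module on the sorted `lista` built above; `buscar_por_id` is just an illustrative helper name.
###Code
import bisect
def buscar_por_id(lista, id_buscado):
    # Binary search via bisect; assumes `lista` is sorted by 'id', as it is above.
    ids = [al['id'] for al in lista]
    pos = bisect.bisect_left(ids, id_buscado)
    if pos < len(ids) and ids[pos] == id_buscado:
        return lista[pos]
    return None
print(buscar_por_id(lista, 7))
###Output
_____no_output_____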
###Markdown
###Code
alumno1={'id':2, 'nombre':"Juan" , 'carrera':"ICO", 'promedio':7.67}
alumno2={'id':4, 'nombre':"Rocio" , 'carrera':"ICI", 'promedio':8.67}
alumno3={'id':5, 'nombre':"Diego" , 'carrera':"DER", 'promedio':8.98}
alumno4={'id':7, 'nombre':"May" , 'carrera':"ICI", 'promedio':9.87}
alumno5={'id':9, 'nombre':"Rob" , 'carrera':"IME", 'promedio':10.00}
alumno6={'id':10, 'nombre':"Santi" , 'carrera':"ICO", 'promedio':5.37}
alumno7={'id':14, 'nombre':"Moy" , 'carrera':"IME", 'promedio':6.85}
alumno8={'id':16, 'nombre':"Diana" , 'carrera':"DER", 'promedio':9.99}
alumno9={'id':19, 'nombre':"Zoila" , 'carrera':"ICO", 'promedio':8.22}
alumno10={'id':22, 'nombre':"Armando" , 'carrera':"ICO", 'promedio':7.32}
bd = []
bd.append(alumno1)
bd.append(alumno2)
bd.append(alumno3)
bd.append(alumno4)
bd.append(alumno5)
bd.append(alumno6)
bd.append(alumno7)
bd.append(alumno8)
bd.append(alumno9)
bd.append(alumno10)
def BusquedaBin(cadena, id, inicio, final, indice):
if id < 0 or id == 1 or id == 3 or id == 6 or id == 8 or id == 11 or id == 12 or id == 13 or id == 15 or id == 17 or id == 18 or id == 20 or id == 21 or id > 22:
print("No se ha encontrado la ID del alumno")
pass
elif cadena[indice]["id"] == id:
print(bd[indice]["nombre"], "estudia la carrera de", bd[indice]["carrera"], "y actualmente tiene un promedio de", bd[indice]["promedio"])
elif id < bd[indice]["id"]:
final = final // 2
indice = final // 2
BusquedaBin(cadena, id, inicio, final, indice)
elif id > bd[indice]["id"]:
inicio = indice
indice=inicio+(final//4)
BusquedaBin(cadena, id, inicio, final, indice)
id = int(input("Ingrese el ID del alumno que quiere consultar: "))
BusquedaBin(bd, id, 0, len(bd), len(bd)//2 )
###Output
Ingrese el ID del alumno que quiere consultar: 6
No se ha encontrado la ID del alumno
###Markdown
Simulation of Financial Processes. **Names:** Betsy Torres | Eduardo Loza **Date:** October 31, 2020. **Student IDs:** 714095 | 713423 **Professor:** Oscar David Jaramillo Zuluaga. **GitHub link:** https://github.com/BetsyTorres/ProyectoConjunto_TorresBetsy_LozaEduardo/blob/main/Tarea_8.ipynb Homework 8 Exercise Solution by Betsy Torres We have the following piecewise function: $$f(x)= \left\{ \begin{array}{lcc} \frac{1}{x^2} & \text{if} & x \geq 1 \\ \\ 0 & \text{for} & \text{other } x \end{array} \right.$$ Solving the integral gives: $$\int_{x}^{1}\frac{1}{x^2}\,dx=\frac{1}{x}-1$$ The next step is to set this equal to $u$: $$\frac{1}{x}-1 = u$$ Finally, solving for $x$ yields: $$x=\frac{1}{u+1}$$
###Code
# Import libraries
import numpy as np
from functools import reduce
import time
import matplotlib.pyplot as plt
import scipy.stats as st # statistics library
import pandas as pd
from scipy import optimize
###Output
_____no_output_____
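###Markdown
A quick sanity check of the inverse-transform formula derived above (a sketch, not part of the original solution): assuming $u$ is drawn uniformly from $(-1, 0)$, so that $x = 1/(u+1)$ lands in $[1, \infty)$, a histogram of the transformed samples should roughly follow $f(x)=1/x^2$.
###Code
# Sanity check (assumes u ~ Uniform(-1, 0) so that x = 1/(u+1) >= 1)
u_check = np.random.uniform(-1, 0, 100000)
x_check = 1 / (u_check + 1)
xs = np.linspace(1, 10, 200)
plt.hist(x_check[x_check < 10], bins=100, density=True, alpha=0.5, label='transformed samples')
plt.plot(xs, 1 / xs ** 2, 'r', label='f(x) = 1/x^2')
plt.legend()
plt.show()
###Output
_____no_output_____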
###Markdown
a) Monte Carlo
###Code
np.random.seed(55555)
f_x = lambda x:1/(x ** 2) if x>=1 else 0
f_inv = lambda u : 1/(u+1)
N = 10
u=np.random.uniform(-1,1,N)
mc = f_inv(u).mean()
print('La media con el método de Montecarlo crudo es:',mc)
###Output
La media con el método de Montecarlo crudo es: 1.2903119480353522
###Markdown
b) Stratified sampling
###Code
n1, n2, n3 = 3, 3, 4
a, b, c, d = 0, 0.6, 0.9, 1
s= n1+n2+n3
r1 = np.random.uniform(a, b, n1)
r2 = np.random.uniform(b, c, n2)
r3 = np.random.uniform(c, d, n3)
# values in each stratum
r = [r1,r2,r3]
# number of strata
m = len(r)
# weights
w1 = (n1/s)/0.6
w2 = (n2/s)/0.3
w3 = (n3/s)/0.1
w = [w1, w2, w3]
# evaluate r through the inverse to obtain xi
xi = list(map(f_inv,r))
muestras = [list(map(lambda x: f_inv(x), i)) for i in r]
estrato = [list(map(lambda i, w: i/w, i,w))for i in muestras]
muestra_est = np.concatenate(estrato).mean()
print("La media con", m, "estratos es:", muestra_est)
###Output
La media con 3 estratos es: 0.6359586152424306
###Markdown
c) Stratified sampling 2
###Code
def estratos(x):
U2 =np.random.rand(x)
i =np.arange(0,x)
estratos = (U2+i)/x
return estratos
x=10
rand = estratos(x)
muestra_est2 = lmuestras = list(map(lambda x: f_inv(x), rand))
np.mean(muestra_est2)
###Output
_____no_output_____
###Markdown
d) Complementary numbers
###Code
np.random.seed(55555)
# complementary (antithetic) values of the first vector
u_complementaria = -u
# evaluate the complementary values in the inverse function and compute the mean
complementario = f_inv(u_complementaria).mean()
# take the average of the two means
complementarios = (mc+complementario)/2
print("La media con el método de números complementarios es ", complementarios)
###Output
La media con el método de números complementarios es 1.9236274673860305
###Markdown
Solution by Eduardo Loza
###Code
import numpy as np
import matplotlib.pyplot as plt
from itertools import cycle
import matplotlib.pyplot as plt
import scipy.stats as st
import pandas as pd
###Output
_____no_output_____
###Markdown
$$F(x)=\int_{x}^{1} \dfrac{1}{x^2}\,dx=\dfrac{1}{x}-1$$ $$\dfrac{1}{x}-1=u$$ $$x=\dfrac{1}{u+1}$$ a) Monte Carlo
###Code
np.random.seed(55555)
f_x = lambda x:1/(x ** 2) if x>=1 else 0
f_inv = lambda u : 1/(u+1)
N = 10
u=np.random.uniform(-1,1,N)
mcrudo = f_inv(u).mean()
print('La media con montecarlo crudo es:',mcrudo)
###Output
La media con montecarlo crudo es: 1.2903119480353522
###Markdown
b) Stratified sampling
###Code
np.random.seed(55555)
r1 = np.random.uniform(0,0.6,3)
r2 = np.random.uniform(0.6,0.9,3)
r3 = np.random.uniform(0.9,1,4)
# values in each stratum
r = [r1,r2,r3]
# number of strata
m = range(len(r))
w = [1/2,1,4] # weights
# evaluate to obtain xi
xi = list(map(f_inv,r))
muestras = [list(map(lambda x: f_inv(x), i)) for i in r]
estrato = [list(map(lambda i, w: i/w, i,w))for i in muestras]
mestratificado = np.concatenate(estrato).mean()
print("La media con 3 estratos es ", mestratificado)
###Output
La media con 3 estratos es 0.6794379994929494
###Markdown
c) Stratified sampling 2
###Code
def estratos(x):
U2 =np.random.rand(x)
i =np.arange(0,x)
estratos = (U2+i)/x
return estratos
random = estratos(10)
muestras2 = lmuestras = list(map(lambda x: f_inv(x), random))
np.mean(muestras2)
###Output
_____no_output_____
###Markdown
d) Complementary numbers
###Code
# complementary (antithetic) values of the first vector
u_c = -u
complementario = f_inv(u_c).mean() # evaluate the complementary values in the inverse function and compute the mean
complementarios = (mcrudo+complementario)/2 # take the average of the two means
print("La media con el método de números complementarios es ", complementarios)
###Output
La media con el método de números complementarios es 1.9236274673860305
###Markdown
###Code
def busqBin(dic,busq):
x =len(dic)//2
if len(dic) == 0:
print("El valor no se encuentra")
return x
if dic[x].get('id') == busq:
print("Nombre:",dic[x].get('nombre'),",estudia la carrera de:",dic[x].get('carrera'),", y tiene un promedio de:",dic[x].get('promedio') )
elif dic[x].get('id') > busq:
busqBin(dic[:x],busq)
elif dic[x].get('id') < busq:
busqBin(dic[x+1:], busq)
alumno1={'id':2, 'nombre':"Juan" , 'carrera':"ICO", 'promedio':7.67}
alumno2={'id':4, 'nombre':"Rocio" , 'carrera':"ICI", 'promedio':8.67}
alumno3={'id':5, 'nombre':"Diego" , 'carrera':"DER", 'promedio':8.98}
alumno4={'id':7, 'nombre':"May" , 'carrera':"ICI", 'promedio':9.87}
alumno5={'id':9, 'nombre':"Rob" , 'carrera':"IME", 'promedio':10.00}
alumno6={'id':10, 'nombre':"Santi" , 'carrera':"ICO", 'promedio':5.37}
alumno7={'id':14, 'nombre':"Moy" , 'carrera':"IME", 'promedio':6.85}
alumno8={'id':16, 'nombre':"Diana" , 'carrera':"DER", 'promedio':9.99}
alumno9={'id':19, 'nombre':"Zoila" , 'carrera':"ICO", 'promedio':8.22}
alumno10={'id':22, 'nombre':"Armando" , 'carrera':"ICO", 'promedio':7.32}
alumno11={'id':27, 'nombre':"Fernan" , 'carrera':"ICO", 'promedio':8.32}
alumno12={'id':32, 'nombre':"Dana" , 'carrera':"ICO", 'promedio':6.45}
alumno13={'id':39, 'nombre':"Luisa" , 'carrera':"DER", 'promedio':8.66}
alumno14={'id':44, 'nombre':"Sam" , 'carrera':"ICO", 'promedio':9.53}
alumno15={'id':45, 'nombre':"Juan" , 'carrera':"ICI", 'promedio':8.75}
alumno16={'id':57, 'nombre':"Jerico" , 'carrera':"ICO", 'promedio':6.02}
alumno17={'id':63, 'nombre':"Jesus" , 'carrera':"ICO", 'promedio':9.21}
alumno18={'id':70, 'nombre':"Dani" , 'carrera':"DER", 'promedio':5.00}
bd = []
bd.append(alumno1)
bd.append(alumno2)
bd.append(alumno3)
bd.append(alumno4)
bd.append(alumno5)
bd.append(alumno6)
bd.append(alumno7)
bd.append(alumno8)
bd.append(alumno9)
bd.append(alumno10)
bd.append(alumno11)
bd.append(alumno12)
bd.append(alumno13)
bd.append(alumno14)
bd.append(alumno15)
bd.append(alumno16)
bd.append(alumno17)
bd.append(alumno18)
busqBin(bd,44)
###Output
Nombre: Sam ,estudia la carrera de: ICO , y tiene un promedio de: 9.53
|
notebooks/examples/2 - Plot - Reproject.ipynb | ###Markdown
This notebook is the first of two notebooks that help plot our products using rasterio. We simply take all the raster products produced in the last notebook and reproject them to lat/lon coordinates (i.e. `epsg:4326`) for visualization.
###Code
import rasterio
import numpy as np
import matplotlib.pyplot as plt
from orinoco import reproject_arr_to_match_profile, reproject_arr_to_new_crs
from pathlib import Path
import glob
from tqdm import tqdm
###Output
_____no_output_____
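###Markdown
For reference, the same reprojection can also be done with plain `rasterio.warp` instead of the `orinoco` helpers used below; a minimal sketch (the `src_path`/`dst_path` arguments are placeholders, not files from this project):
###Code
from rasterio.warp import calculate_default_transform, reproject, Resampling
def reproject_to_4326(src_path, dst_path):
    # Generic rasterio-only reprojection to EPSG:4326 with nearest-neighbour resampling.
    with rasterio.open(src_path) as src:
        transform, width, height = calculate_default_transform(
            src.crs, 'EPSG:4326', src.width, src.height, *src.bounds)
        profile = src.profile.copy()
        profile.update(crs='EPSG:4326', transform=transform, width=width, height=height)
        with rasterio.open(dst_path, 'w', **profile) as dst:
            for i in range(1, src.count + 1):
                reproject(source=rasterio.band(src, i),
                          destination=rasterio.band(dst, i),
                          src_transform=src.transform,
                          src_crs=src.crs,
                          dst_transform=transform,
                          dst_crs='EPSG:4326',
                          resampling=Resampling.nearest)
###Output
_____no_output_____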
###Markdown
Directories of Orinoco Products. Again, there are two options: 1. `stamen_terrain_12` 2. `google_16`. Toggle the map name to change. **We assume you have run the previous notebooks using said option.**
###Code
# options are `stamen_terrain_12` or `google_16`
map_name = 'stamen_terrain_12'
data_path = Path(f'data/{map_name}')
product_dir = Path('products')
products_for_map_dir = product_dir/map_name
products_for_map_dir.exists()
###Output
_____no_output_____
###Markdown
Directory of Our Reprojected Rasters
###Code
products_for_map_dir_4326 = Path(f'{products_for_map_dir}_4326')
products_for_map_dir_4326.mkdir(exist_ok=True)
###Output
_____no_output_____
###Markdown
Let's visualize one of the rasters for sanity's sake.
###Code
with rasterio.open(data_path/f'{map_name}.tif') as ds:
src_profile = ds.profile
src_arr = ds.read()
src_arr_rgb = src_arr.transpose([1, 2, 0])[...,:3]
plt.imshow(src_arr_rgb)
###Output
_____no_output_____
###Markdown
Reprojecting One Raster
###Code
if map_name == 'stamen_terrain_12':
target_resolution = 0.0003
elif map_name == 'google_16':
# divide by 15 if really want the 2 meter raster
# Using the same resolution for both masks ensures that
# we have approximately the same indices for all viz
target_resolution = 0.0003 #/ 15
else:
raise ValueError('only works for google_16 and stamen_terrain_12')
src_arr_4326, src_profile_4326 = reproject_arr_to_new_crs(src_arr,
src_profile,
'epsg:4326',
resampling='nearest',
target_resolution=target_resolution
)
src_arr_4326.shape
###Output
_____no_output_____
###Markdown
We save the reprojected raster.
###Code
src_profile_4326['dtype'] = 'uint8'
with rasterio.open(products_for_map_dir_4326/f'{map_name}.tif', 'w', **src_profile_4326) as ds:
ds.write(src_arr_4326.astype(np.uint8))
###Output
_____no_output_____
###Markdown
Let's visualize the raster as a sanity check.
###Code
src_arr_rgb_4326 = src_arr_4326.transpose([1, 2, 0])[...,:3].astype(int)
plt.imshow(src_arr_rgb_4326)
###Output
_____no_output_____
###Markdown
Reprojecting the Rest. We collect all the tif files (rasters) in our product directory and automate the previous process, using the reference profile to ensure they are all reprojected to the same CRS and frame.
###Code
raster_data = list(products_for_map_dir.glob('*.tif'))
raster_data
REFERENCE_PROFILE = src_profile_4326.copy()
REFERENCE_PROFILE['count'] = 1
def reproject_single_band(path):
with rasterio.open(path) as ds:
band = ds.read()
profile = ds.profile
r_copy = REFERENCE_PROFILE.copy()
r_copy['count'] = profile['count']
dtype = profile['dtype']
r_copy['dtype'] = dtype
band_r, _ = reproject_arr_to_match_profile(band, profile, r_copy, resampling='nearest')
with rasterio.open(products_for_map_dir_4326/path.name, 'w', **r_copy) as ds:
ds.write(band_r.astype(dtype))
return products_for_map_dir_4326/path.name
list(map(reproject_single_band, tqdm(raster_data)))
###Output
100%|██████████| 8/8 [00:00<00:00, 9.97it/s]
|
Part 1 - Training.ipynb | ###Markdown
Dataset & Dataloader
###Code
from src.dataset import AudioFolder
from torch.utils.data import DataLoader
# Assumed preamble (not shown in this excerpt) needed by the cells below; Callback/set_config are assumed to be the jcopdl helpers used later in this notebook
import torch
from torch import nn, optim
from tqdm.auto import tqdm
from jcopdl.callback import Callback, set_config
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
sr = 16000
train_set = AudioFolder("dataset/train/", sr=sr, n_data=500, slice_dur=3.3)
trainloader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=2)
test_set = AudioFolder("dataset/valid/", sr=sr, n_data=200, slice_dur=2.4)
testloader = DataLoader(test_set, batch_size=128, num_workers=2)
###Output
_____no_output_____
###Markdown
Architecture & Config
###Code
from src.model import SpeakerDiarization
config = set_config({
"n_input": train_set.n_mel,
"n_output": len(train_set.classes),
"n_hidden": 256,
"n_layer": 5,
"dropout": 0.2,
"audio_params": train_set.audio_params,
"classes": train_set.classes
})
###Output
_____no_output_____
###Markdown
Training Preparation -> MCOC (Model, Criterion, Optimizer, Callback)
###Code
model = SpeakerDiarization(config.n_input, config.n_output, config.n_hidden, config.n_layer, config.dropout).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.AdamW(model.parameters(), lr=0.001)
callback = Callback(model, config, outdir="model")
###Output
_____no_output_____
###Markdown
Training
###Code
while True:
model.train()
cost = 0
for specs, labels in tqdm(trainloader, desc="Train"):
specs, labels = specs.to(device), labels.to(device)
output, _ = model(specs, None)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
cost += loss.item()*specs.shape[0]
train_cost = cost/len(train_set)
with torch.no_grad():
model.eval()
cost = 0
for specs, labels in tqdm(testloader, desc="Test"):
specs, labels = specs.to(device), labels.to(device)
output, _ = model(specs, None)
loss = criterion(output, labels)
cost += loss.item()*specs.shape[0]
test_cost = cost/len(test_set)
# Logging
callback.log(train_cost, test_cost)
# Runtime Plotting
callback.cost_runtime_plotting()
# Early Stopping
if callback.early_stopping(model, monitor="test_cost"):
callback.plot_cost()
break
###Output
_____no_output_____
###Markdown
Dataset & Dataloader
###Code
from src.dataset import VCTKTripletDataset, VCTKTripletDataloader
from torch.utils.data import DataLoader
bs = 32
train_set = VCTKTripletDataset("vctk_dataset/wav48/", "vctk_dataset/txt/", n_data=3000, min_dur=1.5)
trainloader = VCTKTripletDataloader(train_set, batch_size=bs)
test_set = VCTKTripletDataset("vctk_dataset/wav48/", "vctk_dataset/txt/", n_data=3000, min_dur=1.5)
testloader = VCTKTripletDataloader(test_set, batch_size=bs)
###Output
_____no_output_____
###Markdown
Architecture & Config
###Code
from src.model import Encoder
config = set_config({
"ndim": 256,
"margin": 1,
"sr": train_set.sr,
"n_mfcc": train_set.n_mfcc,
"min_dur": train_set.min_dur
})
###Output
_____no_output_____
###Markdown
Training Preparation
###Code
from jcopdl.optim import RangerLARS
model = Encoder(ndim=config.ndim, triplet=True).to(device)
criterion = nn.TripletMarginLoss(config.margin)
callback = Callback(model, config, outdir="model", early_stop_patience=15)
optimizer = RangerLARS(model.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
Training
###Code
from tqdm.auto import tqdm
while True:
if callback.ckpt.epoch % 15 == 0:
train_set = VCTKTripletDataset("vctk_dataset/wav48/", "vctk_dataset/txt/", n_data=3000)
trainloader = VCTKTripletDataloader(train_set, batch_size=bs)
model.train()
cost = 0
for images, labels in tqdm(trainloader, desc="Train"):
images = images.to(device)
output = model(images)
loss = criterion(output[0], output[1], output[2])
loss.backward()
optimizer.step()
optimizer.zero_grad()
cost += loss.item()*images.shape[0]
train_cost = cost/len(train_set)
with torch.no_grad():
model.eval()
cost = 0
for images, labels in tqdm(testloader, desc="Test"):
images = images.to(device)
output = model(images)
loss = criterion(output[0], output[1], output[2])
cost += loss.item()*images.shape[0]
test_cost = cost/len(test_set)
# Logging
callback.log(train_cost, test_cost)
# Checkpoint
callback.save_checkpoint()
# Runtime Plotting
callback.cost_runtime_plotting()
# Early Stopping
if callback.early_stopping(model, monitor="test_cost"):
callback.plot_cost()
break
###Output
_____no_output_____ |
ml/quote_effectiveness/develop/20180107-tc-visualization.ipynb | ###Markdown
Quote Visualization. Goal: connect to the application database and build some first exploratory visualizations of the quote/branch data.
###Code
#%pylab inline
%matplotlib inline
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
import os, sys
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns
sns.set_context('poster')
sns.set_style('whitegrid')
# sns.set_style('darkgrid')
# plt.rcParams['figure.figsize'] = 12, 8 # plotsize
from decouple import config
config.search_path = '/home/jovyan/work'
###Output
_____no_output_____
###Markdown
Connect to SQL DB
###Code
engine = create_engine(config('DATABASE_URL'))
###Output
_____no_output_____
###Markdown
Pull Branch data
###Code
sql = '''
select * from brs_branch
'''
df_branch = pd.read_sql_query(sql, engine)  # assign to a new name so the pandas alias `pd` is not shadowed
df_branch.hist();
###Output
_____no_output_____ |
notebooks/OLD/cayley_table_OLD.ipynb | ###Markdown
Cayley Table OLD This notebook is only used for trying out ideas.
###Code
import numpy as np
import pprint as pp
class CayleyTable:
def __init__(self, arr):
tmp = np.array(arr, dtype=int)
nrows, ncols = tmp.shape
if nrows == ncols:
if (tmp.min() >= 0) and (tmp.max() < nrows):
self.__order = nrows
self.__table = tmp
else:
raise Exception(f"All integers must be between 0 and {nrows - 1}, inclusive.")
else:
raise Exception(f"Input arrays must be square; this one is {nrows}x{ncols}.")
def __repr__(self):
return f"{self.__class__.__name__}(\n{pp.pformat(self.__table.tolist())}\n)"
def __str__(self):
return f"{self.__class__.__name__}({self.__table.tolist()})"
def __getitem__(self, tup):
row, col = tup
return self.__table[row][col]
@property
def order(self):
return self.__order
@property
def table(self):
return self.__table
def tolist(self):
return self.__table.tolist()
def is_associative(self):
indices = range(len(self.__table))
result = True
for a in indices:
for b in indices:
for c in indices:
ab = self.__table[a][b]
bc = self.__table[b][c]
if not (self.__table[ab][c] == self.__table[a][bc]):
result = False
break
return result
def is_commutative(self):
n = self.__table.shape[0]
result = True
# Loop over the table's upper off-diagonal elements
for a in range(n):
for b in range(a + 1, n):
if self.__table[a][b] != self.__table[b][a]:
result = False
break
return result
def left_identity(self):
indices = range(len(self.__table))
identity = None
for x in indices:
if all(self.__table[x][y] == y for y in indices):
identity = x
break
return identity
def right_identity(self):
indices = range(len(self.__table))
identity = None
for x in indices:
if all(self.__table[y][x] == y for y in indices):
identity = x
break
return identity
def identity(self):
left_id = self.left_identity()
right_id = self.right_identity()
if (left_id is not None) and (right_id is not None):
return left_id
else:
return None
def has_inverses(self):
        if self.identity() is not None:
row_indices, col_indices = np.where(self.__table == self.identity())
if set(row_indices) == set(col_indices):
if len(row_indices) == self.__order:
return True
else:
return False
else:
return False
else:
return False
def inverse_lookup_dict(self, identity):
elements = range(len(self.__table))
row_indices, col_indices = np.where(self.__table == identity)
return {elements[elem_index]: elements[elem_inv_index]
for (elem_index, elem_inv_index)
in zip(row_indices, col_indices)}
def about(self):
table_order = str(self.order)
is_associative = str(self.is_associative())
is_commutative = str(self.is_commutative())
left_id = str(self.left_identity())
right_id = str(self.right_identity())
id = str(self.identity())
has_inverses = str(self.has_inverses())
return table_order, is_associative, is_commutative, left_id, right_id, id, has_inverses
###Output
_____no_output_____
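###Markdown
A quick usage sketch of the class above; the Z4 addition table used here is the same as `arr3` defined further below.
###Code
z4 = CayleyTable([[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]])
print(z4.identity())                          # expected: 0
print(z4.has_inverses())                      # expected: True
print(z4.inverse_lookup_dict(z4.identity()))  # expected: {0: 0, 1: 3, 2: 2, 3: 1}
###Output
_____no_output_____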
###Markdown
A Handy Utility
###Code
def about_tables(list_of_cayley_tables):
print(" Table Order Associative? Commutative? Left Id? Right Id? Identity? Inverses?")
print('-' * 85)
for tbl in list_of_cayley_tables:
i = list_of_cayley_tables.index(tbl) + 1
n, assoc, comm, lid, rid, id, invs = tbl.about()
print(f"{i :>{6}} {n :>{6}} {assoc :>{11}} {comm :>{12}} {lid :>{12}} {rid :>{9}} {id :>{10}} {invs :>{10}}")
# not assoc; is comm; no identities -- the RPS magma table, above
arr1 = [[0, 1, 0], [1, 1, 2], [0, 2, 2]]
# is assoc; not comm; has identity (0) --- the S3 group table
arr2 = [[0, 1, 2, 3, 4, 5], [1, 2, 0, 5, 3, 4], [2, 0, 1, 4, 5, 3],
[3, 4, 5, 0, 1, 2], [4, 5, 3, 2, 0, 1], [5, 3, 4, 1, 2, 0]]
# is assoc; is comm; has identity (0) --- the Z4 group table
arr3 = [[0, 1, 2, 3], [1, 2, 3, 0], [2, 3, 0, 1], [3, 0, 1, 2]]
# is assoc; is comm; has identity (0) --- powerset(3) group table
arr4 = [[0, 1, 2, 3, 4, 5, 6, 7], [1, 0, 4, 5, 2, 3, 7, 6], [2, 4, 0, 6, 1, 7, 3, 5],
[3, 5, 6, 0, 7, 1, 2, 4], [4, 2, 1, 7, 0, 6, 5, 3], [5, 3, 7, 1, 6, 0, 4, 2],
[6, 7, 3, 2, 5, 4, 0, 1], [7, 6, 5, 4, 3, 2, 1, 0]]
arr5 = [[0, 3, 0, 3, 0, 3], [1, 4, 1, 4, 1, 4], [2, 5, 2, 5, 2, 5],
[3, 0, 3, 0, 3, 0], [4, 1, 4, 1, 4, 1], [5, 2, 5, 2, 5, 2]]
# is assoc; is not comm; no left id; has right id --- Smarandache Groupoid
test_arrays = [arr1, arr2, arr3, arr4, arr5]
test_cayley_tables = [CayleyTable(arr) for arr in test_arrays]
about_tables(test_cayley_tables)
###Output
Table Order Associative? Commutative? Left Id? Right Id? Identity? Inverses?
-------------------------------------------------------------------------------------
1 3 False True None None None False
2 6 True False 0 0 0 True
3 4 True True 0 0 0 True
4 8 True True 0 0 0 True
5 6 True False None 0 None False
###Markdown
print(" Table Order Associative? Commutative? Left Id? Right Id? Identity? Inverses?")print('-' * 85)for tbl in test_cayley_tables: i = test_cayley_tables.index(tbl) + 1 n, assoc, comm, lid, rid, id, invs = tbl.about() print(f"{i :>{6}} {n :>{6}} {assoc :>{11}} {comm :>{12}} {lid :>{12}} {rid :>{9}} {id :>{10}} {invs :>{10}}")
###Code
ct1 = CayleyTable(arr5)
ct1
ct1.tolist()
print(ct1)
str(ct1)
###Output
_____no_output_____ |
nmonti-unsupervised-learning-clustering.ipynb | ###Markdown
Project: Identify Customer SegmentsIn this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. **Feel free to add additional code and markdown cells as you go along so that you can explore everything in precise chunks.** The code cells provided in the base template will outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise it.It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. **There will be times in the project where you will need to make and justify your own decisions on how to treat the data.** These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.At the end of most sections, there will be a Markdown cell labeled **Discussion**. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. **Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.**
###Code
# import libraries here; add more as necessary
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler, Imputer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans, MiniBatchKMeans
import warnings
warnings.filterwarnings('ignore')
# magic word for producing visualizations in notebook
%matplotlib inline
'''
Import note: The classroom currently uses sklearn version 0.19.
If you need to use an imputer, it is available in sklearn.preprocessing.Imputer,
instead of sklearn.impute as in newer versions of sklearn.
'''
###Output
_____no_output_____
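###Markdown
A minimal sketch of the `Imputer` mentioned in the note above (sklearn 0.19 API), applied to a toy array just to illustrate the call; the actual imputation decision for this project comes later.
###Code
# Fill NaNs with the column mean (sklearn 0.19 Imputer API)
demo = np.array([[1.0, np.nan], [3.0, 4.0], [np.nan, 6.0]])
Imputer(missing_values='NaN', strategy='mean', axis=0).fit_transform(demo)
###Output
_____no_output_____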
###Markdown
Step 0: Load the DataThere are four files associated with this project (not including this one):- `Udacity_AZDIAS_Subset.csv`: Demographics data for the general population of Germany; 891211 persons (rows) x 85 features (columns).- `Udacity_CUSTOMERS_Subset.csv`: Demographics data for customers of a mail-order company; 191652 persons (rows) x 85 features (columns).- `Data_Dictionary.md`: Detailed information file about the features in the provided datasets.- `AZDIAS_Feature_Summary.csv`: Summary of feature attributes for demographics data; 85 features (rows) x 4 columnsEach row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the `.csv` data files in this project: they're semicolon (`;`) delimited, so you'll need an additional argument in your [`read_csv()`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.
###Code
# Load in the general demographics data.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv',sep=';')
# Load in the feature summary file.
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv',sep=';')
# Check the structure of the data after it's loaded (e.g. print the number of
# rows and columns, print the first few rows).
azdias.shape
azdias.head()
azdias.describe()
feat_info.shape
feat_info.head()
###Output
_____no_output_____
###Markdown
> **Tip**: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut `esc --> a` (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and `esc --> b` adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use `esc --> m` and to convert to a code cell, use `esc --> y`. Step 1: Preprocessing Step 1.1: Assess Missing DataThe feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the **Discussion** cell with your findings and decisions at the end of each step that has one! Step 1.1.1: Convert Missing Value Codes to NaNsThe fourth column of the feature attributes summary (loaded in above as `feat_info`) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. `[-1,0]`), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.**As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.**
###Code
# Identify missing or unknown data values and convert them to NaNs.
# convert string of unknown data into a list
feat_info.missing_or_unknown = feat_info.missing_or_unknown.str.strip('[]').str.split(',')
# replace unknown data with nan
for i in range(len(feat_info)):
for j in range(len(feat_info['missing_or_unknown'][i])):
if feat_info['missing_or_unknown'][i][j] not in ['','X','XX']:
feat_info['missing_or_unknown'][i][j] = int(feat_info['missing_or_unknown'][i][j])
azdias.loc[:, feat_info['attribute'][i]].replace(feat_info['missing_or_unknown'][i], np.nan, inplace=True)
###Output
_____no_output_____
###Markdown
Step 1.1.2: Assess Missing Data in Each ColumnHow much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's [`hist()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html) function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)For the remaining features, are there any patterns in which columns have, or share, missing data?
###Code
# Perform an assessment of how much missing data there is in each column of the
# dataset.
na_count = azdias.isnull().sum()
na_percent = np.round(na_count * 100 / azdias.shape[0], 2)
colnames = na_count.index
data_tuples = list(zip(colnames, na_count, na_percent))
df_nan_col = pd.DataFrame(data_tuples, columns=['attribute','na_count', 'na_percent'], index=na_count.index)
df_nan_col.sort_values(by='na_count', ascending=False, inplace=True)
# Investigate patterns in the amount of missing data in each column.
df_nan_col.na_percent.plot.bar(figsize=(16, 9))
plt.title('NaN by Feature')
plt.ylabel('% of NaN values found by feature')
plt.show()
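# The project text above also suggests matplotlib's hist(); a quick hedged sketch of the same
# assessment as a histogram of per-column NaN percentages (an alternative view, not a new step).
plt.hist(df_nan_col.na_percent, bins=30)
plt.xlabel('% of NaN values in a column')
plt.ylabel('number of columns')
plt.title('Distribution of missing-value percentages across columns')
plt.show()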
# Remove the outlier columns from the dataset. (You'll perform other data
# engineering tasks such as re-encoding and imputation later.)
dropcolnames = df_nan_col[df_nan_col.na_percent > 25].index.tolist()
dropcolnames
azdias_dropcols = azdias.drop(dropcolnames, axis=1)
azdias_dropcols.shape
# For the remaining features, are there any patterns in which columns have, or share, missing data?
df_nan_dropcols = df_nan_col.drop(dropcolnames, axis=0)
feat_info_dropcols = df_nan_dropcols.reset_index().merge(feat_info, how='left').set_index('index')
feat_info_dropcols.shape
features_sel = feat_info_dropcols['attribute']
data_explore = azdias.loc[:,features_sel]
plt.figure(figsize=(16,16))
sns.set(font_scale=0.7)
hmap = sns.heatmap(data_explore.corr(), cmap="YlGnBu",linewidths=.5,xticklabels=True, yticklabels=True)
plt.title('Correlation heatmap for the remaining features')
plt.show()
###Output
_____no_output_____
###Markdown
Discussion 1.1.2: Assess Missing Data in Each ColumnBased on the 'NaN by Feature' bar plot above, there are some columns which seem to have a higher proportion of missing data in comparison to the rest. Although it may sound a bit arbitrary, I decided to exclude any column with more than 25% of mising data. I am not sure what would be a reasonable cut-off value to drop a column but by looking at the bar plot it seems that the proportion of NaN values in each column starts to decrease slowly after the ALTER_HH feature (which has 34.81% of missing values). This means that any cut-off value below the NaN proportion of the following feature -named KKK- could result in a significan amount of columns being dropped - which is not a desired situation since we could be losing lots of information. Having said this, I decided to come up with a 25% cut-off value which is somewhere in between the ALTER_HH and the KKK proportions of NaN.In addition, I investigated the correlation for the remaining features by plotting a correlation heatmap. Potentially, I could drop some additional features which have a high proportion of NaN values and which are highly correlated with other features that have no NaN values or a lower proportion of them. We could argue that those features would not provide additional value when it comes to perform the clustering analysis since there are other features with more complete data providing similar information. So, removing those correlated features with high missing data may improve our clustering goal by getting rid of noisy data. Unfortunately, it seems that those features that are correlated to each other have typically a very similar amount of missing values. So, I am not convinced that we will benefit that much from dropping additional columns just because they present high correlation. In addition, since I will be applying the PCA before the clustering, I would expect for the PCA to take care of assigning low weights to those features which are highly correlated to others. So, at this stage, I would prefer not to drop any additional features. Step 1.1.3: Assess Missing Data in Each RowNow, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have a very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.- You can use seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) function to create a bar chart of code frequencies and matplotlib's [`subplot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplot.html) function to put bar charts for the two subplots side by side.- To reduce repeated code, you might want to write a function that can perform this comparison, taking as one of its arguments a column to be compared.Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. 
If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. **Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.**
###Code
# How much data is missing in each row of the dataset?
na_countperrow = azdias_dropcols.isnull().sum(axis=1)
row_per = sum(na_countperrow.lt(1))/len(na_countperrow)
print('{:.2%}' .format(row_per) + ' of the rows have no missing values.')
plt.figure(figsize=(14,10))
sns.countplot(na_countperrow)
sns.set(font_scale=1)
plt.title('NaN per Observation')
plt.show()
# Write code to divide the data into two subsets based on the number of missing
# values in each row.
split_value = 20
data_reduced = azdias_dropcols[na_countperrow < split_value]
data_nan = azdias_dropcols[na_countperrow >= split_value]
# confirm the split is correct by checking the total number of rows on both dataframes
len(data_reduced)+len(data_nan)
# Porportion of rows with a few or no missing values (<20 NaN per row)
lowna_row_per = len(data_reduced)/len(azdias_dropcols)
print('{:.2%}' .format(lowna_row_per) + ' of the rows have a few or no missing values.')
# Compare the distribution of values for at least five columns where there are
# no or few missing values, between the two subsets.
def plot_feature_dist(col_name):
fig, (ax1, ax2) = plt.subplots(1, 2, sharey=True, figsize=(16, 8))
ax1.title.set_text('Histogram from data with few missing values')
sns.countplot(data_reduced.loc[:,col_name], ax = ax1)
ax2.title.set_text('Histogram from data with high missing values')
sns.countplot(data_nan.loc[:,col_name], ax = ax2)
# Only compare features with no missing data
sample_cols = azdias_dropcols.columns[azdias_dropcols.notnull().all()].to_series().sample(5)
for col in sample_cols:
plot_feature_dist(col)
###Output
_____no_output_____
###Markdown
Discussion 1.1.3: Assess Missing Data in Each Row(Double-click this cell and replace this text with your own text, reporting your observations regarding missing data in rows. Are the data with lots of missing values are qualitatively different from data with few or no missing values?) Approximately 70% of the rows have no missing values at all. Now, by looking at the bar plot titled 'NaN Per Observation' it seems that there are a few amount of rows with clearly a higher proportion of missing values compared to the rest (the tail of the plot). Having said this, I decided to use a threshold of 20 missing values per row to split the original dataset into two new subsets. This means that ~89% of the records having a few or no missing values per row are now captured on the data_reduced dataframe while the remaining ~11% of the data having a high proportion of missing values per row are stored on the data_nan dataframe.When comparing the distributions of five "non-missing" features between the subset of data with high missing values against the other subset of data with a few missing values, I can see that these do not look similar. This means that by leaving behind those rows with a higher number of NaN values I am also potentially loosing some valuable information on some of the "non-missing" features. Since it is suggested that I continue with the subset of data with a few missing values then I will revisit this later on. Step 1.2: Select and Re-Encode FeaturesChecking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (`feat_info`) for a summary of types of measurement.- For numeric and interval data, these features can be kept without changes.- Most of the variables in the dataset are ordinal in nature. While ordinal values may technically be non-linear in spacing, make the simplifying assumption that the ordinal variables can be treated as being interval in nature (that is, kept without any changes).- Special handling may be necessary for the remaining two variable types: categorical, and 'mixed'.In the first two parts of this sub-step, you will perform an investigation of the categorical and mixed-type features and make a decision on each of them, whether you will keep, drop, or re-encode each. Then, in the last part, you will create a new data frame with only the selected and engineered columns.Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!
###Code
# How many features are there of each data type?
feat_info_dropcols.type.value_counts()
###Output
_____no_output_____
###Markdown
Step 1.2.1: Re-Encode Categorical FeaturesFor categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:- For binary (two-level) categoricals that take numeric values, you can keep them without needing to do anything.- There is one binary variable that takes on non-numeric values. For this one, you need to re-encode the values as numbers or create a dummy variable.- For multi-level categoricals (three or more values), you can choose to encode the values using multiple dummy variables (e.g. via [OneHotEncoder](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html)), or (to keep things straightforward) just drop them from the analysis. As always, document your choices in the Discussion section.
###Code
# Assess categorical variables: which are binary, which are multi-level, and
# which one needs to be re-encoded?
categorical = feat_info_dropcols[feat_info_dropcols['type'] == 'categorical']['attribute'].values
data_reduced[categorical].nunique()
binary = [x for x in categorical if data_reduced[x].nunique()==2]
multilevel = [x for x in categorical if data_reduced[x].nunique()>2]
# Identify binary feature with non-numeric values
for col in binary:
print(data_reduced[col].value_counts())
data_reduced.shape
# Re-encode categorical variable(s) to be kept in the analysis.
data_reduced.loc[:, 'OST_WEST_KZ'].replace({'W':0, 'O':1}, inplace=True)
data_reduced = pd.get_dummies(data_reduced, columns=multilevel)
data_reduced.shape
# Print the number of features after one-hot encoding
encoded = list(data_reduced.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Print encoded feature names
print(encoded)
###Output
['ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER', 'FINANZ_ANLEGER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'GREEN_AVANTGARDE', 'HEALTH_TYP', 'LP_LEBENSPHASE_FEIN', 'LP_LEBENSPHASE_GROB', 'PRAEGENDE_JUGENDJAHRE', 'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL', 'SEMIO_MAT', 'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT', 'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT', 'SEMIO_TRADV', 'SOHO_KZ', 'VERS_TYP', 'ANZ_PERSONEN', 'ANZ_TITEL', 'HH_EINKOMMEN_SCORE', 'W_KEIT_KIND_HH', 'WOHNDAUER_2008', 'ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'KONSUMNAEHE', 'MIN_GEBAEUDEJAHR', 'OST_WEST_KZ', 'WOHNLAGE', 'CAMEO_INTL_2015', 'KBA05_ANTG1', 'KBA05_ANTG2', 'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_GBZ', 'BALLRAUM', 'EWDICHTE', 'INNENSTADT', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO', 'ONLINE_AFFINITAET', 'REGIOTYP', 'KBA13_ANZAHL_PKW', 'PLZ8_ANTG1', 'PLZ8_ANTG2', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_BAUMAX', 'PLZ8_HHZ', 'PLZ8_GBZ', 'ARBEIT', 'ORTSGR_KLS9', 'RELAT_AB', 'SHOPPER_TYP_0.0', 'SHOPPER_TYP_1.0', 'SHOPPER_TYP_2.0', 'SHOPPER_TYP_3.0', 'NATIONALITAET_KZ_1.0', 'NATIONALITAET_KZ_2.0', 'NATIONALITAET_KZ_3.0', 'CAMEO_DEU_2015_1A', 'CAMEO_DEU_2015_1B', 'CAMEO_DEU_2015_1C', 'CAMEO_DEU_2015_1D', 'CAMEO_DEU_2015_1E', 'CAMEO_DEU_2015_2A', 'CAMEO_DEU_2015_2B', 'CAMEO_DEU_2015_2C', 'CAMEO_DEU_2015_2D', 'CAMEO_DEU_2015_3A', 'CAMEO_DEU_2015_3B', 'CAMEO_DEU_2015_3C', 'CAMEO_DEU_2015_3D', 'CAMEO_DEU_2015_4A', 'CAMEO_DEU_2015_4B', 'CAMEO_DEU_2015_4C', 'CAMEO_DEU_2015_4D', 'CAMEO_DEU_2015_4E', 'CAMEO_DEU_2015_5A', 'CAMEO_DEU_2015_5B', 'CAMEO_DEU_2015_5C', 'CAMEO_DEU_2015_5D', 'CAMEO_DEU_2015_5E', 'CAMEO_DEU_2015_5F', 'CAMEO_DEU_2015_6A', 'CAMEO_DEU_2015_6B', 'CAMEO_DEU_2015_6C', 'CAMEO_DEU_2015_6D', 'CAMEO_DEU_2015_6E', 'CAMEO_DEU_2015_6F', 'CAMEO_DEU_2015_7A', 'CAMEO_DEU_2015_7B', 'CAMEO_DEU_2015_7C', 'CAMEO_DEU_2015_7D', 'CAMEO_DEU_2015_7E', 'CAMEO_DEU_2015_8A', 'CAMEO_DEU_2015_8B', 'CAMEO_DEU_2015_8C', 'CAMEO_DEU_2015_8D', 'CAMEO_DEU_2015_9A', 'CAMEO_DEU_2015_9B', 'CAMEO_DEU_2015_9C', 'CAMEO_DEU_2015_9D', 'CAMEO_DEU_2015_9E', 'CAMEO_DEUG_2015_1', 'CAMEO_DEUG_2015_2', 'CAMEO_DEUG_2015_3', 'CAMEO_DEUG_2015_4', 'CAMEO_DEUG_2015_5', 'CAMEO_DEUG_2015_6', 'CAMEO_DEUG_2015_7', 'CAMEO_DEUG_2015_8', 'CAMEO_DEUG_2015_9', 'GEBAEUDETYP_1.0', 'GEBAEUDETYP_2.0', 'GEBAEUDETYP_3.0', 'GEBAEUDETYP_4.0', 'GEBAEUDETYP_5.0', 'GEBAEUDETYP_6.0', 'GEBAEUDETYP_8.0', 'LP_FAMILIE_FEIN_1.0', 'LP_FAMILIE_FEIN_2.0', 'LP_FAMILIE_FEIN_3.0', 'LP_FAMILIE_FEIN_4.0', 'LP_FAMILIE_FEIN_5.0', 'LP_FAMILIE_FEIN_6.0', 'LP_FAMILIE_FEIN_7.0', 'LP_FAMILIE_FEIN_8.0', 'LP_FAMILIE_FEIN_9.0', 'LP_FAMILIE_FEIN_10.0', 'LP_FAMILIE_FEIN_11.0', 'LP_FAMILIE_GROB_1.0', 'LP_FAMILIE_GROB_2.0', 'LP_FAMILIE_GROB_3.0', 'LP_FAMILIE_GROB_4.0', 'LP_FAMILIE_GROB_5.0', 'GFK_URLAUBERTYP_1.0', 'GFK_URLAUBERTYP_2.0', 'GFK_URLAUBERTYP_3.0', 'GFK_URLAUBERTYP_4.0', 'GFK_URLAUBERTYP_5.0', 'GFK_URLAUBERTYP_6.0', 'GFK_URLAUBERTYP_7.0', 'GFK_URLAUBERTYP_8.0', 'GFK_URLAUBERTYP_9.0', 'GFK_URLAUBERTYP_10.0', 'GFK_URLAUBERTYP_11.0', 'GFK_URLAUBERTYP_12.0', 'CJT_GESAMTTYP_1.0', 'CJT_GESAMTTYP_2.0', 'CJT_GESAMTTYP_3.0', 'CJT_GESAMTTYP_4.0', 'CJT_GESAMTTYP_5.0', 'CJT_GESAMTTYP_6.0', 'LP_STATUS_FEIN_1.0', 'LP_STATUS_FEIN_2.0', 'LP_STATUS_FEIN_3.0', 'LP_STATUS_FEIN_4.0', 'LP_STATUS_FEIN_5.0', 'LP_STATUS_FEIN_6.0', 'LP_STATUS_FEIN_7.0', 'LP_STATUS_FEIN_8.0', 'LP_STATUS_FEIN_9.0', 'LP_STATUS_FEIN_10.0', 'LP_STATUS_GROB_1.0', 'LP_STATUS_GROB_2.0', 'LP_STATUS_GROB_3.0', 'LP_STATUS_GROB_4.0', 'LP_STATUS_GROB_5.0', 
'FINANZTYP_1', 'FINANZTYP_2', 'FINANZTYP_3', 'FINANZTYP_4', 'FINANZTYP_5', 'FINANZTYP_6', 'ZABEOTYP_1', 'ZABEOTYP_2', 'ZABEOTYP_3', 'ZABEOTYP_4', 'ZABEOTYP_5', 'ZABEOTYP_6']
###Markdown
Discussion 1.2.1: Re-Encode Categorical Features(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding categorical features. Which ones did you keep, which did you drop, and what engineering steps did you perform?)As suggested, I re-encoded the binary variable OST_WEST_KZ which had non-numeric values to distinguish between East and West Germany. The rest of the binary variables remained unchanged.In terms of the multi-level categorical features, I decided to perform the one-hot encoding. There is one particular feature CAMEO_DEU_2015 that has many levels as it describes the wealth/life stage typology in more detail. Although doing a one-hot encoding will result in many additional features being generated I don't think this will present a major issue ('curse of dimensionality) since I will be applying the PCA later on.I decided not to drop any additional features at this point. Step 1.2.2: Engineer Mixed-Type FeaturesThere are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest are up to your own choices:- "PRAEGENDE_JUGENDJAHRE" combines information on three dimensions: generation by decade, movement (mainstream vs. avantgarde), and nation (east vs. west). While there aren't enough levels to disentangle east from west, you should create two new variables to capture the other two dimensions: an interval-type variable for decade, and a binary variable for movement.- "CAMEO_INTL_2015" combines information on two axes: wealth and life stage. Break up the two-digit codes by their 'tens'-place and 'ones'-place digits into two new ordinal variables (which, for the purposes of this project, is equivalent to just treating them as their raw numeric values).- If you decide to keep or engineer new features around the other mixed-type features, make sure you note your steps in the Discussion section.Be sure to check `Data_Dictionary.md` for the details needed to finish these tasks.
###Code
# Investigate "PRAEGENDE_JUGENDJAHRE" and engineer two new variables.
values = [x+1 for x in range(15)]
decades = [40, 40, 50, 50, 60, 60, 60, 70, 70, 80, 80, 80, 80, 90, 90]
generation = pd.Series(decades, index = values)
mainstream = [1, 3, 5, 8, 10, 12, 14]
avantgarde = [2, 4, 6, 7, 9, 11, 13, 15]
def get_movement(value):
if value in mainstream:
return 1
elif value in avantgarde:
return 0
else:
return value
# Genereate 2 new features
data_reduced['PRAEGENDE_JUGENDJAHRE_GEN'] = data_reduced['PRAEGENDE_JUGENDJAHRE'].map(generation)
data_reduced['PRAEGENDE_JUGENDJAHRE_MOV'] = data_reduced['PRAEGENDE_JUGENDJAHRE'].apply(get_movement)
# Quick to ensure that the conversion worked
data_reduced.loc[0:10, ('PRAEGENDE_JUGENDJAHRE','PRAEGENDE_JUGENDJAHRE_GEN','PRAEGENDE_JUGENDJAHRE_MOV')]
# Remove the original feature 'PRAEGENDE_JUGENDJAHRE'
data_reduced = data_reduced.drop('PRAEGENDE_JUGENDJAHRE', axis=1)
data_reduced.shape
# Investigate "CAMEO_INTL_2015" and engineer two new variables.
def get_wealth(value):
if pd.isnull(value):
return value
else:
return int(str(value)[0])
def get_lifestage(value):
if pd.isnull(value):
return value
else:
return int(str(value)[1])
# Genereate 2 new features
data_reduced['CAMEO_INTL_2015_WEALTH'] = data_reduced['CAMEO_INTL_2015'].apply(get_wealth)
data_reduced['CAMEO_INTL_2015_LIFESTAGE'] = data_reduced['CAMEO_INTL_2015'].apply(get_lifestage)
# Quick to ensure that the conversion worked
data_reduced.loc[0:10, ('CAMEO_INTL_2015','CAMEO_INTL_2015_WEALTH','CAMEO_INTL_2015_LIFESTAGE')]
# Remove the original feature 'CAMEO_INTL_2015'
data_reduced = data_reduced.drop('CAMEO_INTL_2015', axis=1)
data_reduced.shape
# Update the features info dataframe to drop above original mixed features
feat_info_dropcols = feat_info_dropcols.drop(['PRAEGENDE_JUGENDJAHRE','CAMEO_INTL_2015'], axis=0)
feat_info_dropcols.shape
# Review remaining mixed features
mixed = feat_info_dropcols[feat_info_dropcols['type'] == 'mixed']['attribute'].values
data_reduced[mixed].nunique()
# Investigate PLZ8_BAUMAX - Most common building type within the PLZ8 region
# 1: mainly 1-2 family homes
# 2: mainly 3-5 family homes
# 3: mainly 6-10 family homes
# 4: mainly 10+ family homes
# 5: mainly business buildings
# Convert into two new binary features to simply distinguish between family type and business type
family = [1, 2, 3, 4]
building = [5]
def get_family(value):
if pd.isnull(value):
return value
elif value in family:
return 1
else:
return 0
def get_business(value):
if pd.isnull(value):
return value
elif value in building:
return 1
else:
return 0
# Genereate 2 new features
data_reduced['PLZ8_BAUMAX_FAMILY'] = data_reduced['PLZ8_BAUMAX'].apply(get_family)
data_reduced['PLZ8_BAUMAX_BUSINESS'] = data_reduced['PLZ8_BAUMAX'].apply(get_business)
# Quick check to ensure that the conversion worked
data_reduced.loc[50:60, ('PLZ8_BAUMAX','PLZ8_BAUMAX_FAMILY','PLZ8_BAUMAX_BUSINESS')]
# Remove the original feature 'PLZ8_BAUMAX'
data_reduced = data_reduced.drop('PLZ8_BAUMAX', axis=1)
data_reduced.shape
# Investigate WOHNLAGE - Neighborhood quality (or rural flag)
# 0: no score calculated
# 1: very good neighborhood
# 2: good neighborhood
# 3: average neighborhood
# 4: poor neighborhood
# 5: very poor neighborhood
# 7: rural neighborhood
# 8: new building in rural neighborhood
data_reduced['WOHNLAGE'].value_counts()
# Initial transformation steps:
# Assign 0 values to the most frequent value (average neighborhood)
# Combine 1 and 2 into a single category - 'good neighborhood'
# Combine 4 and 5 into a single category - 'poor neighborhood'
# Combine 7 and 8 into a single category - 'rural neighborhood'
data_reduced.loc[:,'WOHNLAGE'].replace({0:3, 1:2, 5:4, 8:7}, inplace=True)
# Quick check on above replacement
data_reduced['WOHNLAGE'].value_counts()
# Finally perform one-hot encoding on the updated categories
data_reduced = pd.get_dummies(data_reduced, columns=['WOHNLAGE'])
# Investigate LP_LEBENSPHASE_FEIN -Life stage, fine scale
# Life stage, fine scale
# - 1: single low-income earners of younger age
# - 2: single low-income earners of middle age
# - 3: single average earners of younger age
# - 4: single average earners of middle age
# - 5: single low-income earners of advanced age
# - 6: single low-income earners at retirement age
# - 7: single average earners of advanced age
# - 8: single average earners at retirement age
# - 9: single independent persons
# - 10: wealthy single homeowners
# - 11: single homeowners of advanced age
# - 12: single homeowners at retirement age
# - 13: single top earners of higher age
# - 14: low-income and average earner couples of younger age
# - 15: low-income earner couples of higher age
# - 16: average earner couples of higher age
# - 17: independent couples
# - 18: wealthy homeowner couples of younger age
# - 19: homeowner couples of higher age
# - 20: top earner couples of higher age
# - 21: single parent low-income earners
# - 22: single parent average earners
# - 23: single parent high-income earners
# - 24: low-income earner families
# - 25: average earner families
# - 26: independent families
# - 27: homeowner families
# - 28: top earner families
# - 29: low-income earners of younger age from multiperson households
# - 30: average earners of younger age from multiperson households
# - 31: low-income earners of higher age from multiperson households
# - 32: average earners of higher age from multiperson households
# - 33: independent persons of younger age from multiperson households
# - 34: homeowners of younger age from multiperson households
# - 35: top earners of younger age from multiperson households
# - 36: independent persons of higher age from multiperson households
# - 37: homeowners of advanced age from multiperson households
# - 38: homeowners at retirement age from multiperson households
# - 39: top earners of middle age from multiperson households
# - 40: top earners at retirement age from multiperson households
data_reduced['LP_LEBENSPHASE_FEIN'].value_counts()
# Investigate LP_LEBENSPHASE_GROB - Life stage, rough scale
# Life stage, rough scale
# - 1: single low-income and average earners of younger age
# - 2: single low-income and average earners of higher age
# - 3: single high-income earners
# - 4: single low-income and average-earner couples
# - 5: single high-income earner couples
# - 6: single parents
# - 7: single low-income and average earner families
# - 8: high-income earner families
# - 9: average earners of younger age from multiperson households
# - 10: low-income and average earners of higher age from multiperson households
# - 11: high-income earners of younger age from multiperson households
# - 12: high-income earners of higher age from multiperson households
data_reduced['LP_LEBENSPHASE_GROB'].value_counts()
# The categories under each of the above two features LP_LEBENSPHASE_FEIN & LP_LEBENSPHASE_GROB are quite specific.
# It is difficult to perform any grouping.
# Since these 2 features are highly correlated, I will drop the one with higher number of categories and keep the other one
# for one-hot encoding.
# Drop LP_LEBENSPHASE_FEIN
data_reduced = data_reduced.drop('LP_LEBENSPHASE_FEIN', axis=1)
# One-hot encode LP_LEBENSPHASE_GROB
data_reduced = pd.get_dummies(data_reduced, columns=['LP_LEBENSPHASE_GROB'])
# Update the features info dataframe to drop above original mixed feature
feat_info_dropcols = feat_info_dropcols.drop(['LP_LEBENSPHASE_FEIN'], axis=0)
feat_info_dropcols.shape
# Print the number of features after one-hot encoding
encoded = list(data_reduced.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Print encoded feature names
print(encoded)
###Output
['ALTERSKATEGORIE_GROB', 'ANREDE_KZ', 'FINANZ_MINIMALIST', 'FINANZ_SPARER', 'FINANZ_VORSORGER', 'FINANZ_ANLEGER', 'FINANZ_UNAUFFAELLIGER', 'FINANZ_HAUSBAUER', 'GREEN_AVANTGARDE', 'HEALTH_TYP', 'RETOURTYP_BK_S', 'SEMIO_SOZ', 'SEMIO_FAM', 'SEMIO_REL', 'SEMIO_MAT', 'SEMIO_VERT', 'SEMIO_LUST', 'SEMIO_ERL', 'SEMIO_KULT', 'SEMIO_RAT', 'SEMIO_KRIT', 'SEMIO_DOM', 'SEMIO_KAEM', 'SEMIO_PFLICHT', 'SEMIO_TRADV', 'SOHO_KZ', 'VERS_TYP', 'ANZ_PERSONEN', 'ANZ_TITEL', 'HH_EINKOMMEN_SCORE', 'W_KEIT_KIND_HH', 'WOHNDAUER_2008', 'ANZ_HAUSHALTE_AKTIV', 'ANZ_HH_TITEL', 'KONSUMNAEHE', 'MIN_GEBAEUDEJAHR', 'OST_WEST_KZ', 'KBA05_ANTG1', 'KBA05_ANTG2', 'KBA05_ANTG3', 'KBA05_ANTG4', 'KBA05_GBZ', 'BALLRAUM', 'EWDICHTE', 'INNENSTADT', 'GEBAEUDETYP_RASTER', 'KKK', 'MOBI_REGIO', 'ONLINE_AFFINITAET', 'REGIOTYP', 'KBA13_ANZAHL_PKW', 'PLZ8_ANTG1', 'PLZ8_ANTG2', 'PLZ8_ANTG3', 'PLZ8_ANTG4', 'PLZ8_HHZ', 'PLZ8_GBZ', 'ARBEIT', 'ORTSGR_KLS9', 'RELAT_AB', 'SHOPPER_TYP_0.0', 'SHOPPER_TYP_1.0', 'SHOPPER_TYP_2.0', 'SHOPPER_TYP_3.0', 'NATIONALITAET_KZ_1.0', 'NATIONALITAET_KZ_2.0', 'NATIONALITAET_KZ_3.0', 'CAMEO_DEU_2015_1A', 'CAMEO_DEU_2015_1B', 'CAMEO_DEU_2015_1C', 'CAMEO_DEU_2015_1D', 'CAMEO_DEU_2015_1E', 'CAMEO_DEU_2015_2A', 'CAMEO_DEU_2015_2B', 'CAMEO_DEU_2015_2C', 'CAMEO_DEU_2015_2D', 'CAMEO_DEU_2015_3A', 'CAMEO_DEU_2015_3B', 'CAMEO_DEU_2015_3C', 'CAMEO_DEU_2015_3D', 'CAMEO_DEU_2015_4A', 'CAMEO_DEU_2015_4B', 'CAMEO_DEU_2015_4C', 'CAMEO_DEU_2015_4D', 'CAMEO_DEU_2015_4E', 'CAMEO_DEU_2015_5A', 'CAMEO_DEU_2015_5B', 'CAMEO_DEU_2015_5C', 'CAMEO_DEU_2015_5D', 'CAMEO_DEU_2015_5E', 'CAMEO_DEU_2015_5F', 'CAMEO_DEU_2015_6A', 'CAMEO_DEU_2015_6B', 'CAMEO_DEU_2015_6C', 'CAMEO_DEU_2015_6D', 'CAMEO_DEU_2015_6E', 'CAMEO_DEU_2015_6F', 'CAMEO_DEU_2015_7A', 'CAMEO_DEU_2015_7B', 'CAMEO_DEU_2015_7C', 'CAMEO_DEU_2015_7D', 'CAMEO_DEU_2015_7E', 'CAMEO_DEU_2015_8A', 'CAMEO_DEU_2015_8B', 'CAMEO_DEU_2015_8C', 'CAMEO_DEU_2015_8D', 'CAMEO_DEU_2015_9A', 'CAMEO_DEU_2015_9B', 'CAMEO_DEU_2015_9C', 'CAMEO_DEU_2015_9D', 'CAMEO_DEU_2015_9E', 'CAMEO_DEUG_2015_1', 'CAMEO_DEUG_2015_2', 'CAMEO_DEUG_2015_3', 'CAMEO_DEUG_2015_4', 'CAMEO_DEUG_2015_5', 'CAMEO_DEUG_2015_6', 'CAMEO_DEUG_2015_7', 'CAMEO_DEUG_2015_8', 'CAMEO_DEUG_2015_9', 'GEBAEUDETYP_1.0', 'GEBAEUDETYP_2.0', 'GEBAEUDETYP_3.0', 'GEBAEUDETYP_4.0', 'GEBAEUDETYP_5.0', 'GEBAEUDETYP_6.0', 'GEBAEUDETYP_8.0', 'LP_FAMILIE_FEIN_1.0', 'LP_FAMILIE_FEIN_2.0', 'LP_FAMILIE_FEIN_3.0', 'LP_FAMILIE_FEIN_4.0', 'LP_FAMILIE_FEIN_5.0', 'LP_FAMILIE_FEIN_6.0', 'LP_FAMILIE_FEIN_7.0', 'LP_FAMILIE_FEIN_8.0', 'LP_FAMILIE_FEIN_9.0', 'LP_FAMILIE_FEIN_10.0', 'LP_FAMILIE_FEIN_11.0', 'LP_FAMILIE_GROB_1.0', 'LP_FAMILIE_GROB_2.0', 'LP_FAMILIE_GROB_3.0', 'LP_FAMILIE_GROB_4.0', 'LP_FAMILIE_GROB_5.0', 'GFK_URLAUBERTYP_1.0', 'GFK_URLAUBERTYP_2.0', 'GFK_URLAUBERTYP_3.0', 'GFK_URLAUBERTYP_4.0', 'GFK_URLAUBERTYP_5.0', 'GFK_URLAUBERTYP_6.0', 'GFK_URLAUBERTYP_7.0', 'GFK_URLAUBERTYP_8.0', 'GFK_URLAUBERTYP_9.0', 'GFK_URLAUBERTYP_10.0', 'GFK_URLAUBERTYP_11.0', 'GFK_URLAUBERTYP_12.0', 'CJT_GESAMTTYP_1.0', 'CJT_GESAMTTYP_2.0', 'CJT_GESAMTTYP_3.0', 'CJT_GESAMTTYP_4.0', 'CJT_GESAMTTYP_5.0', 'CJT_GESAMTTYP_6.0', 'LP_STATUS_FEIN_1.0', 'LP_STATUS_FEIN_2.0', 'LP_STATUS_FEIN_3.0', 'LP_STATUS_FEIN_4.0', 'LP_STATUS_FEIN_5.0', 'LP_STATUS_FEIN_6.0', 'LP_STATUS_FEIN_7.0', 'LP_STATUS_FEIN_8.0', 'LP_STATUS_FEIN_9.0', 'LP_STATUS_FEIN_10.0', 'LP_STATUS_GROB_1.0', 'LP_STATUS_GROB_2.0', 'LP_STATUS_GROB_3.0', 'LP_STATUS_GROB_4.0', 'LP_STATUS_GROB_5.0', 'FINANZTYP_1', 'FINANZTYP_2', 'FINANZTYP_3', 'FINANZTYP_4', 'FINANZTYP_5', 'FINANZTYP_6', 'ZABEOTYP_1', 'ZABEOTYP_2', 
'ZABEOTYP_3', 'ZABEOTYP_4', 'ZABEOTYP_5', 'ZABEOTYP_6', 'PRAEGENDE_JUGENDJAHRE_GEN', 'PRAEGENDE_JUGENDJAHRE_MOV', 'CAMEO_INTL_2015_WEALTH', 'CAMEO_INTL_2015_LIFESTAGE', 'PLZ8_BAUMAX_FAMILY', 'PLZ8_BAUMAX_BUSINESS', 'WOHNLAGE_2.0', 'WOHNLAGE_3.0', 'WOHNLAGE_4.0', 'WOHNLAGE_7.0', 'LP_LEBENSPHASE_GROB_1.0', 'LP_LEBENSPHASE_GROB_2.0', 'LP_LEBENSPHASE_GROB_3.0', 'LP_LEBENSPHASE_GROB_4.0', 'LP_LEBENSPHASE_GROB_5.0', 'LP_LEBENSPHASE_GROB_6.0', 'LP_LEBENSPHASE_GROB_7.0', 'LP_LEBENSPHASE_GROB_8.0', 'LP_LEBENSPHASE_GROB_9.0', 'LP_LEBENSPHASE_GROB_10.0', 'LP_LEBENSPHASE_GROB_11.0', 'LP_LEBENSPHASE_GROB_12.0']
###Markdown
Discussion 1.2.2: Engineer Mixed-Type Features(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding mixed-value features. Which ones did you keep, which did you drop, and what engineering steps did you perform?)As suggested on the project, I performed the transformations on the features named PRAEGENDE_JUGENDJAHRE and CAMEO_INTL_2015. These features had a clear way of separating their mixed values in order to get new features.The next mixed feature I analyzed was PLZ8_BAUMAX - which refers to the most common building type within the PLZ8 region. On this feature, we can make a distinction between family and business building types. Although we will lose visibility on the size of the family buildings, we can still make use of some of the information contained in this feature. Two new binary variables were created.The other feature considered was WOHNLAGE - which refers to the quality of the neighborhood (or rural flag). If we look at the frequency per category value, it seems reasonable to group some of these values and then perform a one-hot encoding to create new binary variables. I decided to group some categories because they had very low frequency and if I created a binary variable out of those then I would end up with a feature having mainly 0's and a few 1's - which is not going to help that much for clustering or any sort of analyses. So, the following categories were grouped together before one-hot encoding: "good and very good", "poor and very poor", and finally "rural and new building in rural neighborhood". Also, I replaced the 'no score calculated' with the most frequent category (average neighborhood).Finally, I examined the LP_LEBENSPHASE_FEIN -life stage, fine scale- and LP_LEBENSPHASE_GROB -life stage, rough scale. The categories under these two features are quite specific and it is hard to perform any grouping or transformation on them. Having said this, I decided to perform a one-hot encoding transformation. Since these features are highly correlated (see correlation heatmap plotted in previous section), it makes sense to drop the feature with the higher number of categories and just keep the other one. This will avoid increasing the number of features in our data set unnecessarily after doing the one-hot encoding. Step 1.2.3: Complete Feature SelectionIn order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:- All numeric, interval, and ordinal type columns from the original dataset.- Binary categorical features (all numerically-encoded).- Engineered features from other multi-level categorical features and mixed features.Make sure that for any new columns that you have engineered, that you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later on the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from **the subset with few or no missing values**.
###Code
# If there are other re-engineering tasks you need to perform, make sure you
# take care of them here. (Dealing with missing data will come in step 2.1.)
# Do whatever you need to in order to ensure that the dataframe only contains
# the columns that should be passed to the algorithm functions.
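# A minimal sketch of such a check (assuming every remaining feature should now be
# numeric): list any columns that are still object-typed and would need attention.
leftover_object_cols = data_reduced.select_dtypes(include=['object']).columns.tolist()
print('Non-numeric columns still present:', leftover_object_cols)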
###Output
_____no_output_____
###Markdown
Step 1.3: Create a Cleaning FunctionEven though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.
###Code
def clean_data(df):
"""
Perform feature trimming, re-encoding, and engineering for demographics
data
INPUT: Demographics DataFrame
OUTPUT: Trimmed and cleaned demographics DataFrame
"""
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv',sep=';')
# convert string of unknown data into a list
feat_info.missing_or_unknown = feat_info.missing_or_unknown.str.strip('[]').str.split(',')
# replace unknown data with nan
for i in range(len(feat_info)):
for j in range(len(feat_info['missing_or_unknown'][i])):
if feat_info['missing_or_unknown'][i][j] not in ['','X','XX']:
feat_info['missing_or_unknown'][i][j] = int(feat_info['missing_or_unknown'][i][j])
df.loc[:, feat_info['attribute'][i]].replace(feat_info['missing_or_unknown'][i], np.nan, inplace=True)
# Remove columns based on analysis previously done (more than %25 of NaNs)
df_dropcols = df.drop(dropcolnames, axis=1)
df_nan_dropcols = df_nan_col.drop(dropcolnames, axis=0)
feat_info_dropcols = df_nan_dropcols.reset_index().merge(feat_info, how='left').set_index('index')
# Remove any rows with high proportion of NaNs per row
na_countperrow = df_dropcols.isnull().sum(axis=1)
data_reduced = df_dropcols[na_countperrow < split_value]
# Re-encode categorical features
categorical = feat_info_dropcols[feat_info_dropcols['type'] == 'categorical']['attribute'].values
multilevel = [x for x in categorical if data_reduced[x].nunique()>2]
data_reduced = pd.get_dummies(data_reduced, columns=multilevel)
# Re-encode OST_WEST_KZ
data_reduced.loc[:, 'OST_WEST_KZ'].replace({'W':0, 'O':1}, inplace=True)
# Re-encode PRAEGENDE_JUGENDJAHRE
values = [x+1 for x in range(15)]
decades = [40, 40, 50, 50, 60, 60, 60, 70, 70, 80, 80, 80, 80, 90, 90]
generation = pd.Series(decades, index = values)
mainstream = [1, 3, 5, 8, 10, 12, 14]
avantgarde = [2, 4, 6, 7, 9, 11, 13, 15]
def get_movement(value):
if value in mainstream:
return 1
elif value in avantgarde:
return 0
else:
return value
data_reduced['PRAEGENDE_JUGENDJAHRE_GEN'] = data_reduced['PRAEGENDE_JUGENDJAHRE'].map(generation)
data_reduced['PRAEGENDE_JUGENDJAHRE_MOV'] = data_reduced['PRAEGENDE_JUGENDJAHRE'].apply(get_movement)
data_reduced = data_reduced.drop('PRAEGENDE_JUGENDJAHRE', axis=1)
# Re-encode CAMEO_INTL_2015
def get_wealth(value):
if pd.isnull(value):
return value
else:
return int(str(value)[0])
def get_lifestage(value):
if pd.isnull(value):
return value
else:
return int(str(value)[1])
data_reduced['CAMEO_INTL_2015_WEALTH'] = data_reduced['CAMEO_INTL_2015'].apply(get_wealth)
data_reduced['CAMEO_INTL_2015_LIFESTAGE'] = data_reduced['CAMEO_INTL_2015'].apply(get_lifestage)
data_reduced = data_reduced.drop('CAMEO_INTL_2015', axis=1)
# Re-encode PLZ8_BAUMAX
family = [1, 2, 3, 4]
building = [5]
def get_family(value):
if pd.isnull(value):
return value
elif value in family:
return 1
else:
return 0
def get_business(value):
if pd.isnull(value):
return value
elif value in building:
return 1
else:
return 0
data_reduced['PLZ8_BAUMAX_FAMILY'] = data_reduced['PLZ8_BAUMAX'].apply(get_family)
data_reduced['PLZ8_BAUMAX_BUSINESS'] = data_reduced['PLZ8_BAUMAX'].apply(get_business)
data_reduced = data_reduced.drop('PLZ8_BAUMAX', axis=1)
# Re-encode WOHNLAGE
data_reduced.loc[:,'WOHNLAGE'].replace({0:3, 1:2, 5:4, 8:7}, inplace=True)
data_reduced = pd.get_dummies(data_reduced, columns=['WOHNLAGE'])
# Drop LP_LEBENSPHASE_FEIN
data_reduced = data_reduced.drop('LP_LEBENSPHASE_FEIN', axis=1)
# Re-encode LP_LEBENSPHASE_GROB
data_reduced = pd.get_dummies(data_reduced, columns=['LP_LEBENSPHASE_GROB'])
# Drop GEBAEUDETYP_5.0 if generated before since this is not present on the customers dataset
# See "Discussion Point" added below
if 'GEBAEUDETYP_5.0' in data_reduced:
data_reduced = data_reduced.drop('GEBAEUDETYP_5.0', axis=1)
return data_reduced
# Apply the clean_data function
df = pd.read_csv('Udacity_AZDIAS_Subset.csv',sep=';')
azdias_clean_data = clean_data(df)
# Quick check to confirm the clean_data function is giving the correct shape
azdias_clean_data.shape
# Compare with the shape of the manually cleaned dataframe (data_reduced keeps one extra feature, see discussion point below)
data_reduced.shape
azdias_clean_data.isnull().values.any()
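# Hedged sanity check: the function output should match the manually cleaned frame,
# apart from the 'GEBAEUDETYP_5.0' dummy that clean_data() drops (see discussion below).
print(set(data_reduced.columns) - set(azdias_clean_data.columns))
print(set(azdias_clean_data.columns) - set(data_reduced.columns))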
###Output
_____no_output_____
###Markdown
__Discussion Point:__ By the time I reached the point when I had to apply the above clean_data() function to the customers dataset, I noticed that the shape of the resulting customer dataframe was (115643, 209). However, I was expecting the cleaned customers dataset to have 210 features, i.e. the same number of features as in the azdias demographics dataset.After doing some investigation, I realized that this occurred because the azdias and the customers datasets don't have the same unique values for all the categorical variables. In particular, the customers dataset doesn't have a value of 5 under the GEBAEUDETYP feature - which is used to identify company buildings. This was the reason why the customers dataset had one less feature. Having said this, I decided to drop this dummy feature as it doesn't seem that relevant. To do this, I added an additional check at the end of the clean_data() function. This also explains why the shape of the data_reduced dataframe has one extra feature compared to the azdias_clean_data shape resulting from the clean_data() function. Step 2: Feature Transformation Step 2.1: Apply Feature ScalingBefore we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the [API reference page for sklearn](http://scikit-learn.org/stable/modules/classes.html) to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:- sklearn requires that data not have missing values in order for its estimators to work properly. So, before applying the scaler to your data, make sure that you've cleaned the DataFrame of the remaining missing values. This can be as simple as just removing all data points with missing data, or applying an [Imputer](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Imputer.html) to replace all missing values. You might also try a more complicated procedure where you temporarily remove missing values in order to compute the scaling parameters before re-introducing those missing values and applying imputation. Think about how much missing data you have and what possible effects each approach might have on your analysis, and justify your decision in the discussion section below.- For the actual scaling function, a [StandardScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) instance is suggested, scaling each feature to mean 0 and standard deviation 1.- For these classes, you can make use of the `.fit_transform()` method to both fit a procedure to the data as well as apply the transformation to the data at the same time. Don't forget to keep the fit sklearn objects handy, since you'll be applying them to the customer demographics data towards the end of the project.
###Code
# If you've not yet cleaned the dataset of all NaN values, then investigate and
# do that now.
imputer = Imputer()
azdias_clean_data = pd.DataFrame(imputer.fit_transform(azdias_clean_data), columns = azdias_clean_data.columns)
# Quick check to confirm there are no missing values
azdias_clean_data.isnull().values.any()
# Apply feature scaling to the general population demographics data.
scaler = StandardScaler()
azdias_scaled_data = scaler.fit_transform(azdias_clean_data)
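# Alternative sketch (optional, assumes the older sklearn API where Imputer is
# available; newer versions use SimpleImputer): bundling imputation and scaling in
# one Pipeline keeps both fitted steps together for re-use on the customers data.
from sklearn.pipeline import Pipeline
preprocess = Pipeline([('impute', Imputer(strategy='mean')),
                       ('scale', StandardScaler())])
azdias_scaled_alt = preprocess.fit_transform(azdias_clean_data)
# The result should match the two-step version above.
print(np.allclose(azdias_scaled_alt, azdias_scaled_data))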
###Output
_____no_output_____
###Markdown
Discussion 2.1: Apply Feature Scaling(Double-click this cell and replace this text with your own text, reporting your decisions regarding feature scaling.)I decided to use the Imputer() function to replace all missing values and then to apply the StandardScaler(). Step 2.2: Perform Dimensionality ReductionOn your scaled data, you are now ready to apply dimensionality reduction techniques.- Use sklearn's [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) class to apply principal component analysis on the data, thus finding the vectors of maximal variance in the data. To start, you should not set any parameters (so all components are computed) or set a number of components that is at least half the number of features (so there's enough features to see the general trend in variability).- Check out the ratio of variance explained by each principal component as well as the cumulative variance explained. Try plotting the cumulative or sequential values using matplotlib's [`plot()`](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.plot.html) function. Based on what you find, select a value for the number of transformed features you'll retain for the clustering part of the project.- Once you've made a choice for the number of components to keep, make sure you re-fit a PCA instance to perform the decided-on transformation.
###Code
# Apply PCA to the data.
pca = PCA()
azdias_pca = pca.fit_transform(azdias_scaled_data)
# Investigate the variance accounted for by each principal component.
# Below code was adapted from the lessons
def scree_plot(pca, comp=None, show_labels=False):
'''
Creates a scree plot associated with the principal components
INPUT: pca - the result of instantian of PCA in scikit learn
OUTPUT:
None
'''
# num_components=len(pca.explained_variance_ratio_)
# ind = np.arange(num_components)
# vals = pca.explained_variance_ratio_
if comp:
num_components=comp
ind = np.arange(num_components)
vals = pca.explained_variance_ratio_[:num_components]
else:
num_components=len(pca.explained_variance_ratio_)
ind = np.arange(num_components)
vals = pca.explained_variance_ratio_
plt.figure(figsize=(18, 8))
ax = plt.subplot(111)
cumvals = np.cumsum(vals)
ax.bar(ind, vals)
ax.plot(ind, cumvals)
if show_labels:
for i in range(num_components):
ax.annotate(r"%s%%" % ((str(vals[i]*100)[:4])), (ind[i]+0.05, vals[i])
,va="bottom"
,ha="center"
,fontsize=12
,rotation=90
)
ax.xaxis.set_tick_params(width=0)
ax.yaxis.set_tick_params(width=2, length=12)
ax.set_xlabel("Principal Component")
ax.set_ylabel("Variance Explained (%)")
plt.title('Explained Variance Per Principal Component')
scree_plot(pca, show_labels=False)
scree_plot(pca, comp=70, show_labels=True)
## Keep only those components that explained at least 0.5% of the variability on the original data
var_threshold = 0.005
num_comps = max(np.where(pca.explained_variance_ratio_ >= var_threshold)[0]) + 1
# What is the total variability explained by those components?
var_explained_per = pca.explained_variance_ratio_[:num_comps].sum()
print('The first {} components explain {:.2%} of the total variability on the original data.' .format(num_comps, var_explained_per))
# Re-apply PCA to the data while selecting for number of components to retain.
pca = PCA(n_components=num_comps)
azdias_pca = pca.fit_transform(azdias_scaled_data)
# Quick check on the new pca
pca.components_.shape
print('{:.2%} of the total variability explained.' .format(pca.explained_variance_ratio_.sum()))
###Output
64.90% of the total variability explained.
###Markdown
Discussion 2.2: Perform Dimensionality Reduction(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding dimensionality reduction. How many principal components / transformed features are you retaining for the next step of the analysis?)If we look at the scree plot we can identify a threshold after which the variability explained by each component is too small to justify keeping it. From the initial scree plot where all components were plotted, it seems that approximately after the 150th component the percentage of the cumulative variability explained is increasing at a very slow rate. This would suggest that it is not worthwhile to keep any more than 150 components. However, we should also notice that we still have a very long tail of components explaining a rather small percentage of the variability (less than 1%) so I would think that many of those components would not contribute that much to our clustering analysis. Having said this, I decided to choose a threshold of 0.5% on the variability explained by each component. This means that any component that explains or holds less than 0.5% of the original data variability will be discarded.Based on the above approach, I decided to keep the first 59 components which explain a total of ~65% of the variability on the original data. Step 2.3: Interpret Principal ComponentsNow that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. To contrast, features with different signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.- To investigate the features, you should map each weight to their corresponding feature name, then sort the features according to weight. The most interesting features for each principal component, then, will be those at the beginning and end of the sorted list. Use the data dictionary document to help you understand these most prominent features, their relationships, and what a positive or negative value on the principal component might indicate.- You should investigate and interpret feature associations from the first three principal components in this substep. To help facilitate this, you should write a function that you can call at any time to print the sorted list of feature weights, for the *i*-th principal component. This might come in handy in the next step of the project, when you interpret the tendencies of the discovered clusters.
###Code
# HINT: Try defining a function here or in a new cell that you can reuse in the
# other cells.
# Again code below was adapted from the lessons
def pca_results(data, pca, k_comp, top_k_weights):
'''
Create a DataFrame of the PCA results
Includes dimension feature weights and explained variance
Visualizes the PCA results
'''
if k_comp <= len(pca.components_) and k_comp > 0 and top_k_weights <= azdias_pca.shape[1]:
# PCA components
components = pd.DataFrame(np.round(pca.components_, 4), columns = data.keys()).iloc[k_comp-1]
components.sort_values(ascending=False, inplace=True)
# Return a concatenated DataFrame with top_k_weights
df = pd.concat([components.head(top_k_weights),components.tail(top_k_weights)])
# PCA explained variance
#ratios = pca.explained_variance_ratio_.reshape(len(pca.components_), 1)[k_comp-1]
#variance_ratios = pd.DataFrame(np.round(ratios, 4), columns = ['Explained Variance'])
#variance_ratios.index = dimensions
# Create a bar plot visualization
fig, ax = plt.subplots(figsize = (14,8))
# Plot the feature weights as a function of the components
df.plot(ax = ax, kind = 'bar');
ax.set_ylabel("Feature Weights")
ax.set_xticklabels(df.index, rotation=90)
plt.title('Component {} explains {:.2%} of the original data variability'.format(k_comp, pca.explained_variance_ratio_[k_comp-1]))
return df
else:
print('That is not the right input, please review the number of components and features.')
# Map weights for the first principal component to corresponding feature names
# and then print the linked values, sorted by weight.
pca_results(azdias_clean_data, pca, k_comp=1, top_k_weights=10)
# Map weights for the second principal component to corresponding feature names
# and then print the linked values, sorted by weight.
pca_results(azdias_clean_data, pca, k_comp=2, top_k_weights=10)
# Map weights for the third principal component to corresponding feature names
# and then print the linked values, sorted by weight.
pca_results(azdias_clean_data, pca, k_comp=3, top_k_weights=10)
###Output
_____no_output_____
###Markdown
Discussion 2.3: Interpret Principal Components(Double-click this cell and replace this text with your own text, reporting your observations from detailed investigation of the first few principal components generated. Can we interpret positive and negative values from them in a meaningful way?)__Component 1__ The first component seems to be more influenced by features related to characteristics of the area where people live. Higher values of this component would represent areas with a larger population, lower income, poor neighborhoods, high percentage of people per household, close to the cities and points of sale, possibly more residential areas(1), and with a high movement pattern. Also, it seems that people living in these areas don't own a property and they are quite concerned about getting low interest rates. In summary, this component seems to represent __areas with poor/low-income, high-density population, close to city centre and close to shops.__(1) The assumption that this component is more related to residential areas comes from a negative correlation with the feature PLZ8_GBZ - Number of buildings within the PLZ8 region. The higher the number of buildings, the lower the value of this component becomes. This seems to contradict to some extent how other features are weighted on this component, where the component increases for densely populated areas close to cities. So, I am not sure how to interpret the weight on this specific feature.__Component 2__ The second component is mainly influenced by personal features. Typically, this component would represent older people who like to be prepared. They are money savers although they may look into investment options. They are traditional people when it comes to buying and they don't seem to shop online and they are not crazy shoppers. They are rather lower or average income earners and mainly single. They are traditional, dutiful, religious and rational people.In summary, this component seems to represent __traditional and conservative people.__ __Component 3__ The third component is also influenced mainly by personal features. This component would represent male people with a rather combative attitude, dominant and critical minded. They are not religious, dreamful, family oriented, nor socially minded. They do care about investments. Generally speaking, they don't care that much about shopping although they do care about ecology and the environment. They are considered avantgarde.In summary, this component seems to represent __male, combative and dominant people.__ Step 3: Clustering Step 3.1: Apply Clustering to General PopulationYou've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.- Use sklearn's [KMeans](http://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans) class to perform k-means clustering on the PCA-transformed data.- Then, compute the average difference from each point to its assigned cluster's center. **Hint**: The KMeans object's `.score()` method might be useful here, but note that in sklearn, scores tend to be defined so that larger is better. 
Try applying it to a small, toy dataset, or use an internet search to help your understanding.- Perform the above two steps for a number of different cluster counts. You can then see how the average distance decreases with an increasing number of clusters. However, each additional cluster provides a smaller net benefit. Use this fact to select a final number of clusters in which to group the data. **Warning**: because of the large size of the dataset, it can take a long time for the algorithm to resolve. The more clusters to fit, the longer the algorithm will take. You should test for cluster counts through at least 10 clusters to get the full picture, but you shouldn't need to test for a number of clusters above about 30.- Once you've selected a final number of clusters to use, re-fit a KMeans instance to perform the clustering operation. Make sure that you also obtain the cluster assignments for the general demographics data, since you'll be using them in the final Step 3.3.
###Code
# Over a number of different cluster counts...
# run k-means clustering on the data and...
# compute the average within-cluster distances.
# Below code is adapted from the lessons
def get_kmeans_scores(data, centers):
scores = []
for center in centers:
#instantiate kmeans
kmeans = MiniBatchKMeans(n_clusters=center)
# Then fit the model to your data using the fit method
model = kmeans.fit(data)
# Obtain a score related to the model fit
score = np.abs(model.score(data))
scores.append(score)
return centers, scores
centers = list(range(1,36))
centers, scores = get_kmeans_scores(azdias_pca, centers)
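# Hedged toy illustration of why np.abs() is taken above: sklearn's KMeans.score()
# returns the *negative* within-cluster sum of squared distances (larger is better),
# so its absolute value is the quantity we want to see decrease.
toy_points = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
toy_kmeans = KMeans(n_clusters=2, random_state=0).fit(toy_points)
print(toy_kmeans.score(toy_points))  # -1.0 for this toy set (negative inertia)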
# Investigate the change in within-cluster distance across number of clusters.
# HINT: Use matplotlib's plot function to visualize this relationship.
plt.figure(figsize=(12, 8))
plt.plot(centers, scores, linestyle='--', marker='o', color='b')
plt.xlabel('Number of clusters')
plt.ylabel('Average distance from centroid')
plt.title('Change in within-cluster distance across number of clusters')
plt.show()
# Re-fit the k-means model with the selected number of clusters and obtain
# cluster predictions for the general population demographics data.
k = 20
kmeans = KMeans(n_clusters=k)
azdias_kmeans = kmeans.fit_predict(azdias_pca)
###Output
_____no_output_____
###Markdown
Discussion 3.1: Apply Clustering to General Population(Double-click this cell and replace this text with your own text, reporting your findings and decisions regarding clustering. Into how many clusters have you decided to segment the population?)After computing the average within-cluster distances, it seems that they start to level-off somewhere between 20-25 clusters.I decided to continue the analysis with a total of 20 clusters. Step 3.2: Apply All Steps to the Customer DataNow that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.- Don't forget when loading in the customers data, that it is semicolon (`;`) delimited.- Apply the same feature wrangling, selection, and engineering steps to the customer demographics using the `clean_data()` function you created earlier. (You can assume that the customer demographics data has similar meaning behind missing data patterns as the general demographics data.)- Use the sklearn objects from the general demographics data, and apply their transformations to the customers data. That is, you should not be using a `.fit()` or `.fit_transform()` method to re-fit the old objects, nor should you be creating new sklearn objects! Carry the data through the feature scaling, PCA, and clustering steps, obtaining cluster assignments for all of the data in the customer demographics data.
###Code
# Load in the customer demographics data.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv',sep=';')
customers.shape
# Apply preprocessing, feature transformation, and clustering from the general
# demographics onto the customer data, obtaining cluster predictions for the
# customer demographics data.
# Apply the clean_data function
customers_clean_data = clean_data(customers)
# Quick check to ensure that we have the same shape as the demographics dataset
customers_clean_data.shape
# Which columns are on the azdias subset which are missing on the customers dataset?
#azdias_clean_data.columns.difference(customers_clean_data.columns)
# Which columns are on the customers subset which are missing on the azdias dataset?
#customers_clean_data.columns.difference(azdias_clean_data.columns)
# Note: It seems that on the customer dataset there were no records under the GEBAEUDETYP feature with a value of 5.
# This means that the get_dummies function will return one less feature on the customers_clean_data set
# when compared to the demographics data azdias_clean_data.
# Quick check to confirm above
#customers['GEBAEUDETYP'].loc[customers['GEBAEUDETYP']==5]
###Output
_____no_output_____
###Markdown
__Discussion Point:__ The above discrepancy in the number of features was solved by removing the dummy feature 'GEBAEUDETYP_5.0' from the azdias_clean_data dataframe. If I had not taken care of this difference in the features of each dataset then I wouldn't have been able to apply the same scaler and PCA fitted models I derived from the azdias dataset to the customers dataset. So, I decided to go back, amend the clean_data() function so that the dummy feature 'GEBAEUDETYP_5.0' is dropped from the azdias_clean_data dataframe and re-run all steps to get correct fitted models for the azdias dataset that I can apply to the customers dataset as well.
###Code
# Remove NaN with Imputer()
customers_clean_data = pd.DataFrame(imputer.transform(customers_clean_data), columns = customers_clean_data.columns)
# Quick check to confirm there are no missing values
customers_clean_data.isnull().values.any()
# Scale the customer data
customers_scaled_data = scaler.transform(customers_clean_data)
# Apply PCS on the customer data
customers_pca = pca.transform(customers_scaled_data)
# Identify clusters on the customer data
customers_kmeans = kmeans.predict(customers_pca)
###Output
_____no_output_____
###Markdown
Step 3.3: Compare Customer Data to Demographics DataAt this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If we think the company's customer base to be universal, then the cluster assignment proportions should be fairly similar between the two. If there are only particular segments of the population that are interested in the company's products, then we should see a mismatch from one to the other. If there is a higher proportion of persons in a cluster for the customer data compared to the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid) then that suggests the people in that cluster to be a target audience for the company. On the other hand, the proportion of the data in a cluster being larger in the general population than the customer data (e.g. only 2% of customers closest to a population centroid that captures 6% of the data) suggests that group of persons to be outside of the target demographics.Take a look at the following points in this step:- Compute the proportion of data points in each cluster for the general population and the customer data. Visualizations will be useful here: both for the individual dataset proportions, but also to visualize the ratios in cluster representation between groups. Seaborn's [`countplot()`](https://seaborn.pydata.org/generated/seaborn.countplot.html) or [`barplot()`](https://seaborn.pydata.org/generated/seaborn.barplot.html) function could be handy. - Recall the analysis you performed in step 1.1.3 of the project, where you separated out certain data points from the dataset if they had more than a specified threshold of missing values. If you found that this group was qualitatively different from the main bulk of the data, you should treat this as an additional data cluster in this analysis. Make sure that you account for the number of data points in this subset, for both the general population and customer datasets, when making your computations!- Which cluster or clusters are overrepresented in the customer dataset compared to the general population? Select at least one such cluster and infer what kind of people might be represented by that cluster. Use the principal component interpretations from step 2.3 or look at additional components to help you make this inference. Alternatively, you can use the `.inverse_transform()` method of the PCA and StandardScaler objects to transform centroids back to the original data space and interpret the retrieved values directly.- Perform a similar investigation for the underrepresented clusters. Which cluster or clusters are underrepresented in the customer dataset compared to the general population, and what kinds of people are typified by these clusters?
###Code
# Proportion of rows with few or no missing values per row (<20 NaN per row)
customers_lowna_row_per = len(customers_clean_data)/len(customers)
print('{:.2%}' .format(customers_lowna_row_per) + ' of the rows on the customers data have a few or no missing values.')
# Compare the proportion of data in each cluster for the customer data to the
# proportion of data in each cluster for the general population.
azdias_clusters = pd.Series(azdias_kmeans)
azdias_clusters_count = azdias_clusters.value_counts().sort_index()
azdias_clusters_count = pd.Series(azdias_clusters_count)
azdias_clusters_count
customers_clusters = pd.Series(customers_kmeans)
customers_clusters_count = customers_clusters.value_counts().sort_index()
customers_clusters_count = pd.Series(customers_clusters_count)
customers_clusters_count
clusters = pd.concat([azdias_clusters_count, customers_clusters_count], axis=1).reset_index()
clusters.columns = ['ClusterID', 'AzdiasCount', 'CustomersCount']
clusters
clusters['AzdiasPercent'] = clusters['AzdiasCount'] / clusters['AzdiasCount'].sum()
clusters['CustomersPercent'] = clusters['CustomersCount'] / clusters['CustomersCount'].sum()
clusters['PercentDiff'] = clusters['CustomersPercent'] - clusters['AzdiasPercent']
clusters.sort_values('PercentDiff')
clusters.plot(x='ClusterID', y=['AzdiasPercent', 'CustomersPercent'], kind='bar', figsize=(18, 8))
plt.title('Azdias vs. Customers Clusters Comparison')
plt.ylabel('Proportion of people in each cluster')
plt.show()
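# Hedged helper: rank clusters by the proportion difference computed above, so the
# over-/under-represented clusters are explicit (positive = overrepresented among
# customers, negative = underrepresented).
print('Most overrepresented clusters:')
print(clusters.nlargest(3, 'PercentDiff')[['ClusterID', 'PercentDiff']])
print('Most underrepresented clusters:')
print(clusters.nsmallest(3, 'PercentDiff')[['ClusterID', 'PercentDiff']])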
# Get the original features values from the centroids of each clusters
clusters_specs = pd.DataFrame(scaler.inverse_transform(pca.inverse_transform(kmeans.cluster_centers_)), columns=azdias_clean_data.columns)
# Quick check to ensure I got the correct shape
clusters_specs.shape
# Filter out only personal features to facilitate identifying what kind of people are part of a cluster
relevant_personal_features = [
'ALTERSKATEGORIE_GROB'
, 'ANREDE_KZ'
, 'CJT_GESAMTTYP_1.0'
, 'CJT_GESAMTTYP_2.0'
, 'CJT_GESAMTTYP_3.0'
, 'CJT_GESAMTTYP_4.0'
, 'CJT_GESAMTTYP_5.0'
, 'CJT_GESAMTTYP_6.0'
, 'FINANZ_MINIMALIST'
, 'FINANZ_SPARER'
, 'FINANZ_VORSORGER'
, 'FINANZ_ANLEGER'
, 'FINANZ_UNAUFFAELLIGER'
, 'FINANZ_HAUSBAUER'
, 'GFK_URLAUBERTYP_1.0'
, 'GFK_URLAUBERTYP_2.0'
, 'GFK_URLAUBERTYP_3.0'
, 'GFK_URLAUBERTYP_4.0'
, 'GFK_URLAUBERTYP_5.0'
, 'GFK_URLAUBERTYP_6.0'
, 'GFK_URLAUBERTYP_7.0'
, 'GFK_URLAUBERTYP_8.0'
, 'GFK_URLAUBERTYP_9.0'
, 'GFK_URLAUBERTYP_10.0'
, 'GFK_URLAUBERTYP_11.0'
, 'GFK_URLAUBERTYP_12.0'
, 'GREEN_AVANTGARDE'
, 'LP_LEBENSPHASE_GROB_1.0'
, 'LP_LEBENSPHASE_GROB_2.0'
, 'LP_LEBENSPHASE_GROB_3.0'
, 'LP_LEBENSPHASE_GROB_4.0'
, 'LP_LEBENSPHASE_GROB_5.0'
, 'LP_LEBENSPHASE_GROB_6.0'
, 'LP_LEBENSPHASE_GROB_7.0'
, 'LP_LEBENSPHASE_GROB_8.0'
, 'LP_LEBENSPHASE_GROB_9.0'
, 'LP_LEBENSPHASE_GROB_10.0'
, 'LP_LEBENSPHASE_GROB_11.0'
, 'LP_LEBENSPHASE_GROB_12.0'
, 'LP_FAMILIE_GROB_1.0'
, 'LP_FAMILIE_GROB_2.0'
, 'LP_FAMILIE_GROB_3.0'
, 'LP_FAMILIE_GROB_4.0'
, 'LP_FAMILIE_GROB_5.0'
, 'PRAEGENDE_JUGENDJAHRE_GEN'
, 'PRAEGENDE_JUGENDJAHRE_MOV'
, 'RETOURTYP_BK_S'
, 'SEMIO_SOZ'
, 'SEMIO_FAM'
, 'SEMIO_REL'
, 'SEMIO_MAT'
, 'SEMIO_VERT'
, 'SEMIO_LUST'
, 'SEMIO_ERL'
, 'SEMIO_KULT'
, 'SEMIO_RAT'
, 'SEMIO_KRIT'
, 'SEMIO_DOM'
, 'SEMIO_KAEM'
, 'SEMIO_PFLICHT'
, 'SEMIO_TRADV'
, 'SHOPPER_TYP_0.0'
, 'SHOPPER_TYP_1.0'
, 'SHOPPER_TYP_2.0'
, 'SHOPPER_TYP_3.0'
, 'ZABEOTYP_1'
, 'ZABEOTYP_2'
, 'ZABEOTYP_3'
, 'ZABEOTYP_4'
, 'ZABEOTYP_5'
, 'ZABEOTYP_6'
]
clusters_specs_reduced = clusters_specs.T
clusters_specs_reduced['attribute'] = clusters_specs_reduced.index
clusters_specs_reduced = clusters_specs_reduced[clusters_specs_reduced['attribute'].isin(relevant_personal_features)]
clusters_specs_reduced.shape
# What kinds of people are part of a cluster that is overrepresented in the
# customer data compared to the general population?
# I will extract original feature values for a selected group of personal features for cluster 7
pd.set_option('display.max_rows', 100)
clusters_specs_reduced[7]
# What kinds of people are part of a cluster that is underrepresented in the
# customer data compared to the general population?
# I will extract original feature values for a selected group of personal features for cluster 5
clusters_specs_reduced[5]
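# Hedged side-by-side view (assumes clusters 7 and 5 as picked above): putting the two
# centroid profiles next to each other and sorting by their gap highlights the personal
# features on which the over- and under-represented groups differ most.
profile_compare = clusters_specs_reduced[[7, 5, 'attribute']].copy()
profile_compare['diff_7_minus_5'] = profile_compare[7] - profile_compare[5]
print(profile_compare.sort_values('diff_7_minus_5', ascending=False).head(10))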
###Output
_____no_output_____ |
Custom and Distributed Training with TensorFlow/Week1/C2W1_Assignment.ipynb | ###Markdown
Basic Tensor operations and GradientTape.In this graded assignment, you will perform different tensor operations as well as use [GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape). These are important building blocks for the next parts of this course so it's important to master the basics. Let's begin!
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
Exercise 1 - [tf.constant](https://www.tensorflow.org/api_docs/python/tf/constant)Creates a constant tensor from a tensor-like object.
###Code
# Convert NumPy array to Tensor using `tf.constant`
def tf_constant(array):
"""
Args:
array (numpy.ndarray): tensor-like array.
Returns:
tensorflow.python.framework.ops.EagerTensor: tensor.
"""
### START CODE HERE ###
tf_constant_array = tf.constant(array)
### END CODE HERE ###
return tf_constant_array
tmp_array = np.arange(1,10)
x = tf_constant(tmp_array)
x
# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([1, 2, 3, 4, 5, 6, 7, 8, 9])>
###Output
_____no_output_____
###Markdown
Note that for future docstrings, the type `EagerTensor` will be used as a shortened version of `tensorflow.python.framework.ops.EagerTensor`. Exercise 2 - [tf.square](https://www.tensorflow.org/api_docs/python/tf/math/square)Computes the square of a tensor element-wise.
###Code
# Square the input tensor
def tf_square(array):
"""
Args:
array (numpy.ndarray): tensor-like array.
Returns:
EagerTensor: tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_squared_array = tf.square(array)
### END CODE HERE ###
return tf_squared_array
tmp_array = tf.constant(np.arange(1, 10))
x = tf_square(tmp_array)
x
# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([ 1, 4, 9, 16, 25, 36, 49, 64, 81])>
###Output
_____no_output_____
###Markdown
Exercise 3 - [tf.reshape](https://www.tensorflow.org/api_docs/python/tf/reshape)Reshapes a tensor.
###Code
# Reshape tensor into the given shape parameter
def tf_reshape(array, shape):
"""
Args:
array (EagerTensor): tensor to reshape.
shape (tuple): desired shape.
Returns:
EagerTensor: reshaped tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_reshaped_array = tf.reshape(array, shape = shape)
### END CODE HERE ###
return tf_reshaped_array
# Check your function
tmp_array = np.array([1,2,3,4,5,6,7,8,9])
# Check that your function reshapes a vector into a matrix
x = tf_reshape(tmp_array, (3, 3))
x
# Expected output:
# <tf.Tensor: shape=(3, 3), dtype=int64, numpy=
# [[1, 2, 3],
# [4, 5, 6],
# [7, 8, 9]]
###Output
_____no_output_____
###Markdown
Exercise 4 - [tf.cast](https://www.tensorflow.org/api_docs/python/tf/cast)Casts a tensor to a new type.
###Code
# Cast tensor into the given dtype parameter
def tf_cast(array, dtype):
"""
Args:
array (EagerTensor): tensor to be casted.
dtype (tensorflow.python.framework.dtypes.DType): desired new type. (Should be a TF dtype!)
Returns:
EagerTensor: casted tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_cast_array = tf.cast(array, dtype = dtype)
### END CODE HERE ###
return tf_cast_array
# Check your function
tmp_array = [1,2,3,4]
x = tf_cast(tmp_array, tf.float32)
x
# Expected output:
# <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>
###Output
_____no_output_____
###Markdown
Exercise 5 - [tf.multiply](https://www.tensorflow.org/api_docs/python/tf/multiply)Returns an element-wise x * y.
###Code
# Multiply tensor1 and tensor2
def tf_multiply(tensor1, tensor2):
"""
Args:
tensor1 (EagerTensor): a tensor.
tensor2 (EagerTensor): another tensor.
Returns:
EagerTensor: resulting tensor.
"""
# make sure these are tensors
tensor1 = tf.constant(tensor1)
tensor2 = tf.constant(tensor2)
### START CODE HERE ###
product = tf.multiply(tensor1, tensor2)
### END CODE HERE ###
return product
# Check your function
tmp_1 = tf.constant(np.array([[1,2],[3,4]]))
tmp_2 = tf.constant(np.array(2))
result = tf_multiply(tmp_1, tmp_2)
result
# Expected output:
# <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
# array([[2, 4],
# [6, 8]])>
###Output
_____no_output_____
###Markdown
Exercise 6 - [tf.add](https://www.tensorflow.org/api_docs/python/tf/add)Returns x + y element-wise.
###Code
# Add tensor1 and tensor2
def tf_add(tensor1, tensor2):
"""
Args:
tensor1 (EagerTensor): a tensor.
tensor2 (EagerTensor): another tensor.
Returns:
EagerTensor: resulting tensor.
"""
# make sure these are tensors
tensor1 = tf.constant(tensor1)
tensor2 = tf.constant(tensor2)
### START CODE HERE ###
total = tf.add(tensor1, tensor2)
### END CODE HERE ###
return total
# Check your function
tmp_1 = tf.constant(np.array([1, 2, 3]))
tmp_2 = tf.constant(np.array([4, 5, 6]))
tf_add(tmp_1, tmp_2)
# Expected output:
# <tf.Tensor: shape=(3,), dtype=int64, numpy=array([5, 7, 9])>
###Output
_____no_output_____
###Markdown
Exercise 7 - Gradient TapeImplement the function `tf_gradient_tape` by replacing the instances of `None` in the code below. The instructions are given in the code comments.You can review the [docs](https://www.tensorflow.org/api_docs/python/tf/GradientTape) or revisit the lectures to complete this task.
###Code
def tf_gradient_tape(x):
"""
Args:
x (EagerTensor): a tensor.
Returns:
EagerTensor: Derivative of z with respect to the input tensor x.
"""
with tf.GradientTape() as t:
### START CODE HERE ###
# Record the actions performed on tensor x with `watch`
t.watch(x)
# Define a polynomial of form 3x^3 - 2x^2 + x
y = 3 * (x ** 3) - 2 * (x ** 2) + x
# Obtain the sum of the elements in variable y
z = tf.reduce_sum(y)
# Get the derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
### END CODE HERE
return dz_dx
# Check your function
tmp_x = tf.constant(2.0)
dz_dx = tf_gradient_tape(tmp_x)
result = dz_dx.numpy()
result
# Expected output:
# 29.0
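# Worked check of the expected value: z = 3x^3 - 2x^2 + x, so dz/dx = 9x^2 - 4x + 1,
# and at x = 2.0 this gives 9*4 - 8 + 1 = 29.0, matching the GradientTape result.
assert np.isclose(result, 9 * 2.0**2 - 4 * 2.0 + 1)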
###Output
_____no_output_____ |
tensorflow/Pingan-TrainingPipeline.ipynb | ###Markdown
This notebook aims to use transfer learning to improve the accuracy of image classification. The Inception model (https://arxiv.org/abs/1512.00567) was developed at Google to provide state-of-the-art performance on the ImageNet Large-Scale Visual Recognition Challenge.The code developed here is adapted from https://github.com/Hvass-Labs/TensorFlow-Tutorials.
###Code
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
import os
import time
from datetime import timedelta
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Load the Data
###Code
data_root = './data/'
pickle_file = os.path.join(data_root, 'data.pickle')
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
###Output
Training set (395, 128, 128, 3) (395,)
Validation set (99, 128, 128, 3) (99,)
###Markdown
Reformat into a TensorFlow-friendly shape. The Inception model needs pixel values between 0 and 255.
###Code
width = 128
height = 128
num_labels = 5 # number of class labels
num_channels = 3
def reformat(dataset, labels):
dataset = dataset.reshape((-1, height, width, num_channels)).astype(np.float32)
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))/predictions.shape[0])
###Output
_____no_output_____
###Markdown
Download and Load the Inception Model* The transfer values are obtained by passing the images into the Inception model.* These transfer values are then used as the input data to the new classifier.* This step will take a while, depending on the data size.
###Code
import inception
inception.maybe_download() # download incpetion model
model = inception.Inception() # load the Inception model
from inception import transfer_values_cache
file_path_cache_train = os.path.join(data_root, 'inception_train.pkl')
file_path_cache_validate = os.path.join(data_root, 'inception_validate.pkl')
print("Processing Inception transfer-values for training-images ...")
transfer_values_train = transfer_values_cache(cache_path=file_path_cache_train,
images=train_dataset,
model=model)
print("Processing Inception transfer-values for validate-images ...")
transfer_values_validate = transfer_values_cache(cache_path=file_path_cache_validate,
images=valid_dataset,
model=model)
print (transfer_values_train.shape)
print (transfer_values_validate.shape)
###Output
Downloading Inception v3 Model ...
Data has apparently already been downloaded and unpacked.
Processing Inception transfer-values for training-images ...
- Processing image: 395 / 395
- Data saved to cache-file: ./blob_data3/inception_train.pkl
Processing Inception transfer-values for validate-images ...
- Processing image: 99 / 99
- Data saved to cache-file: ./blob_data3/inception_validate.pkl
(395, 2048)
(99, 2048)
###Markdown
Plot the Input Image and Transfer-values for the Image using Inception Model
###Code
def plot_transfer_values(transfer_values, images, i):
print("Input image:")
# Plot the i'th image from the images dataset.
plt.imshow(images[i]/255, interpolation='nearest')
plt.show()
print("Transfer-values for the image using Inception model:")
# Transform the transfer-values into an image.
img = transfer_values[i]
img = img.reshape((32, 64))
# Plot the image for the transfer-values.
plt.imshow(img, interpolation='nearest', cmap='Reds')
plt.show()
plot_transfer_values(transfer_values_validate,valid_dataset,17)
###Output
Input image:
###Markdown
Build a new Classifier. Build a fully connected neural network as the last layer.
###Code
transfer_len = int(model.transfer_len) # reuse the Inception model already loaded above instead of re-loading it
num_hidden_nodes = 128 #set the num of hidden nodes in the fully connected neural network
batch_size = 16
graph = tf.Graph()
with graph.as_default():
x = tf.placeholder(tf.float32, shape=[None, transfer_len], name='x')
y_true = tf.placeholder(tf.float32, shape=[None, num_labels], name='y')
tf_valid_dataset = tf.constant(transfer_values_validate)
w1 = tf.Variable(tf.random_normal([transfer_len,num_hidden_nodes], stddev = 0.03), name = 'w1')
b1 = tf.Variable(tf.random_normal([num_hidden_nodes]),name = 'b1')
w2 = tf.Variable(tf.random_normal([num_hidden_nodes, num_labels], stddev = 0.03), name = 'w2')
b2 = tf.Variable(tf.random_normal([num_labels]),name = 'b2')
def model(x):
hidden_out = tf.add(tf.matmul(x,w1),b1)
hidden_out = tf.nn.relu(hidden_out)
return tf.add(tf.matmul(hidden_out,w2),b2)
# Training computation
logits = model(x)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits))
# Optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01).minimize(loss)
# Predictions for the training, and validation
train_prediction = tf.nn.softmax(logits, name='restore_model')
valid_prediction = tf.nn.softmax(model(tf_valid_dataset))
# Save the model for future scoring
saver = tf.train.Saver()
num_steps = 1001
num_epochs = 10
session = tf.Session()
session.run(tf.global_variables_initializer())
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
for ite in range(num_epochs):
permutation = np.random.permutation(train_labels.shape[0])
shuffled_dataset = transfer_values_train[permutation,:]
shuffled_labels = train_labels[permutation]
for step in range(num_steps):
offset = (step * batch_size) % (shuffled_labels.shape[0] - batch_size)
batch_data = shuffled_dataset[offset:(offset + batch_size), :]
batch_labels = shuffled_labels[offset:(offset + batch_size), :]
feed_dict = {x : batch_data, y_true : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 50 == 0):
print('Minibatch loss at iteration %d step %d: %f' % (ite, step, l))
print('Minibatch accuracy: %.1f%%' % accuracy(predictions, batch_labels))
print('Validation dataset accuracy: %.1f%%' % accuracy(valid_prediction.eval(), valid_labels))
valid_preds = valid_prediction.eval()
# store the model
save_dir = './model/'
if not os.path.exists(save_dir):
os.mkdir(save_dir)
checkpoint_path = os.path.join(save_dir, 'model')
save_path=saver.save(session, checkpoint_path)
print ('Model saved in file: %s'% save_path)
###Output
Initialized
Minibatch loss at iteration 0 step 0: 1.719679
Minibatch accuracy: 6.2%
Minibatch loss at iteration 0 step 50: 1.234572
Minibatch accuracy: 81.2%
Minibatch loss at iteration 0 step 100: 0.948798
Minibatch accuracy: 87.5%
Minibatch loss at iteration 0 step 150: 0.493773
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 200: 0.441596
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 250: 0.235281
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 300: 0.201557
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 350: 0.186901
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 400: 0.274800
Minibatch accuracy: 81.2%
Minibatch loss at iteration 0 step 450: 0.174269
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 500: 0.138601
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 550: 0.098777
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 600: 0.062443
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 650: 0.117720
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 700: 0.044701
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 750: 0.048320
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 800: 0.072919
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 850: 0.110188
Minibatch accuracy: 93.8%
Minibatch loss at iteration 0 step 900: 0.066443
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 950: 0.056370
Minibatch accuracy: 100.0%
Minibatch loss at iteration 0 step 1000: 0.036452
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 0: 0.059919
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 50: 0.040204
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 100: 0.087364
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 150: 0.047487
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 200: 0.031967
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 250: 0.057154
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 300: 0.014755
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 350: 0.080259
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 400: 0.045759
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 450: 0.082877
Minibatch accuracy: 93.8%
Minibatch loss at iteration 1 step 500: 0.022961
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 550: 0.053480
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 600: 0.027776
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 650: 0.019689
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 700: 0.036080
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 750: 0.008765
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 800: 0.037719
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 850: 0.028761
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 900: 0.079585
Minibatch accuracy: 93.8%
Minibatch loss at iteration 1 step 950: 0.014911
Minibatch accuracy: 100.0%
Minibatch loss at iteration 1 step 1000: 0.041697
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 0: 0.008579
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 50: 0.038619
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 100: 0.012432
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 150: 0.010452
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 200: 0.012915
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 250: 0.011346
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 300: 0.017163
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 350: 0.012527
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 400: 0.019671
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 450: 0.014643
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 500: 0.020280
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 550: 0.010022
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 600: 0.007956
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 650: 0.010669
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 700: 0.008829
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 750: 0.013099
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 800: 0.010091
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 850: 0.014766
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 900: 0.010770
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 950: 0.015756
Minibatch accuracy: 100.0%
Minibatch loss at iteration 2 step 1000: 0.007924
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 0: 0.018174
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 50: 0.008708
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 100: 0.006960
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 150: 0.010125
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 200: 0.007635
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 250: 0.012585
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 300: 0.003635
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 350: 0.005554
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 400: 0.006729
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 450: 0.009902
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 500: 0.006710
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 550: 0.005501
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 600: 0.008232
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 650: 0.006193
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 700: 0.007011
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 750: 0.002959
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 800: 0.003964
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 850: 0.005450
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 900: 0.008498
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 950: 0.005602
Minibatch accuracy: 100.0%
Minibatch loss at iteration 3 step 1000: 0.004609
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 0: 0.005989
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 50: 0.009786
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 100: 0.008211
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 150: 0.007708
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 200: 0.008080
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 250: 0.003003
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 300: 0.003932
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 350: 0.008887
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 400: 0.005522
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 450: 0.008601
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 500: 0.005132
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 550: 0.006914
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 600: 0.007053
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 650: 0.006974
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 700: 0.002390
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 750: 0.003612
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 800: 0.007628
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 850: 0.003914
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 900: 0.009558
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 950: 0.002797
Minibatch accuracy: 100.0%
Minibatch loss at iteration 4 step 1000: 0.005686
Minibatch accuracy: 100.0%
Minibatch loss at iteration 5 step 0: 0.002227
Minibatch accuracy: 100.0%
Minibatch loss at iteration 5 step 50: 0.005979
Minibatch accuracy: 100.0%
Minibatch loss at iteration 5 step 100: 0.004824
Minibatch accuracy: 100.0%
Minibatch loss at iteration 5 step 150: 0.005787
Minibatch accuracy: 100.0%
|
Convolutional_Filters_Edge_Detection/6.2. Hough circles, agriculture.ipynb | ###Markdown
Hough Circle Detection
###Code
import numpy as np
import matplotlib.pyplot as plt
import cv2
%matplotlib inline
# Read in the image
image = cv2.imread('images/round_farms.jpg')
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
# Gray and blur
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  # image was converted to RGB above
gray_blur = cv2.GaussianBlur(gray, (3, 3), 0)
plt.imshow(gray_blur, cmap='gray')
###Output
_____no_output_____
###Markdown
HoughCircles function`HoughCircles` takes in a few things as its arguments:* an input image, detection method (Hough gradient), resolution factor between the detection and image (1),* minDist - the minimum distance between circle centers* param1 - the higher threshold used for the internal Canny edge detection* param2 - threshold for circle detection, a smaller value --> more circles will be detected* min/max radius for detected circlesThe variables you should change are the last two: the min/max radius for detected circles. Take a look at the image above and estimate how many pixels the average circle is in diameter; use this estimate to provide values for the min/max arguments. You may also want to see what happens if you change minDist.
###Code
# for drawing circles on
circles_im = np.copy(image)
## TODO: use HoughCircles to detect circles
# right now there are too many, large circles being detected
# try changing the value of maxRadius, minRadius, and minDist
circles = cv2.HoughCircles(gray_blur, cv2.HOUGH_GRADIENT, 1,
minDist=45,
param1=70,
param2=10,
minRadius=15,
maxRadius=33)
# convert circles into expected type
circles = np.uint16(np.around(circles))
# draw each one
for i in circles[0,:]:
# draw the outer circle
cv2.circle(circles_im,(i[0],i[1]),i[2],(0,255,0),2)
# draw the center of the circle
cv2.circle(circles_im,(i[0],i[1]),2,(0,0,255),3)
plt.imshow(circles_im)
print('Circles shape: ', circles.shape)
###Output
Circles shape: (1, 167, 3)
|
Yahoo Stock Price Analysis/stock-price-yahoo.ipynb | ###Markdown
Stock Price Yahoo This is just a quick exploration of two awesome Python packages that I wanted to play with for a while: 1. 'Prophet' for time series forecasting 2. 'pandas_datareader' for grabbing historic stock price data Environment I used a Jupyter notebook with Anaconda for this analysis. Prophet can be installed using pip, or with a simple command in the Anaconda prompt: 'pip install fbprophet'
###Code
#The usual suspects
import datetime
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#Plus some new interesting characters
from fbprophet import Prophet
import pandas_datareader.data as web
###Output
_____no_output_____
###Markdown
About Data Source pandas_datareader makes it incredibly easy to get historic stock market data. For today we'll just get the daily closing prices of the S&P 500 (SPY). We'll train on data from the start of 2000 to 2017-03-14, and use the last week's data (up until 2017-03-21) as a holdout set to evaluate our model.
###Code
start = datetime.datetime(2000, 1,1)
end = datetime.datetime(2017, 3, 21)
train = web.DataReader("SPY", 'yahoo', start, end)
compare = train.copy()
train = train['2000-01-03': '2017-03-14']
train.head()
###Output
_____no_output_____
###Markdown
Can we forecast future prices? This is where we get to play with Prophet. First we'll prepare our dataset in the format that Prophet expects.
###Code
df = pd.DataFrame({'ds': train.index, 'y': train['Adj Close']}).reset_index()
df = df.drop('Date', axis=1)
df.head()
###Output
_____no_output_____
###Markdown
Now we can train our model. Since we don't expect stock prices to really follow a weekly seasonality we'll switch that off.
###Code
m = Prophet(weekly_seasonality=False)
m.fit(df);
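# A minimal sketch (not part of the original cell) of the forecasting step that the question
# above points toward: Prophet extends the history with make_future_dataframe() and
# predict() returns 'yhat' plus uncertainty bounds, which could then be compared against
# the held-out week kept in `compare`. The 7-day horizon here is an assumption.
future = m.make_future_dataframe(periods=7)
forecast = m.predict(future)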
###Output
_____no_output_____ |
Modulo 4/Projeto_Paty_mod_4 (1).ipynb | ###Markdown
-------------------------------- How we would handle the NaNs: the NaNs came in groups, all of them sitting at position 0* education: distribute them in the same proportion in which the existing values appear, keeping the mean of the rows that came filled in the DF* marital status: dropping the rows seems to make more sense than filling them in* income: copy the credit limit as if it fell within the lower bound of the income bracket, or sort by credit limit and apply fillna where the NaNs sit (see the sketch in the next cell)`ccc.isnull().sum()` Drop all rows with NaN; at the end we will check whether we should replace them with other values instead`ccc.dropna(inplace = True)`* the Months_on_book column has a concentration of values at 36 months. They appear to follow the averages of the other rows, but around 25% of all observations are 36 months. We could not explain why ---------------
###Code
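# A minimal sketch of the alternative NaN treatments described above, applied to a copy
# (`ccc_alt` is a hypothetical name) so that the dropna() approach actually used in this
# notebook is not affected. It assumes the frame still contains the raw NaNs and uses the
# column names that appear elsewhere in the analysis.
import numpy as np
ccc_alt = ccc.copy()
# education: fill missing values by sampling in the proportion the known levels appear
edu_dist = ccc_alt['Education_Level'].value_counts(normalize=True)
n_missing = ccc_alt['Education_Level'].isnull().sum()
ccc_alt.loc[ccc_alt['Education_Level'].isnull(), 'Education_Level'] = np.random.choice(
    edu_dist.index, size=n_missing, p=edu_dist.values)
# marital status: dropping the rows makes more sense than filling them in
ccc_alt = ccc_alt.dropna(subset=['Marital_Status'])
# income: sort by credit limit and forward-fill, so each missing income inherits the
# bracket of the closest customer with a similar limit
ccc_alt = ccc_alt.sort_values('Credit_Limit')
ccc_alt['Income_Category'] = ccc_alt['Income_Category'].fillna(method='ffill')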
plt.hist(ccc['Months_on_book'], bins = 50)
plt.show()
# filter for the customers who left and look at their profile relative to the overall average
ccc_atritados = ccc[ccc["Attrition_Flag"] == "Attrited Customer"]
ccc_existentes= ccc[ccc["Attrition_Flag"] == "Existing Customer"]
ccc_atritados[(ccc_atritados['Months_on_book'] <= 40) & (ccc_atritados['Months_on_book'] >= 30)].count()
# df[(df["Fare"] >= 260) & (df["Sex"] == 'female')]
sns.barplot(x="Marital_Status", y="Total_Trans_Amt", hue='Attrition_Flag', data=ccc)
plt.xticks([0, 1, 2, 3], ['Unknown', 'Single', 'Married', 'Divorced'])
plt.legend(labels=['Left: Attrited', 'Right: Existing'])
plt.xlabel('MARITAL STATUS')
plt.ylabel('TOTAL SPENDINGS')
plt.title('MARRIAGE EFFECT')
plt.show()
sns.set_style('whitegrid')
sns.barplot(x='Customer_Age', y='Attrition_Flag', data=ccc)
plt.ylabel('ATTRITION')
plt.xlabel('AGE')
plt.title('AGE vs FLAG')
plt.show()
'''
Contacts with the bank in the last 12 months vs. attrition. The higher the number of contacts, the more customers churn
'''
contato_banco = ccc.groupby('Contacts_Count_12_mon')['Attrition_Flag'].mean()
contato_banco.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Contacts_Count_12_mon')['Attrition_Flag'].count()
# 0 months is an outlier. Only 29 observations
inatividade = ccc.groupby('Months_Inactive_12_mon')['Attrition_Flag'].mean()
inatividade.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Months_Inactive_12_mon')['Attrition_Flag'].count()
'''
0 is unknown. The data show that the higher the customer's education level, the greater their tendency to leave the bank
'''
educacao = ccc.groupby('Education_Level')['Attrition_Flag'].mean()
educacao.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Education_Level')['Attrition_Flag'].count()
'''
0 is unknown. Married customers (2) are the least likely to leave the bank, although the difference is not that large.
'''
estado_civil = ccc.groupby('Marital_Status')['Attrition_Flag'].mean()
estado_civil.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Marital_Status')['Attrition_Flag'].count()
by_income_df = ccc.groupby('Income_Category')['Attrition_Flag'].mean()
by_income_df.plot(kind='bar', ylabel='Attrited Ratio')
by_gender_df = ccc.groupby('Gender')['Attrition_Flag'].mean()
by_gender_df.plot(kind='bar', ylabel='Attrited Ratio')
by_income_df = ccc.groupby('Income_Category')['Attrition_Flag'].mean()
by_income_df.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Income_Category')['Attrition_Flag'].count()
by_card_df = ccc.groupby('Card_Category')['Attrition_Flag'].mean()
by_card_df.plot(kind='bar', ylabel='Attrited Ratio')
ccc.groupby('Card_Category')['Attrition_Flag'].count()
'''
the revolving balance seems to tie customers to the bank, making them less likely to leave
'''
total_rotativo_0 = ccc[ccc['Total_Revolving_Bal']==0]['Attrition_Flag'].mean()
print(f'proportion of customers who left with a zero revolving balance is {total_rotativo_0:.2f}')
total_rotativo_1 = ccc[ccc['Total_Revolving_Bal']!=0]['Attrition_Flag'].mean()
print(f'proportion of customers who left with a revolving balance above zero is {total_rotativo_1:.2f}')
'''
looking at the average change in the number of transactions made by the customer,
we can see that the smaller the number of transactions, the higher the chance
of the customer leaving the bank
'''
by_Total_revolving_df_1 = ccc[(ccc['Total_Ct_Chng_Q4_Q1'] < 0.71 )]['Attrition_Flag'].mean()
print (f'Percentage of customers below the mean who left is {by_Total_revolving_df_1:.2f}')
by_Total_revolving_df_0 = ccc[(ccc['Total_Ct_Chng_Q4_Q1'] >= 0.71 )]['Attrition_Flag'].mean()
print (f'Percentage of customers above the mean who left is {by_Total_revolving_df_0:.2f}')
# analysis of the difference between Attrited and Existing
ccc.groupby(['Attrition_Flag'], as_index=False)[['Customer_Age', 'Income_Category','Months_on_book', 'Credit_Limit']].mean()
# analysis of the difference between Attrited and Existing
ccc.groupby(['Gender'], as_index=False)[['Customer_Age', 'Income_Category','Months_on_book', 'Credit_Limit','Attrition_Flag']].mean()
ccc.groupby(['Attrition_Flag', 'Gender'], as_index=False)[['Customer_Age', 'Months_on_book', 'Credit_Limit', 'Income_Category']].agg(['mean', 'count'])
'''
some strong correlations can be observed:
Gender and income have a 78% correlation, reflected in the men in the table having a much higher average income than the women
Months on book and customer age also have a strong correlation, for the obvious reason that older people have been alive longer
credit limit and card category have a 50% correlation, also unsurprisingly: higher-tier cards provide more credit
'''
ccc.corr()
# ccc.groupby(['Attrition_Flag'], as_index=False)[['Months_on_book',]].agg(['mean', 'count'])
# ccc.drop()
# ccc[(ccc['Months_on_book'] == 36)].groupby('Attrition_Flag').count()
i = ccc[(ccc['Months_on_book'] == 36)].index
ccc.drop(i, inplace=True, axis=0)  # drop all 36-month rows at once (a loop here would raise KeyError after the first pass)
ccc
###Output
_____no_output_____ |
examples/frameworks/pytorch/notebooks/table/tabular_ml_pipeline.ipynb | ###Markdown
Tabular Data Pipeline with Concurrent StepsThis example demonstrates an ML pipeline which preprocesses data in two concurrent steps, trains two networks, where each network's training depends upon the completion of its own preprocessed data, and picks the best model. It is implemented using the PipelineController class.The pipeline uses four tasks (each Task is created using a different notebook):* The pipeline controller Task (the current task)* A data preprocessing Task ([preprocessing_and_encoding.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/preprocessing_and_encoding.ipynb))* A training Task [(train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb))* A comparison Task ([pick_best_model.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/pick_best_model.ipynb))In this pipeline example, the data preprocessing Task and training Task are each added to the pipeline twice (each is in two steps). When the pipeline runs, the data preprocessing Task and training Task are cloned twice, and the newly cloned Tasks execute. The Task they are cloned from, called the base Task, does not execute. The pipeline controller passes different data to each cloned Task by overriding parameters. In this way, the same Task can run more than once in the pipeline, but with different data. PrerequisiteMake sure to download the data needed for this task. See the [download_and_split.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/download_and_split.ipynb) notebook
###Code
# pip install with locked versions
! pip install -U pip
! pip install -U clearml
from clearml import Task, PipelineController
TABULAR_DATASET_ID = Task.get_task(
task_name="Download and split tabular dataset", project_name="Tabular Example"
).id
###Output
_____no_output_____
###Markdown
Create Pipeline ControllerThe PipelineController class includes functionality to create a pipeline controller, add steps to the pipeline, pass data from one step to another, control the dependencies of a step beginning only after other steps complete, run the pipeline, wait for it to complete, and cleanup afterwards.Input the following parameters:* `name` - Name of the PipelineController task which will created* `project` - Project which the controller will be associated with* `version` - Pipeline's version number. This version allows to uniquely identify the pipeline template execution.* `auto_version_bump` (default True) - if the same pipeline version already exists (with any difference from the current one), the current pipeline version will be bumped to a new version (e.g. 1.0.0 -> 1.0.1 , 1.2 -> 1.3, 10 -> 11)
###Code
pipe = PipelineController(
project="Tabular Example",
name="tabular training pipeline",
add_pipeline_tags=True,
version="0.1",
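    # auto_version_bump is left at its default (True, per the parameter list above), so an
    # existing "0.1" version with any changes would be bumped automatically:
    # auto_version_bump=True,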
)
###Output
_____no_output_____
###Markdown
Add Preprocessing StepTwo preprocessing nodes are added to the pipeline: `preprocessing_1` and `preprocessing_2`. These two nodes will be cloned from the same base task, created from the [preprocessing_and_encoding.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/preprocessing_and_encoding.ipynb) script. These steps will run concurrently.The preprocessing data task fills in values of NaN data based on the values of the parameters named `fill_categorical_NA` and `fill_numerical_NA`. It will connect a parameter dictionary to the task which contains keys with those same names. The pipeline will override the values of those keys when the pipeline executes the cloned tasks of the base Task. In this way, two sets of data are created in the pipeline.
###Code
pipe.add_step(
name="preprocessing_1",
base_task_project="Tabular Example",
base_task_name="tabular preprocessing",
parameter_override={
"General/data_task_id": TABULAR_DATASET_ID,
"General/fill_categorical_NA": "True",
"General/fill_numerical_NA": "True",
},
)
pipe.add_step(
name="preprocessing_2",
base_task_project="Tabular Example",
base_task_name="tabular preprocessing",
parameter_override={
"General/data_task_id": TABULAR_DATASET_ID,
"General/fill_categorical_NA": "False",
"General/fill_numerical_NA": "True",
},
)
###Output
_____no_output_____
###Markdown
Add Training Step Two training nodes are added to the pipeline: `train_1` and `train_2`. These two nodes will be cloned from the same base task, created from the [train_tabular_predictor.ipynb](https://github.com/allegroai/clearml/blob/master/examples/frameworks/pytorch/notebooks/table/train_tabular_predictor.ipynb) script.Each training node depends upon the completion of one preprocessing node. The `parents` parameter is a list of step names indicating all steps that must complete before the new step starts. In this case, `preprocessing_1` must complete before `train_1` begins, and `preprocessing_2` must complete before `train_2` begins.The ID of a task whose artifact contains a set of preprocessed data for training will be overridden using the `data_task_id` key. Its value takes the form `${step_name.id}`. In this case, `${preprocessing_1.id}` is the ID of one of the preprocessing node tasks. In this way, each training task consumes its own set of data.
###Code
pipe.add_step(
name="train_1",
parents=["preprocessing_1"],
base_task_project="Tabular Example",
base_task_name="tabular prediction",
parameter_override={"General/data_task_id": "${preprocessing_1.id}"},
)
pipe.add_step(
name="train_2",
parents=["preprocessing_2"],
base_task_project="Tabular Example",
base_task_name="tabular prediction",
parameter_override={"General/data_task_id": "${preprocessing_2.id}"},
)
###Output
_____no_output_____
###Markdown
Add Model Comparison StepThe model comparison step depends upon both training nodes completing and takes the two training node task IDs to override the parameters in the base task. The IDs of the training tasks from the steps named `train_1` and `train_2` are passed to the model comparison Task. They take the form `${step_name.id}`.
###Code
pipe.add_step(
name="pick_best",
parents=["train_1", "train_2"],
base_task_project="Tabular Example",
base_task_name="pick best model",
parameter_override={"General/train_tasks_ids": "[${train_1.id}, ${train_2.id}]"},
)
###Output
_____no_output_____
###Markdown
Set Default Execution QueueSet the default execution queue for pipeline steps that did not specify an execution queue. The pipeline steps will be enqueued for execution in this queue.> **_Note_** Make sure to assign a ClearML Agent to the queue to which the steps are enqueued, so they will be executed
###Code
pipe.set_default_execution_queue(default_execution_queue="default")
###Output
_____no_output_____
###Markdown
Execute the PipelineStart the pipeline! The `start` method launches the pipeline controller remotely, by default on the `services` queue (change the queue by passing `queue=`).In order to launch the pipeline control logic locally, use the `start_locally` method instead. Once the pipeline starts, wait for it to complete. Finally, clean up the pipeline processes.
###Code
# Starting the pipeline (in the background)
pipe.start()
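# To run the pipeline control logic on this machine instead of the services queue,
# the markdown above suggests calling pipe.start_locally() here in place of pipe.start()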
# Wait until pipeline terminates
pipe.wait()
# cleanup everything
pipe.stop()
###Output
_____no_output_____ |
src/Expe_modele_gensim.ipynb | ###Markdown
Code for the different experiments
###Code
model.kv.distance('malinois.n.01', 'sea_cow.n.01')
model.kv.distance('malinois.n.01', 'raccoon_dog.n.01')
model.kv.distance('malinois.n.01', 'lakeland_terrier.n.01')
model.kv.distance('malinois.n.01', 'racehorse.n.01')
model.kv.most_similar('malinois.n.01')
# Position in hierarchy - lower values represent that the node is higher in the hierarchy
print(model.kv.norm('placental.n.01'))
print(model.kv.norm('mammal.n.01'))
print(model.kv.norm('sea_cow.n.01'))
print(model.kv.norm('canine.n.02'))
print(model.kv.norm('hunting_dog.n.01'))
print(model.kv.norm('white-tailed_jackrabbit.n.01'))
###Output
0.05903926322826634
0.041231292311324934
0.9835994575692365
|
module3-model-interpretation/LS_DS_413_Model_Interpretation.ipynb | ###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
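# A hand-rolled sketch (not part of the original lesson) of the averaged-curve recipe quoted
# above, to make the PDP steps concrete: build a grid over 'age', predict at each grid value
# with every row's other features held fixed (one ICE value per row), then average.
import numpy as np
grid = np.linspace(X['age'].min(), X['age'].max(), 20)   # 1. define grid along the feature
ice_curves = []
for value in grid:
    X_grid = X.copy()
    X_grid['age'] = value                                # 2. model predictions at grid points
    ice_curves.append(gb.predict_proba(X_grid)[:, 1])    # 3. one prediction per data instance
manual_pdp = np.array(ice_curves).mean(axis=1)           # 4. average the ICE curves -> PDP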
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.7441515819801925
CPU times: user 10.3 s, sys: 62.7 ms, total: 10.4 s
Wall time: 8.51 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val, model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred, 'y_pred_proba': y_pred_proba, 'confidence': confidence})
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[22527]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.7441515819801925
CPU times: user 10.3 s, sys: 62.7 ms, total: 10.4 s
Wall time: 8.51 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val, model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred, 'y_pred_proba': y_pred_proba, 'confidence': confidence})
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[22527]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
/anaconda3/envs/sandbox/lib/python3.6/site-packages/scipy/stats/stats.py:1713: FutureWarning: Using a non-tuple sequence for multidimensional indexing is deprecated; use `arr[tuple(seq)]` instead of `arr[seq]`. In the future this will be interpreted as an array index, `arr[np.array(seq)]`, which will result either in an error or a different result.
return np.add.reduce(sorted[indexer] * weights, axis=axis) / sumval
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
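# A hand-rolled version of the same plot (a sketch of our own, not part of PDPbox),
# following the PDP recipe described in the next markdown cell: define a grid along 'age',
# predict with every row forced to each grid value, and average the predicted probabilities.
import numpy as np
grid = np.linspace(X['age'].min(), X['age'].max(), 20)
manual_pdp = [gb.predict_proba(X.assign(age=v))[:, 1].mean() for v in grid]
pd.Series(manual_pdp, index=grid).head()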
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
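# A sanity check of our own (not in the original lesson): for this GradientBoostingClassifier,
# TreeExplainer works in log-odds space, so the base value plus the SHAP values approximates
# the model's raw margin for this row; the logistic function turns that into a probability.
import numpy as np
margin = explainer.expected_value + shap_values.sum()
print('approx. predicted probability:', 1 / (1 + np.exp(-margin)))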
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('../kaggle_data/train_features.csv')
X_test = pd.read_csv('../kaggle_data/test_features.csv')
y_train = pd.read_csv('../kaggle_data/train_labels.csv')['charged_off']
sample_submission = pd.read_csv('../kaggle_data/sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
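# e.g. 'C4' -> (ord('C') - 64) + 4/10 == 3.4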
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
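# A sketch of our own (not PDPbox or shap): permutation importances, listed above as another
# global explanation -- shuffle one column at a time and measure how much accuracy drops.
import numpy as np
rng = np.random.RandomState(42)
baseline = gb.score(X, y)
for col in X.columns:
    X_perm = X.copy()
    X_perm[col] = rng.permutation(X_perm[col].values)
    print(col, round(baseline - gb.score(X_perm, y), 4))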
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
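# A sketch of our own: a handful of ICE curves for int_rate (one line per sampled row);
# their pointwise average is the partial dependence curve drawn above.
# Assumes the fitted gb and the ordinal-encoded X_val from the cells above.
import numpy as np
import matplotlib.pyplot as plt
grid = np.linspace(X_val['int_rate'].min(), X_val['int_rate'].max(), 20)
sample = X_val.sample(50, random_state=42)
ice = np.array([gb.predict_proba(sample.assign(int_rate=v))[:, 1] for v in grid])
plt.plot(grid, ice, color='grey', alpha=0.3)
plt.plot(grid, ice.mean(axis=1), color='red', linewidth=2)
plt.xlabel('int_rate'); plt.ylabel('predicted probability');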
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
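# A sketch of our own: drop-column importance, the third global method listed above --
# re-estimate the model without each feature and compare cross-validated accuracy.
from sklearn.model_selection import cross_val_score
full_score = cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1).mean()
for col in X.columns:
    drop_score = cross_val_score(gb, X.drop(columns=col), y, scoring='accuracy', cv=5, n_jobs=-1).mean()
    print(col, round(full_score - drop_score, 4))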
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
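# A sketch of our own: exponentiate the coefficients to read them as odds ratios --
# the multiplicative change in the odds of survival per one-unit increase in each feature.
import numpy as np
pd.Series(np.exp(lr.coef_[0]), X.columns)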
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train,
train_size=40000,
test_size=40000,
stratify=y_train,
random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
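# 'Confidence' below is just the distance of the predicted probability from the 0.5
# decision threshold: larger values mean the model is more certain, in either direction.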
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.7441515819801925
CPU times: user 10.3 s, sys: 62.7 ms, total: 10.4 s
Wall time: 8.51 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val, model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[22527]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_
###Code
# !conda install --yes -c conda-forge shap
###Output
Collecting package metadata: done
Solving environment: done
==> WARNING: A newer version of conda exists. <==
current version: 4.6.7
latest version: 4.6.8
Please update conda by running
$ conda update -n base conda
## Package Plan ##
environment location: /home/mark/anaconda3/envs/myf
added / updated specs:
- shap
The following packages will be downloaded:
package | build
---------------------------|-----------------
ca-certificates-2019.3.9 | hecc5488_0 146 KB conda-forge
certifi-2019.3.9 | py36_0 149 KB conda-forge
shap-0.28.5 | py36h637b7d7_0 301 KB conda-forge
------------------------------------------------------------
Total: 596 KB
The following NEW packages will be INSTALLED:
shap conda-forge/linux-64::shap-0.28.5-py36h637b7d7_0
The following packages will be UPDATED:
ca-certificates anaconda::ca-certificates-2019.1.23-0 --> conda-forge::ca-certificates-2019.3.9-hecc5488_0
certifi anaconda::certifi-2018.11.29-py36_0 --> conda-forge::certifi-2019.3.9-py36_0
The following packages will be SUPERSEDED by a higher-priority channel:
openssl anaconda::openssl-1.0.2r-h7b6447c_0 --> conda-forge::openssl-1.0.2r-h14c3975_0
Downloading and Extracting Packages
ca-certificates-2019 | 146 KB | ##################################### | 100%
certifi-2019.3.9 | 149 KB | ##################################### | 100%
shap-0.28.5 | 301 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
###Markdown
Titanic
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
# !pip install PDPbox
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
data_directory = '/home/mark/lambda/DS-Unit-4-Sprint-1-Tree-Ensembles/module1-decision-trees/'
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv(data_directory + 'train_features.csv')
X_test = pd.read_csv(data_directory +'test_features.csv')
y_train = pd.read_csv(data_directory +'train_labels.csv')['charged_off']
sample_submission = pd.read_csv(data_directory +'sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
import numpy as np  # used below for select_dtypes(exclude=[np.number])
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
clist = X_train.select_dtypes(exclude=[np.number]).columns.tolist()
encoder = ce.OrdinalEncoder(cols=clist)
xfit = encoder.fit(X_train)
X_train_ = xfit.transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier(loss='exponential', learning_rate=0.1, n_estimators=180,min_samples_leaf=3 )
gb.fit(X_train_, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
X_train_.head()
test_y = xfit.transform(X_test)
test_y.head()
# The wrangled test set no longer has the 'id' column, so re-read the ids from the raw csv.
# `test_y` holds the encoder-transformed test features (see the cell above); submit the
# predicted charge-off probabilities for each loan.
test_ids = pd.read_csv(data_directory + 'test_features.csv')['id']
test_pred_proba = gb.predict_proba(test_y)[:, 1]
with open(data_directory + 'submit.csv', 'w') as file:
    file.write('id,charged_off\n')
    for id_, charged_off in zip(test_ids, test_pred_proba):
        file.write(f"{id_},{charged_off}\n")
###Output
_____no_output_____
###Markdown
Partial Dependence Plot
###Code
gb.predict(test_y)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val, model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred, 'y_pred_proba': y_pred_proba, 'confidence': confidence})
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[22527]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
_Lambda School Data Science — Tree Ensembles_ Model Interpretation Objectives- Partial Dependence Plots- Shapley Values Pre-reads1. Kaggle / Dan Becker: Machine Learning Explainability - https://www.kaggle.com/dansbecker/partial-plots - https://www.kaggle.com/dansbecker/shap-values2. Christoph Molnar: Interpretable Machine Learning - https://christophm.github.io/interpretable-ml-book/pdp.html - https://christophm.github.io/interpretable-ml-book/shapley.html Libraries- [PDPbox](https://github.com/SauceCat/PDPbox): `pip install pdpbox`- [shap](https://github.com/slundberg/shap): `conda install -c conda-forge shap` / `pip install shap` Types of explanations Global explanation: all features in relation to each other- Feature Importances (mean decrease impurity)- Permutation Importances- Drop-Column Importances Global explanation: individual feature in relation to target- Partial Dependence plots Individual prediction explanation- Shapley Values_Note that the coefficients from a linear model give you all three types of explanations!_ Titanic
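(A quick aside before the Titanic example: a minimal, hedged sketch of one of the global methods listed above - permutation importance. `clf`, `X_val` and `y_val` are placeholder names for a fitted classifier and a validation split, not objects defined in this notebook.)

```python
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(clf, X_val, y_val, n_repeats=5, seed=0):
    rng = np.random.RandomState(seed)
    baseline = accuracy_score(y_val, clf.predict(X_val))
    importances = {}
    for col in X_val.columns:
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            # Shuffling one column breaks its relationship with the target
            X_perm[col] = rng.permutation(X_perm[col].values)
            drops.append(baseline - accuracy_score(y_val, clf.predict(X_perm)))
        importances[col] = np.mean(drops)  # larger score drop = more important feature
    return importances
```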
###Code
%matplotlib inline
import pandas as pd
import seaborn as sns
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
def load_titanic():
df = sns.load_dataset('titanic')
df['age'] = df['age'].fillna(df['age'].mean())
df['class'] = df['class'].map({'First': 1, 'Second': 2, 'Third': 3})
df['female'] = df['sex'] == 'female'
X = df[['age', 'class', 'fare', 'female']]
y = df['survived']
return X, y
X, y = load_titanic()
###Output
_____no_output_____
###Markdown
Naive majority class baseline
###Code
y.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Logistic Regression
###Code
lr = LogisticRegression(solver='lbfgs')
cross_val_score(lr, X, y, scoring='accuracy', cv=5, n_jobs=-1)
lr.fit(X, y)
pd.Series(lr.coef_[0], X.columns)
sns.regplot(x=X['age'], y=y, logistic=True, y_jitter=.05);
###Output
_____no_output_____
###Markdown
Gradient Boosting
###Code
gb = GradientBoostingClassifier()
cross_val_score(gb, X, y, scoring='accuracy', cv=5, n_jobs=-1)
gb.fit(X, y)
pd.Series(gb.feature_importances_, X.columns)
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='age'
pdp_isolated = pdp_isolate(model=gb, dataset=X, model_features=X.columns,
feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
From [PDPbox documentation](https://pdpbox.readthedocs.io/en/latest/):>**The common headache**: When using black box machine learning algorithms like random forest and boosting, it is hard to understand the relations between predictors and model outcome. For example, in terms of random forest, all we get is the feature importance. Although we can know which feature is significantly influencing the outcome based on the importance calculation, it really sucks that we don’t know in which direction it is influencing. And in most of the real cases, the effect is non-monotonic. We need some powerful tools to help understanding the complex relations between predictors and model prediction. [Animation by Christoph Molnar](https://twitter.com/ChristophMolnar/status/1066398522608635904), author of [_Interpretable Machine Learning_](https://christophm.github.io/interpretable-ml-book/)> Partial dependence plots show how a feature affects predictions of a Machine Learning model on average.> 1. Define grid along feature> 2. Model predictions at grid points> 3. Line per data instance -> ICE (Individual Conditional Expectation) curve> 4. Average curves to get a PDP (Partial Dependence Plot) Compare Predictions
###Code
from sklearn.model_selection import cross_val_predict
y_pred_lr = cross_val_predict(lr, X, y, cv=5, n_jobs=-1)
y_pred_gb = cross_val_predict(gb, X, y, cv=5, n_jobs=-1)
preds = pd.DataFrame({'true': y, 'lr': y_pred_lr, 'gb': y_pred_gb})
gb_right = preds['gb'] == preds['true']
lr_wrong = preds['lr'] != preds['true']
len(preds[gb_right & lr_wrong]) / len(preds)
preds[gb_right & lr_wrong].head()
data_for_prediction = X.loc[27]
data_for_prediction
###Output
_____no_output_____
###Markdown
Explain individual predictionhttps://www.kaggle.com/dansbecker/shap-values
###Code
import shap
# Create object that can calculate shap values
explainer = shap.TreeExplainer(gb)
# Calculate Shap values
shap_values = explainer.shap_values(data_for_prediction)
shap.initjs()
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
###Output
_____no_output_____
###Markdown
Lending Club
###Code
import category_encoders as ce
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
# Load data from https://www.kaggle.com/c/ds1-tree-ensembles/data
X_train = pd.read_csv('train_features.csv')
X_test = pd.read_csv('test_features.csv')
y_train = pd.read_csv('train_labels.csv')['charged_off']
sample_submission = pd.read_csv('sample_submission.csv')
def wrangle(X):
X = X.copy()
# Drop some columns
X = X.drop(columns='id') # id is random
X = X.drop(columns=['member_id', 'url', 'desc']) # All null
X = X.drop(columns='title') # Duplicative of purpose
X = X.drop(columns='grade') # Duplicative of sub_grade
# Transform sub_grade from "A1" - "G5" to 1.1 - 7.5
def wrangle_sub_grade(x):
first_digit = ord(x[0]) - 64
second_digit = int(x[1])
return first_digit + second_digit/10
X['sub_grade'] = X['sub_grade'].apply(wrangle_sub_grade)
# Convert percentages from strings to floats
X['int_rate'] = X['int_rate'].str.strip('%').astype(float)
X['revol_util'] = X['revol_util'].str.strip('%').astype(float)
# Transform earliest_cr_line to an integer: how many days it's been open
X['earliest_cr_line'] = pd.to_datetime(X['earliest_cr_line'], infer_datetime_format=True)
X['earliest_cr_line'] = pd.Timestamp.today() - X['earliest_cr_line']
X['earliest_cr_line'] = X['earliest_cr_line'].dt.days
# Create features for three employee titles: teacher, manager, owner
X['emp_title'] = X['emp_title'].str.lower()
X['emp_title_teacher'] = X['emp_title'].str.contains('teacher', na=False)
X['emp_title_manager'] = X['emp_title'].str.contains('manager', na=False)
X['emp_title_owner'] = X['emp_title'].str.contains('owner', na=False)
# Drop categoricals with high cardinality
X = X.drop(columns=['emp_title', 'zip_code'])
# Transform features with many nulls to binary flags
many_nulls = ['sec_app_mths_since_last_major_derog',
'sec_app_revol_util',
'sec_app_earliest_cr_line',
'sec_app_mort_acc',
'dti_joint',
'sec_app_collections_12_mths_ex_med',
'sec_app_chargeoff_within_12_mths',
'sec_app_num_rev_accts',
'sec_app_open_act_il',
'sec_app_open_acc',
'revol_bal_joint',
'annual_inc_joint',
'sec_app_inq_last_6mths',
'mths_since_last_record',
'mths_since_recent_bc_dlq',
'mths_since_last_major_derog',
'mths_since_recent_revol_delinq',
'mths_since_last_delinq',
'il_util',
'emp_length',
'mths_since_recent_inq',
'mo_sin_old_il_acct',
'mths_since_rcnt_il',
'num_tl_120dpd_2m',
'bc_util',
'percent_bc_gt_75',
'bc_open_to_buy',
'mths_since_recent_bc']
for col in many_nulls:
X[col] = X[col].isnull()
# For features with few nulls, do mean imputation
for col in X:
if X[col].isnull().sum() > 0:
X[col] = X[col].fillna(X[col].mean())
# Return the wrangled dataframe
return X
# Wrangle train and test in the same way
X_train = wrangle(X_train)
X_test = wrangle(X_test)
%%time
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=0.2, stratify=y_train, random_state=42)
encoder = ce.OrdinalEncoder()
X_train = encoder.fit_transform(X_train)
X_val = encoder.transform(X_val)
gb = GradientBoostingClassifier()
gb.fit(X_train, y_train)
y_pred_proba = gb.predict_proba(X_val)[:,1]
print('Validation ROC AUC:', roc_auc_score(y_val, y_pred_proba))
###Output
Validation ROC AUC: 0.744114812226796
CPU times: user 12.3 s, sys: 145 ms, total: 12.5 s
Wall time: 10.3 s
###Markdown
Partial Dependence Plot
###Code
from pdpbox.pdp import pdp_isolate, pdp_plot
feature='int_rate'
pdp_isolated = pdp_isolate(model=gb, dataset=X_val,
model_features=X_val.columns, feature=feature)
pdp_plot(pdp_isolated, feature);
###Output
_____no_output_____
###Markdown
Individual predictions
###Code
import numpy as np
y_pred = (y_pred_proba >= 0.5).astype(int)
confidence = np.abs(y_pred_proba - 0.5)
preds = pd.DataFrame({'y_val': y_val, 'y_pred': y_pred,
'y_pred_proba': y_pred_proba,
'confidence': confidence})
preds.head()
# True positives, with high confidence
preds[(y_val==1) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[17575]
explainer = shap.TreeExplainer(gb)
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# True negatives, with high confidence
preds[(y_val==0) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[1778]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# False positives, with high (mistaken) confidence
preds[(y_val==0) & (y_pred==1)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[33542]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
data_for_prediction
# False negatives, with high (mistaken) confidence
preds[(y_val==1) & (y_pred==0)].sort_values(by='confidence', ascending=False).head()
data_for_prediction = X_val.loc[30492]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values, data_for_prediction)
# Most uncertain predictions (least confidence)
preds.sort_values(by='confidence', ascending=True).head()
data_for_prediction = X_val.loc[33095]
shap_values = explainer.shap_values(data_for_prediction)
shap.force_plot(explainer.expected_value, shap_values,
data_for_prediction)
data_for_prediction
###Output
_____no_output_____ |
Random_Forest_regression_challenge_5.ipynb | ###Markdown
Random Forest Regression on the World Population© Explore Data Science Academy For the final test of the week, we'll learn how decision trees can be expanded upon as simple regressors in order to create an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning) model know as a Random Forest. Like our previous coding challenges, we train this new model using the world population data from the Analyse Exam. Honour CodeI **MANGALISO, MAKHOBA**, confirm - by submitting this document - that the solutions in this notebook are a result of my own work and that I abide by the EDSA honour code (https://drive.google.com/file/d/1QDCjGZJ8-FmJE3bZdIQNwnJyQKPhHZBn/view?usp=sharing).Non-compliance with the honour code constitutes a material breach of contract. Imports
###Code
import numpy as np
import pandas as pd
from numpy import array
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error
population_df = pd.read_csv('https://raw.githubusercontent.com/Explore-AI/Public-Data/master/AnalyseProject/world_population.csv', index_col='Country Code')
meta_df = pd.read_csv('https://raw.githubusercontent.com/Explore-AI/Public-Data/master/AnalyseProject/metadata.csv', index_col='Country Code')
population_df.head()
meta_df.head()
###Output
_____no_output_____
###Markdown
Question 1 As we've seen previously, the world population data spans from 1960 to 2017. We'd like to build a predictive model that can give us the best guess at what the world population in a given year was. However, as a slight twist this time, we want to compute this estimate for only _countries within a given income group_. First, however, we need to organise our data such that the sklearn's `RandomForestRegressor` class can train on our data. To do this, we will write a function that takes as input an income group and return a 2-d numpy array that contains the year and the measured population._**Function Specifications:**_* Should take a `str` argument, called `income_group_name` as input and return a numpy `array` type as output.* Set the default argument of `income_group_name` to equal `'Low income'`.* If the specified value of `income_group_name` does not exist, the function must raise a `ValueError`.* The array should only have two columns containing the year and the population, in other words, it should have a shape `(?, 2)` where `?` is the length of the data.* The values within the array should be of type `np.int64`. _**Further Reading:**_Data types are associated with memory allocation. As such, your choice of data type affects the precision of computations in your program. For example, the `np.int` data type in numpy can only store values between -2147483648 to 2147483647 and assigning values outside this range for variables of this data type may cause run-time errors. To avoid this, we can use data types with larger memory capacity e.g. `np.int64`.https://docs.scipy.org/doc/numpy/user/basics.types.html
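As a concrete illustration of the dtype point above (a small sketch, not part of the required solution):

```python
import numpy as np

print(np.iinfo(np.int32).max)  # 2147483647 - too small for world-scale population counts
print(np.iinfo(np.int64).max)  # 9223372036854775807
safe = np.array([7_500_000_000], dtype=np.int64)  # stores population-scale values without overflow
```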
###Code
### START FUNCTION
def get_total_pop_by_income(income_group_name='Low income'):
if income_group_name not in meta_df['Income Group'].unique():
raise ValueError
newdf = pd.melt(pd.merge(population_df, meta_df['Income Group'], \
how='inner', left_index=True, right_index=True),\
id_vars=['Income Group'], \
value_vars=population_df.columns, var_name='Year')\
.groupby(['Income Group', 'Year']).sum().reset_index().set_index('Income Group')
newdf['Year'] = newdf['Year'].apply(int)
return newdf.loc[income_group_name].to_numpy()
### END FUNCTION
data = get_total_pop_by_income('High income')
###Output
_____no_output_____
###Markdown
_**Expected Outputs:**_```pythonget_total_pop_by_income('High income')```> ```array([[ 1960, 769889923], [ 1961, 781225329], [ 1962, 791207437], [ 1963, 801108277], ... [ 2015, 1211252041], [ 2016, 1218629612], [ 2017, 1225514228]])``` Question 2Now that we have have our data, we need to split this into a set of variables we will be training on, and the set of variables that we will make our predictions on.Unlike in the previous coding challenge, a friend of ours has indicated that sklearn has a bunch of built-in functionality for creating training and testing sets. In this case however, they have asked us to implement a k-fold cross validation split of the data using sklearn's `KFold` [class](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) (which has already been imported into this notebook for your convenience). Using this knowledge, write a function which uses sklearn's `KFold` [class](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) internally, and that will take as input a 2-d numpy array and an integer `K` corresponding to the number of splits. This function will then return a list of tuples of length `K`. Each tuple in this list should consist of a `train_indices` list and a `test_indices` list containing the training/testing data point indices for that particular K$^{th}$ split._**Function Specifications:**_* Should take a 2-d numpy `array` and an integer `K` as input.* Should use sklearn's `KFold` [class](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html).* Should return a list of `K` `tuples` containing a list of training and testing indices corresponding to the data points that belong to a particular split. For example, given an array called `data` and an integer `K`, the function should return: >```data_indices = [(list_of_train_indices_for_split_1, list_of_test_indices_for_split_1) (list_of_train_indices_for_split_2, list_of_test_indices_for_split_2) (list_of_train_indices_for_split_3, list_of_test_indices_for_split_3) ... ... (list_of_train_indices_for_split_K, list_of_test_indices_for_split_K)]```* The `shuffle` argument in the KFold object should be set to `False`.**_Hint_**: To see an example of how to use the `KFold` object enter `help(KFold)` in a new notebook cell
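If it helps, here is a tiny illustration of what `KFold.split` yields on a made-up 6-row array - each iteration produces one `(train_indices, test_indices)` pair:

```python
import numpy as np
from sklearn.model_selection import KFold

toy = np.arange(12).reshape(6, 2)  # 6 samples, 2 columns
for train_idx, test_idx in KFold(n_splits=3, shuffle=False).split(toy):
    print(train_idx, test_idx)
# [2 3 4 5] [0 1]
# [0 1 4 5] [2 3]
# [0 1 2 3] [4 5]
```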
###Code
### START FUNCTION
def sklearn_kfold_split(data,K):
kf = KFold(n_splits=K, shuffle=False)
return [(datatr, datate) for datatr, datate in kf.split(data)]
### END FUNCTION
data = get_total_pop_by_income('High income');
sklearn_kfold_split(data,4)
###Output
_____no_output_____
###Markdown
_**Expected Outputs:**_```pythondata = get_total_pop_by_income('High income')sklearn_kfold_split(data,4)```> ```[(array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]), array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14])), (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]), array([15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29])), (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]), array([30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43])), (array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]), array([44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57]))] ``` Question 3Now that we have formatted our data, we can fit a model using sklearn's `RandomForestRegressor` class. We'll write a function that will take as input the data indices (consisting of the train and test indices for each split) that we created in the last question, train a different `RandomForestRegressor` on each split and return the model that obtains the best testing set performance across all K splits.**Important Note:** Due to the random initialisation process used within sklearn's `RandomForestRegressor` class, you will need to fix the value of the `random_state` argument in order to get repeatable and predictable results._**Function Specifications:**_* Should take a 2-d numpy array (i.e. the data) and `data_indices` (a list of `(train_indices,test_indices)` tuples) as input.* For each `(train_indices,test_indices)` tuple in `data_indices` the function should: * Train a new `RandomForestRegressor` model on the portion of data indexed by `train_indices` * Evaluate the trained `RandomForestRegressor` model on the portion of data indexed by `test_indices` using the **mean squared error** (which has also been imported for your convenience).* After training and evalating the `RandomForestRegressor` models, the function should return the `RandomForestRegressor` model that obtained highest testing set `mean_square_error` over its allocated data split across all trained models. * The trained `RandomForestRegressor` models should be trained with `random_state` equal `42`, all other parameters should be left as default.**_Hint_**: for each tuple in the `data_indices` list, you can obtain `X_train`,`X_test`, `y_train`, `y_test` as follows: >``` X_train, y_train = data[train_indices,0],data[train_indices,1] X_test, y_test = data[test_indices,0],data[test_indices,1]```
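One practical detail for the hint above: the year column is 1-D, while sklearn estimators expect a 2-D feature matrix, so a `reshape(-1, 1)` is usually needed. A minimal sketch using the three sample rows from the expected output of Question 1:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

years = np.array([1960, 1961, 1962], dtype=np.int64)
pops = np.array([769889923, 781225329, 791207437], dtype=np.int64)
rf = RandomForestRegressor(random_state=42)
rf.fit(years.reshape(-1, 1), pops)   # X must have shape (n_samples, n_features)
rf.predict(np.array([[1961]]))
```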
###Code
### START FUNCTION
def best_k_model(data,data_indices):
mse0 = 0
for row in data_indices:
X_train, y_train = data[row[0], 0], data[row[0], 1]
X_test, y_test = data[row[1], 0], data[row[1], 1]
rf = RandomForestRegressor(random_state=42)
rf.fit(X_train.reshape(-1, 1), y_train.reshape(-1, 1))
mse = mean_squared_error(y_test.reshape(-1, 1), rf.predict(X_test.reshape(-1, 1)))
if mse > mse0:
mse0 = mse
best_model = rf
return best_model
### END FUNCTION
data = get_total_pop_by_income('High income')
data_indices = sklearn_kfold_split(data,5)
best_model = best_k_model(data,data_indices)
best_model.predict([[1960]])
###Output
_____no_output_____ |
assignments/assignment4/HotdogOrNot.ipynb | ###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to use weights pre-trained on a more general task as a starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not-hot-dogs! (more details - https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts. We will build our models on the training set, stored in the **train_kaggle** folder, and on the **test_kaggle** test set we will predict the class each photo belongs to (hot dog or not). If you are in Google Colab! Colab lets you run notebooks with GPU access - not very fast, but free! Each notebook gets its own environment with its own disk, etc. After 90 minutes of inactivity that environment disappears together with all its data, so we have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for loading the data. Its job is to load the data from disk and return a tensor with the network input, the label and the image identifier (this makes it easier to prepare the kaggle submission on the test data). Here is a link that explains how to do this with a worked example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Your Dataset must report the number of files in the folder as its number of samples and, for a given index, return a tuple of the sample, its label and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support transforming the input (the `transforms` argument) - we will need it!
###Code
class HotdogOrNotDataset(Dataset):
    def __init__(self, folder, transform=None):
        self.transform = transform
        self.folder = folder
        # Cache the (sorted) file list once, so indexing is fast and the order is stable
        self.files = sorted(os.listdir(folder))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        # Hint: os.path.join is helpful!
        img_id = self.files[index]
        img = Image.open(os.path.join(self.folder, img_id))
        if self.transform:
            img = self.transform(img)
        # Positive label for file names starting with 'frankfurter', 'chili-dog' or 'hotdog'
        y = 1 if img_id.lower().startswith(('frankfurter', 'chili-dog', 'hotdog')) else 0
        return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training and splitting it into train and validation. We will train the model on train, check its quality on validation, and run the Kaggle In-Class competition on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(train_dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
correct_samples = 0
total_samples = 0
for x, y, _ in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
return float(correct_samples) / total_samples
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch ships such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html). We will use ResNet18. To start, let's look at what the already-trained network outputs on our images, i.e. which of the 1000 classes it assigns them to. Run the model on 10 random images from the dataset and display them together with the most probable classes. The code that builds the mapping between indices of the output vector and ImageNet classes is already provided below.
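For a single image, the idea looks roughly like this (a sketch assuming the `train_dataset` defined above; the full 10-image visualization is the task of the next cell):

```python
import torch
from torchvision import models

model = models.resnet18(pretrained=True)
model.eval()
x, _, _ = train_dataset[0]            # already resized and normalized by the transforms above
with torch.no_grad():
    logits = model(x.unsqueeze(0))    # add a batch dimension: (1, 3, 224, 224)
    probs = torch.nn.functional.softmax(logits, dim=1)
    top_prob, top_class = probs.max(dim=1)
print(top_class.item(), top_prob.item())  # index into the 1000 ImageNet classes and its probability
```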
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
def visualize_model(dataset, indices, classes, model, title=None, count=10):
plt.figure(figsize=(count * 3, 3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
model.eval()
for i, index in enumerate(display_indices):
x, _, _ = dataset[index]
_, pred = torch.max(model(x.unsqueeze(0)), 1)
plt.subplot(1, count, i + 1)
plt.title("Class: %s" % classes[int(pred)])
        # Un-normalize before displaying, otherwise the normalized tensor renders as noise
        img = x.numpy().transpose((1, 2, 0)) * np.array([0.229, 0.224, 0.225]) + np.array([0.485, 0.456, 0.406])
        plt.imshow(np.clip(img, 0, 1))
plt.grid(False)
plt.axis('off')
indices = np.random.choice(np.arange(len(train_dataset)), 10, replace=False)
classes = load_imagenet_classes()
visualize_model(train_dataset, indices, classes, model)
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several flavors of transfer learning; we will try the main ones. The first option is to create a new last layer and train only it, freezing all the others.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
for param in model.parameters():
param.requires_grad = False
n_features = model.fc.in_features
model.fc = torch.nn.Linear(n_features, 2)
model.type(torch.cuda.FloatTensor)
model.to(device)
parameters = model.parameters() # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second option is to add a new layer and train the entire model.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
n_features = model.fc.in_features
model.fc = torch.nn.Linear(n_features, 2)
model.type(torch.cuda.FloatTensor)
model.to(device)
parameters = model.parameters() # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. Finally, the last option we will look at is to use different learning rates for the new and the old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
n_features = model.fc.in_features
model.fc = torch.nn.Linear(n_features, 2)
model.type(torch.cuda.FloatTensor)
model.to(device)
# Separate the freshly added fc layer (both its weight and bias) from the pretrained layers
new_params = list(model.fc.parameters())
new_param_ids = {id(p) for p in new_params}
old_params = [p for p in model.parameters() if id(p) not in new_param_ids]
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD([{'params': new_params, 'lr': 0.001, 'momentum': 0.9},
{'params': old_params, 'lr': 0.0001, 'momentum': 0.9}])
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's look at where the model makes mistakes - we will visualize the false positives and the false negatives. To do this we run the model over all the examples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
grount_truth: np array of boolean of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
sampler = SubsetSampler(indices)
loader = torch.utils.data.DataLoader(dataset, sampler=sampler)
predictions, ground_truth = [], []
for x, y, _ in loader:
x_gpu = x.to(device)
y_gpu = y.to(device)
_, indices = torch.max(model(x_gpu), 1)
predictions.append(indices)
ground_truth.append(y_gpu)
return predictions, ground_truth
predictions, gt = evaluate_model(model, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = []
false_negatives_indices = []
for pred, truth, idx in zip(predictions, gt, val_indices):
if pred and not truth:
false_positive_indices.append(idx)
if not pred and truth:
false_negatives_indices.append(idx)
visualize_samples(orig_dataset, false_positive_indices, "False positives")
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We did this already it in the assignment1
precision = 0
recall = 0
f1 = 0
TP, FP, FN, TN = 0, 0, 0, 0
N = len(prediction)
for (y_pred, y) in zip(prediction, ground_truth):
if y_pred and y:
TP += 1
if y_pred and not y:
FP += 1
if not y_pred and y:
FN += 1
if not y_pred and not y:
TN += 1
if (TP + FP):
precision = TP / (TP + FP)
if TP + FN:
recall = TP / (TP + FN)
if precision + recall:
f1 = 2 * precision * recall / (precision + recall)
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
You can already guess how this ends: train the best model you can on top of `resnet18`, changing only the training process, and pick the best model by F1 score. As always, don't forget: more augmentations, hyperparameter search, different optimizers, which layers to fine-tune, learning rate annealing, and which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.9**.
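One of the knobs listed above is learning rate annealing. Besides the `StepLR` schedule used in the cells below, a plateau-based schedule is a common alternative - a sketch only, reusing this notebook's `model`, `val_loader` and `compute_accuracy` (not part of the graded solution):

```python
import torch.optim as optim

optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='max', factor=0.5, patience=2)

for epoch in range(10):
    # ... run one training epoch here ...
    val_accuracy = compute_accuracy(model, val_loader)
    scheduler.step(val_accuracy)  # halve the LR when validation accuracy stops improving
```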
###Code
import PIL
import random
from collections import namedtuple
tfs = transforms.Compose([
transforms.Resize((224, 224)),
transforms.ColorJitter(hue=.20, saturation=.20),
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.RandomRotation(10, resample=PIL.Image.BILINEAR),
transforms.RandomGrayscale(),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
train_dataset = HotdogOrNotDataset(train_folder, tfs)
# Keep the test transforms deterministic - no random augmentation at prediction time
test_tfs = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
test_dataset = HotdogOrNotDataset(test_folder, test_tfs)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
model = models.resnet18(pretrained=True)
n_features = model.fc.in_features
model.fc = torch.nn.Linear(n_features, 2)
model.type(torch.cuda.FloatTensor)
model.to(device)
parameters = model.parameters()
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs, step_size):
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=step_size, gamma=0.5)
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
scheduler.step()
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y, _) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
Hyperparams = namedtuple("Hyperparams", ['learning_rate', 'anneal_epochs', 'reg'])
RunResult = namedtuple("RunResult", ['model', 'train_history', 'val_history', 'final_val_accuracy', 'f1_score'])
learning_rates = [1e-2, 1e-3, 1e-4]
anneal_epochs = [2, 5, 8]
reg = [1e-3, 1e-4, 1e-5, 1e-6]
batch_size = 64
epoch_num = 15
run_record = {}
params_grid = [learning_rates, anneal_epochs, reg]
n_results = 5
for i in range(n_results):
    params = [random.sample(param, 1)[0] for param in params_grid]
    print(f"lr={params[0]}, anneal_epochs={params[1]}, reg={params[2]}")
    # Re-initialize the network for every hyperparameter draw so runs don't inherit weights from each other
    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 2)
    model = model.to(device)
    optimizer = optim.SGD(model.parameters(), momentum=0.9, lr=params[0], weight_decay=params[2])
_, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, epoch_num, params[1])
predictions, gt = evaluate_model(model, train_dataset, val_indices)
_, _, f1 = binary_classification_metrics(predictions, gt)
print(f"f1={f1}")
run_record[Hyperparams(*params)] = RunResult(model, train_history, val_history, val_history[-1], f1)
best_model = None
best_f1 = None
best_hyperparams = None
best_run = None
for hyperparams, run_result in run_record.items():
if best_f1 is None or best_f1 < run_result.f1_score:
best_model = run_result.model
best_f1 = run_result.f1_score
best_hyperparams = hyperparams
best_run = run_result
print("Best validation f1 score: %4.2f, best hyperparams: %s" % (best_f1, best_hyperparams))
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
plt.plot(best_run.train_history)
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
false_positive_indices = []
false_negatives_indices = []
for pred, truth, idx in zip(predictions, ground_truth, val_indices):
if pred and not truth:
false_positive_indices.append(idx)
if not pred and truth:
false_negatives_indices.append(idx)
visualize_samples(orig_dataset, false_positive_indices, "False positives")
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
###Output
_____no_output_____
###Markdown
Optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was created specifically for this course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test set labels are not available to participants. More details on the competition rules are below. Those taking the course in person get extra points for placing high in the competition. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x, _, id_img in test_loader:
    # Predict labels for each test batch (1 = hot dog, 0 = not a hot dog)
    # and collect the corresponding image ids (file names such as '10000.jpg')
    x_gpu = x.to(device)
    _, batch_pred = torch.max(model(x_gpu), 1)
    predictions.extend(batch_pred.cpu().numpy().tolist())
    image_id.extend(id_img)
# This is how to create a csv file that can then be uploaded to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how to download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to use weights pre-trained on a more general task as a starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not-hot-dogs! (more details - https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts. We will build our models on the training set, stored in the **train_kaggle** folder, and on the **test_kaggle** test set we will predict the class each photo belongs to (hot dog or not). If you are in Google Colab! Colab lets you run notebooks with GPU access - not very fast, but free! Each notebook gets its own environment with its own disk, etc. After 90 minutes of inactivity that environment disappears together with all its data, so we have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for loading the data. Its job is to load the data from disk and return a tensor with the network input, the label and the image identifier (this makes it easier to prepare the kaggle submission on the test data). Here is a link that explains how to do this with a worked example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Your Dataset must report the number of files in the folder as its number of samples and, for a given index, return a tuple of the sample, its label and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support transforming the input (the `transforms` argument) - we will need it!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training and splitting it into train and validation. We will train the model on train, check its quality on validation, and run the Kaggle In-Class competition on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html); we will use ResNet18. To start, let's see what the already-trained network outputs on our images, i.e. to which of the 1000 classes it assigns them. Run the model on 10 random images from the dataset and display them together with the most probable classes. The cell already contains code that builds the mapping from indices in the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
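# A hedged sketch of one way to do this (an assumption, not the official solution): push 10 random
# training images through the pretrained network and print the top ImageNet class for each.
# It reuses train_dataset defined above, so the images are already resized and normalized.
imagenet_classes = load_imagenet_classes()
model = model.to(device)
model.eval()
sample_indices = np.random.choice(len(train_dataset), 10, replace=False)
with torch.no_grad():
    for idx in sample_indices:
        x, _, img_id = train_dataset[idx]
        logits = model(x.unsqueeze(0).to(device))
        print(img_id, imagenet_classes[int(logits.argmax(dim=1))])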
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning, and we will try the main ones. The first variant is to replace the last layer with a new one and train only it, freezing all the other layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
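# A hedged sketch of one possible fill-in (an assumption, not the graded answer): freeze every
# pretrained weight, swap in a fresh 2-class head and hand only that head to the optimizer below.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # the new layer is trainable by default
model = model.to(device)
parameters = model.fc.parameters()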
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to replace the last layer with a new one in exactly the same way, but train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
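# A hedged sketch (an assumption): replace the head but keep every layer trainable,
# so the optimizer below fine-tunes the entire network.
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)
parameters = model.parameters()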
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. Finally, the last variant we will consider is to use different learning rates for the new and the old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
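# A hedged sketch (an assumption): PyTorch optimizers accept per-parameter-group options,
# so the new head can get lr=0.001 while the pretrained layers keep lr=0.0001.
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)
head_params = [p for name, p in model.named_parameters() if name.startswith("fc.")]
base_params = [p for name, p in model.named_parameters() if not name.startswith("fc.")]
optimizer = optim.SGD([
    {'params': base_params},               # uses the default lr below
    {'params': head_params, 'lr': 0.001},  # the new layer trains faster
], lr=0.0001, momentum=0.9)
model_conv = model  # the training call below refers to the model as model_conv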
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's see where the model makes mistakes by visualizing the false positives and the false negatives. To do that, we run the model over all the samples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Pick the best model by F1 score. As always, don't forget: more augmentations, hyperparameter search, different optimizers, which layers to tune, learning rate annealing, and which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was created specifically for the course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test-set labels are not available to the participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here you may also use models other than `resnet18`, ensembles, and other model-training tricks. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts the labels (1 = hot dog present, 0 = no hot dog)
# The code should produce the list of image ids and the predictions list
# image id is the image file name, e.g. '10000.jpg'
pass
# This is how you can create a csv file to then upload to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how you can download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to take weights pretrained on a more general task as the starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not hot dogs! (for details see https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
import re
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts. On the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict the class each photo belongs to (hot dog or not). If you are in Google Colab: it lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with an attached disk and so on. After 90 minutes of inactivity this environment disappears with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and produce, for each sample, a tensor with the network input, the label and the image identifier (this will make it easier to prepare the Kaggle submission on the test data). Here is a link where this is explained well with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html. Your Dataset should report the number of files in the folder as the number of samples and, given an index, return a tuple of the sample, its label and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformation (the `transforms` argument) - we will need it!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
self.folder = folder
# TODO: Your code here!
def __len__(self):
return len(os.listdir(self.folder))
#raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
img_id = os.listdir(self.folder)[index]
img_path = os.path.join(self.folder, img_id)
y = 1 if re.search('frankfurter', img_id) or re.search('chili-dog', img_id) or re.search('hotdog', img_id) else 0
img = Image.open(img_path)
if self.transform:
img = self.transform(img)
#raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training. We also split it into train and validation: we train the model on train, check its quality on validation, and the Kaggle In-Class competition is run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
val_accuracy = 0
correct = 0
total = 0
for i_step, (x, y, _) in enumerate(loader):  # the dataset yields (image, label, image_id)
x_gpu = x.to(device)
y_gpu = y.to(device)
pred = model(x_gpu)
_, indices = torch.max(pred, 1)
correct += torch.sum(indices == y_gpu)
total += y.shape[0]
val_accuracy = float(correct)/total
return val_accuracy
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html); we will use ResNet18. To start, let's see what the already-trained network outputs on our images, i.e. to which of the 1000 classes it assigns them. Run the model on 10 random images from the dataset and display them together with the most probable classes. The cell already contains code that builds the mapping from indices in the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
Downloading: "https://download.pytorch.org/models/resnet18-5c106cde.pth" to /home/ivan/.torch/models/resnet18-5c106cde.pth
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning, and we will try the main ones. The first variant is to make a new last layer and train only it, freezing all the other layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to add a new layer and train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. Finally, the last variant we will consider is to use different learning rates for the new and the old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's see where the model makes mistakes by visualizing the false positives and the false negatives. To do that, we run the model over all the samples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
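# A hedged sketch of one possible body for evaluate_model (an assumption, not the official answer):
# iterate over the requested indices in order via SubsetSampler and collect boolean predictions.
def evaluate_model_sketch(model, dataset, indices):
    model.eval()
    loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                         sampler=SubsetSampler(indices))
    preds, labels = [], []
    with torch.no_grad():
        for x, y, _ in loader:
            scores = model(x.to(device))
            preds.extend((scores.argmax(dim=1) == 1).cpu().numpy())
            labels.extend((y == 1).numpy())
    return np.array(preds), np.array(labels)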
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
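# A hedged sketch (an assumption): a false positive is a sample predicted as hot dog (True)
# whose true label is False; val_indices maps positions in predictions/gt back to indices of
# the original dataset. False negatives are computed the same way with the comparison flipped.
false_positive_indices = np.array(val_indices)[predictions & ~gt]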
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
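# A hedged sketch using sklearn.metrics (one straightforward option, not necessarily the intended one):
def binary_classification_metrics_sketch(prediction, ground_truth):
    precision = metrics.precision_score(ground_truth, prediction)
    recall = metrics.recall_score(ground_truth, prediction)
    f1 = metrics.f1_score(ground_truth, prediction)
    return precision, recall, f1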
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Pick the best model by F1 score. As always, don't forget: more augmentations, hyperparameter search, different optimizers, which layers to tune, learning rate annealing, and which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was created specifically for the course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test-set labels are not available to the participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts the labels (1 = hot dog present, 0 = no hot dog)
# The code should produce the list of image ids and the predictions list
# image id is the image file name, e.g. '10000.jpg'
pass
# This is how you can create a csv file to then upload to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how you can download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to take weights pretrained on a more general task as the starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not hot dogs! (for details see https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts. On the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict the class each photo belongs to (hot dog or not). If you are in Google Colab: it lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with an attached disk and so on. After 90 minutes of inactivity this environment disappears with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and produce, for each sample, a tensor with the network input, the label and the image identifier (this will make it easier to prepare the Kaggle submission on the test data). Here is a link where this is explained well with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html. Your Dataset should report the number of files in the folder as the number of samples and, given an index, return a tuple of the sample, its label and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformation (the `transforms` argument) - we will need it!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training. We also split it into train and validation: we train the model on train, check its quality on validation, and the Kaggle In-Class competition is run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html); we will use ResNet18. To start, let's see what the already-trained network outputs on our images, i.e. to which of the 1000 classes it assigns them. Run the model on 10 random images from the dataset and display them together with the most probable classes. The cell already contains code that builds the mapping from indices in the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning, and we will try the main ones. The first variant is to make a new last layer and train only it, freezing all the other layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to add a new layer and train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. Finally, the last variant we will consider is to use different learning rates for the new and the old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
# Train new layer with learning speed 0.001 and old layers with 0.0001
optimizer = None #
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's see where the model makes mistakes by visualizing the false positives and the false negatives. To do that, we run the model over all the samples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Pick the best model by F1 score. As always, don't forget: more augmentations, hyperparameter search, different optimizers, which layers to tune, learning rate annealing, and which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.9**.
###Code
# TODO: Train your best model!
best_model = None
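# A hedged sketch of one possible training recipe (everything here is an assumption, not the
# reference solution): fine-tune the whole resnet18 with SGD plus step-wise learning rate
# annealing and keep the result as best_model.
best_model = models.resnet18(pretrained=True)
best_model.fc = nn.Linear(best_model.fc.in_features, 2)
best_model = best_model.to(device)
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(best_model.parameters(), lr=0.001, momentum=0.9)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=2, gamma=0.3)
for _ in range(6):
    train_model(best_model, train_loader, val_loader, loss, optimizer, 1)
    scheduler.step()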
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was created specifically for the course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test-set labels are not available to the participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts the labels (1 = hot dog present, 0 = no hot dog)
# The code should produce the list of image ids and the predictions list
# image id is the image file name, e.g. '10000.jpg'
pass
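# A hedged sketch of the loop above (an assumption about one workable approach; it also assumes
# the model has already been moved to device): take the argmax over the two logits per image
# and collect (file name, label) pairs.
with torch.no_grad():
    for x, _, id_img in test_loader:
        batch_labels = model(x.to(device)).argmax(dim=1).cpu().numpy()
        image_id.extend(id_img)
        predictions.extend(int(label) for label in batch_labels)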
# This is how you can create a csv file to then upload to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how you can download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to take weights pretrained on a more general task as the starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not hot dogs! (for details see https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts. On the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict the class each photo belongs to (hot dog or not). If you are in Google Colab: it lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with an attached disk and so on. After 90 minutes of inactivity this environment disappears with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://storage.googleapis.com/dlcourse_ai/train.zip"
!unzip -q "train.zip"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://storage.googleapis.com/dlcourse_ai/test.zip"
!unzip -q "test.zip"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and produce, for each sample, a tensor with the network input, the label and the image identifier (this will make it easier to prepare the Kaggle submission on the test data). Here is a link where this is explained well with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html. Your Dataset should report the number of files in the folder as the number of samples and, given an index, return a tuple of the sample, its label and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformation (the `transforms` argument) - we will need it!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
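# A hedged sketch of the class above (an assumption, not the official solution): __len__ counts
# the files in the folder, __getitem__ loads one image, derives the label from the file-name
# prefix and applies the optional transform.
class HotdogOrNotDatasetSketch(Dataset):
    def __init__(self, folder, transform=None):
        self.folder = folder
        self.transform = transform
        self.files = sorted(os.listdir(folder))
    def __len__(self):
        return len(self.files)
    def __getitem__(self, index):
        img_id = self.files[index]
        img = Image.open(os.path.join(self.folder, img_id))
        y = int(img_id.startswith(('frankfurter', 'chili-dog', 'hotdog')))
        if self.transform:
            img = self.transform(img)
        return img, y, img_id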
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training. We also split it into train and validation: we train the model on train, check its quality on validation, and the Kaggle In-Class competition is run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / i_step
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html); we will use ResNet18. To start, let's see what the already-trained network outputs on our images, i.e. to which of the 1000 classes it assigns them. Run the model on 10 random images from the dataset and display them together with the most probable classes. The cell already contains code that builds the mapping from indices in the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning, and we will try the main ones. The first variant is to replace the last layer with a new one and train only it, freezing all the other layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to replace the last layer with a new one in exactly the same way, but train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. Finally, the last variant we will consider is to use different learning rates for the new and the old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's see where the model makes mistakes by visualizing the false positives and the false negatives. To do that, we run the model over all the samples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Pick the best model by F1 score. As always, don't forget: more augmentations, hyperparameter search, different optimizers, which layers to tune, learning rate annealing, and which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.93**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was created specifically for the course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test-set labels are not available to the participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here you may also use base architectures other than `resnet18`, ensembles, and other model-training tricks. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts the labels (1 = hot dog present, 0 = no hot dog)
# The code should produce the list of image ids and the predictions list
# image id is the image file name, e.g. '10000.jpg'
pass
# This is how you can create a csv file to then upload to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how you can download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques for training networks is to take weights pretrained on a more general task as the starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hot dogs from not hot dogs! (for details see https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab! В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные! Каждый ноутбук получает свой собственный environment c доступным диском и т.д. Через 90 минут отсутствия активности этот environment пропадает со всеми данными. Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
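###Markdown
Ниже - один из возможных вариантов реализации такого Dataset (набросок, а не эталонное решение). Класс назван `HotdogOrNotDatasetSketch`, чтобы не перекрывать заготовку выше; метка определяется по началу имени файла, изображение читается через PIL.
###Code
import os
from PIL import Image
from torch.utils.data import Dataset

class HotdogOrNotDatasetSketch(Dataset):
    POSITIVE_PREFIXES = ('frankfurter', 'chili-dog', 'hotdog')

    def __init__(self, folder, transform=None):
        self.folder = folder
        self.transform = transform
        self.files = sorted(os.listdir(folder))  # фиксируем порядок файлов

    def __len__(self):
        return len(self.files)

    def __getitem__(self, index):
        img_id = self.files[index]
        img = Image.open(os.path.join(self.folder, img_id)).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        y = 1 if img_id.startswith(self.POSITIVE_PREFIXES) else 0
        return img, y, img_id
###Output
_____no_output_____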
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(train_dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
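###Markdown
Для ориентира - возможная реализация `compute_accuracy` (набросок): прогоняем loader без градиентов и считаем долю совпадений argmax-предсказаний с метками. Функция названа `compute_accuracy_sketch`, её тело можно перенести в заготовку выше.
###Code
def compute_accuracy_sketch(model, loader):
    model.eval()
    correct_samples, total_samples = 0, 0
    with torch.no_grad():
        for x, y, _ in loader:
            x_gpu, y_gpu = x.to(device), y.to(device)
            _, pred_indices = torch.max(model(x_gpu), 1)
            correct_samples += torch.sum(pred_indices == y_gpu).item()
            total_samples += y_gpu.shape[0]
    return float(correct_samples) / total_samples
###Output
_____no_output_____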
###Markdown
Использование заранее натренированной сети (pretrained network)Чаще всего в качестве заранее натренированной сети используется сеть, натренированная на данных ImageNet с 1M изображений и 1000 классами.PyTorch включает такие натренированные сети для различных архитектур (https://pytorch.org/docs/stable/torchvision/models.html) Мы будем использовать ResNet18.Для начала посмотрим, что выдает уже натренированная сеть на наших картинках. То есть, посмотрим к какому из 1000 классов их отнесет сеть.Запустите модель на 10 случайных картинках из датасета и выведите их вместе с классами с наибольшей вероятностью. В коде уже есть код, который формирует соответствие между индексами в выходном векторе и классами ImageNet.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
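###Markdown
Ниже - примерный набросок того, как можно посмотреть top-1 классы ImageNet для случайных картинок. Предполагается, что `__getitem__` в `HotdogOrNotDataset` выше уже реализован и возвращает PIL-изображение; препроцессинг (resize + нормализация) добавлен отдельно, так как `orig_dataset` создан без трансформаций.
###Code
idx_to_class = load_imagenet_classes()
imagenet_model = models.resnet18(pretrained=True).to(device)
imagenet_model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

sample_indices = np.random.choice(len(orig_dataset), 10, replace=False)
with torch.no_grad():
    for i in sample_indices:
        img, _, img_name = orig_dataset[i]   # PIL-изображение, т.к. orig_dataset без transform
        logits = imagenet_model(preprocess(img).unsqueeze(0).to(device))
        print(img_name, '->', idx_to_class[int(logits.argmax(dim=1))])
visualize_samples(orig_dataset, sample_indices, "Random samples")
###Output
_____no_output_____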
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - заменить последний слой на новый и тренировать только его, заморозив остальные.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
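###Markdown
Один из возможных способов заполнить TODO выше (набросок): замораживаем все параметры, заменяем `fc` на новый слой на два класса и передаём в оптимизатор только его параметры. Предполагается, что `train_model` и `compute_accuracy` уже реализованы.
###Code
import torch.nn as nn
import torch.optim as optim

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                   # замораживаем предобученные слои
model.fc = nn.Linear(model.fc.in_features, 2)     # новый слой создаётся с requires_grad=True
model = model.to(device)

parameters = model.fc.parameters()                # оптимизируем только новый слой
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____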
###Markdown
Перенос обучения (transfer learning) - тренировать всю модель Второй вариант - точно так же заменить последний слой на новый и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
###Code
import torch.nn as nn
import torch.optim as optim
model_conv = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
    ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
    # We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
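###Markdown
Возможная реализация `binary_classification_metrics` через sklearn (набросок; предполагается, что метки бинарные и положительный класс - 1). Тело можно перенести в заготовку выше.
###Code
import sklearn.metrics as metrics

def binary_classification_metrics_sketch(prediction, ground_truth):
    precision = metrics.precision_score(ground_truth, prediction)
    recall = metrics.recall_score(ground_truth, prediction)
    f1 = metrics.f1_score(ground_truth, prediction)
    return precision, recall, f1
###Output
_____no_output_____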
###Markdown
Что будет в конце вы уже поняли Натренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки. Выбирайте лучшую модель по F1 score. Как всегда, не забываем: - побольше аугментаций! - перебор гиперпараметров - различные оптимизаторы - какие слои тюнить - learning rate annealing - на какой эпохе останавливаться Наша цель - довести F1 score на validation set до значения, большего **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкой Поучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса, и в нем участвуют только те, кто проходит курс. В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна. Более подробно о правилах соревнования ниже. Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы. Здесь уже можно использовать и другие базовые архитектуры кроме `resnet18`, и ансамбли, и другие трюки тренировки моделей. Вот ссылка на соревнование: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Задание 4 - Перенос обучения (transfer learning) и тонкая настройка (fine-tuning) Одна из важнейших техник в тренировке сетей - использовать заранее натренированные на более общей задаче веса в качестве начальной точки, а потом "дотренировать" их на конкретной задаче. Такой подход и ускоряет обучение, и позволяет тренировать эффективные модели на маленьких наборах данных. В этом упражнении мы натренируем классификатор, который отличает хотдоги от не хотдогов! (более подробно - https://www.youtube.com/watch?v=ACmydtFDTGs) Это задание требует доступа к GPU, поэтому его можно выполнять либо на компьютере с GPU от NVidia, либо в [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab! В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные! Каждый ноутбук получает свой собственный environment c доступным диском и т.д. Через 90 минут отсутствия активности этот environment пропадает со всеми данными. Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget "https://storage.googleapis.com/dlcourse_ai/train.zip"
!unzip -q "train.zip"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://storage.googleapis.com/dlcourse_ai/test.zip"
!unzip -q "test.zip"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(train_dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Использование заранее натренированной сети (pretrained network)Чаще всего в качестве заранее натренированной сети используется сеть, натренированная на данных ImageNet с 1M изображений и 1000 классами.PyTorch включает такие натренированные сети для различных архитектур (https://pytorch.org/docs/stable/torchvision/models.html) Мы будем использовать ResNet18.Для начала посмотрим, что выдает уже натренированная сеть на наших картинках. То есть, посмотрим к какому из 1000 классов их отнесет сеть.Запустите модель на 10 случайных картинках из датасета и выведите их вместе с классами с наибольшей вероятностью. В коде уже есть код, который формирует соответствие между индексами в выходном векторе и классами ImageNet.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - заменить последний слой на новый и тренировать только его, заморозив остальные.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать всю модель Второй вариант - точно так же заменить последний слой на новый и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
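###Markdown
Возможный вариант заполнения TODO выше (набросок): заменяем `fc` на слой с двумя выходами и отдаём оптимизатору все параметры сети, чтобы дообучалась вся модель.
###Code
import torch.nn as nn
import torch.optim as optim

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)   # новый выходной слой на два класса
model = model.to(device)

parameters = model.parameters()                 # обучаем все параметры сети
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____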
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
###Code
import torch.nn as nn
import torch.optim as optim
model_conv = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
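###Markdown
Набросок одного из решений: оптимизаторы PyTorch принимают список групп параметров, и для каждой группы можно задать свой learning rate. Здесь старым слоям назначается lr=0.0001, новому слою `fc` - lr=0.001; модель названа `model_conv`, как в вызове `train_model` выше.
###Code
import torch.nn as nn
import torch.optim as optim

model_conv = models.resnet18(pretrained=True)
model_conv.fc = nn.Linear(model_conv.fc.in_features, 2)
model_conv = model_conv.to(device)

new_params = list(model_conv.fc.parameters())
new_param_ids = {id(p) for p in new_params}
old_params = [p for p in model_conv.parameters() if id(p) not in new_param_ids]

loss = nn.CrossEntropyLoss()
optimizer = optim.SGD([
    {'params': old_params, 'lr': 0.0001},   # предобученные слои - маленький lr
    {'params': new_params, 'lr': 0.001},    # новый слой - lr побольше
], momentum=0.9)
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____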
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
    ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
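###Markdown
Для ориентира - возможная реализация `evaluate_model` (набросок): `SubsetSampler` обходит индексы по порядку, поэтому предсказания и метки возвращаются в том же порядке, что и `indices`. Функция названа `evaluate_model_sketch`, чтобы не перекрывать заготовку выше.
###Code
def evaluate_model_sketch(model, dataset, indices):
    model.eval()
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         sampler=SubsetSampler(indices))
    predictions, ground_truth = [], []
    with torch.no_grad():
        for x, y, _ in loader:
            batch_pred = torch.argmax(model(x.to(device)), dim=1)
            predictions.extend(batch_pred.cpu().numpy())
            ground_truth.extend(y.numpy())
    return np.array(predictions), np.array(ground_truth)
###Output
_____no_output_____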
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
    # We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
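###Markdown
Один из способов получить такие индексы (набросок): `evaluate_model` обходит `val_indices` по порядку, поэтому i-е предсказание соответствует элементу `val_indices[i]` исходного датасета, и дальше достаточно булевых масок. Предполагается, что `predictions` и `gt` выше уже посчитаны.
###Code
predictions_arr = np.array(predictions)
gt_arr = np.array(gt)
val_indices_arr = np.array(val_indices)

false_positive_indices = val_indices_arr[(predictions_arr == 1) & (gt_arr == 0)]
visualize_samples(orig_dataset, false_positive_indices, "False positives")

false_negatives_indices = val_indices_arr[(predictions_arr == 0) & (gt_arr == 1)]
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
###Output
_____no_output_____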
###Markdown
Что будет в конце вы уже поняли Натренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки. Выбирайте лучшую модель по F1 score. Как всегда, не забываем: - побольше аугментаций! - перебор гиперпараметров - различные оптимизаторы - какие слои тюнить - learning rate annealing - на какой эпохе останавливаться Наша цель - довести F1 score на validation set до значения, большего **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкой Поучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса, и в нем участвуют только те, кто проходит курс. В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна. Более подробно о правилах соревнования ниже. Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы. Здесь уже можно использовать и другие базовые архитектуры кроме `resnet18`, и ансамбли, и другие трюки тренировки моделей. Вот ссылка на соревнование: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Задание 4 - Перенос обучения (transfer learning) и тонкая настройка (fine-tuning) Одна из важнейших техник в тренировке сетей - использовать заранее натренированные на более общей задаче веса в качестве начальной точки, а потом "дотренировать" их на конкретной задаче. Такой подход и ускоряет обучение, и позволяет тренировать эффективные модели на маленьких наборах данных. В этом упражнении мы натренируем классификатор, который отличает хотдоги от не хотдогов! (более подробно - https://www.youtube.com/watch?v=ACmydtFDTGs) Это задание требует доступа к GPU, поэтому его можно выполнять либо на компьютере с GPU от NVidia, либо в [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab! В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные! Каждый ноутбук получает свой собственный environment c доступным диском и т.д. Через 90 минут отсутствия активности этот environment пропадает со всеми данными. Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(train_dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
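###Markdown
Пример того, как в train-трансформацию можно добавить аугментации средствами torchvision (набросок; конкретный набор и параметры - RandomCrop, ColorJitter и т.п. - взяты для иллюстрации, имена `train_dataset_augmented` и `train_loader_augmented` условные). Предполагается, что `__getitem__` возвращает PIL-изображение.
###Code
train_augmented_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomCrop((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_dataset_augmented = HotdogOrNotDataset(train_folder, transform=train_augmented_transform)
train_loader_augmented = torch.utils.data.DataLoader(train_dataset_augmented, batch_size=batch_size,
                                                     sampler=train_sampler)
###Output
_____no_output_____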
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
            loss_accum += loss_value.item()
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Использование заранее натренированной сети (pretrained network)Чаще всего в качестве заранее натренированной сети используется сеть, натренированная на данных ImageNet с 1M изображений и 1000 классами.PyTorch включает такие натренированные сети для различных архитектур (https://pytorch.org/docs/stable/torchvision/models.html) Мы будем использовать ResNet18.Для начала посмотрим, что выдает уже натренированная сеть на наших картинках. То есть, посмотрим к какому из 1000 классов их отнесет сеть.Запустите модель на 10 случайных картинках из датасета и выведите их вместе с классами с наибольшей вероятностью. В коде уже есть код, который формирует соответствие между индексами в выходном векторе и классами ImageNet.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - сделать новый последний слой и тренировать только его, заморозив остальные.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать всю модельВторой вариант - добавить новый слой и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
###Code
import torch.nn as nn
import torch.optim as optim
model_conv = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
    ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
    # We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
Что будет в конце вы уже поняли Натренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки. Выбирайте лучшую модель по F1 score. Как всегда, не забываем: - побольше аугментаций! - перебор гиперпараметров - различные оптимизаторы - какие слои тюнить - learning rate annealing - на какой эпохе останавливаться Наша цель - довести F1 score на validation set до значения, большего **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, train_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
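###Markdown
Примерный набросок визуализации кривых обучения: предполагается, что `loss_history`, `train_history` и `val_history` - результаты `train_model` для лучшего запуска.
###Code
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(loss_history)
plt.title("Average loss")
plt.xlabel("epoch")
plt.subplot(1, 2, 2)
plt.plot(train_history, label="train accuracy")
plt.plot(val_history, label="val accuracy")
plt.xlabel("epoch")
plt.legend()
###Output
_____no_output_____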
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкой Поучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса, и в нем участвуют только те, кто проходит курс. В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна. Более подробно о правилах соревнования ниже. Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы. Вот ссылка на соревнование: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Задание 4 - Перенос обучения (transfer learning) и тонкая настройка (fine-tuning) Одна из важнейших техник в тренировке сетей - использовать заранее натренированные на более общей задаче веса в качестве начальной точки, а потом "дотренировать" их на конкретной задаче. Такой подход и ускоряет обучение, и позволяет тренировать эффективные модели на маленьких наборах данных. В этом упражнении мы натренируем классификатор, который отличает хотдоги от не хотдогов! (более подробно - https://www.youtube.com/watch?v=ACmydtFDTGs) Это задание требует доступа к GPU, поэтому его можно выполнять либо на компьютере с GPU от NVidia, либо в [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
import cv2
from socket import timeout
from google.colab import files
import re
from skimage import io, transform
!pip3 install -q torch torchvision
#!pip3 install -q Pillow==4.0.0
!pip3 install -q Pillow
!pip3 install -U git+https://github.com/albumentations-team/albumentations
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab! В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные! Каждый ноутбук получает свой собственный environment c доступным диском и т.д. Через 90 минут отсутствия активности этот environment пропадает со всеми данными. Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget -nc "https://storage.googleapis.com/dlcourse_ai/train.zip"
!unzip -qn "train.zip"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
#!wget -nc "https://storage.googleapis.com/dlcourse_ai/test.zip"
#!unzip -qn "test.zip"
#test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
#print('Number of files in the test folder', len(os.listdir(test_folder)))
# Download test data
!wget -nc "https://storage.googleapis.com/dlcourse_ai/test.zip"
!unzip -qn "test.zip"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
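# Грубый oversampling положительного класса: для каждой картинки с хот-догом в train_kaggle
# создаётся symlink-копия (имя + '.copy'), так что при чтении папки позитивных примеров
# становится вдвое больше; except/pass позволяет безопасно перезапускать ячейку.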
for img_name in os.listdir(train_folder):
img_path = os.path.join(train_folder, img_name)
if re.match(r'frankfurter|chili-dog|hotdog', img_name):
try:
os.symlink(img_name, ".".join([img_path,"copy"]))
except:
pass
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from torch.utils.data.sampler import Sampler
import torch.nn.functional as tf
import sklearn.metrics as metrics
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
from torch.optim.lr_scheduler import ReduceLROnPlateau, CosineAnnealingLR
import albumentations as A
from albumentations.pytorch import ToTensorV2
device = torch.device("cuda:0") # Let's make sure GPU is available!
#torch.set_default_tensor_type(torch.HalfTensor)
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
###Code
import torchvision.transforms as tvtf
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
self.folder = folder
# TODO: Your code here!
def __len__(self):
return len(os.listdir(self.folder))
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
img_name = os.listdir(self.folder)[index]
img_path = os.path.join(self.folder, img_name)
# Read an image with OpenCV
image = cv2.imread(img_path)
#print(re.match(r'frankfurter|chili-dog|hotdog', img_name), img_name)
if re.match(r'frankfurter|chili-dog|hotdog', img_name):
y = 1
else:
y = 0
#plt.imshow(image)
# By default OpenCV uses BGR color space for color images,
# so we need to convert the image to RGB color space.
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
if self.transform:
augmented = self.transform(image=image)
image = augmented['image']
#imagePIL = tvtf.ToPILImage()(image)
return image, y, img_name
def visualize_samples(dataset, indices, title=None, count=10, labels=None):
# visualize random 10 samples
fig1 = plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
fig1.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
ax = fig1.add_subplot(1,count,i+1)
if labels:
ax.set_title(labels[i])
else:
ax.set_title("Label: %s" % y)
ax.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=A.VerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
light = A.Compose([
A.RandomBrightnessContrast(p=1),
A.RandomGamma(p=1),
A.CLAHE(p=1),
], p=1)
medium = A.Compose([
A.CLAHE(p=1),
A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=50, val_shift_limit=50, p=1),
], p=1)
strong = A.Compose([
A.ChannelShuffle(p=1),
], p=1)
transformcolor=A.Compose([
A.OneOf([
light,
medium,
strong
])
])
transformnonrigid=A.Compose([
A.OneOf([
A.ElasticTransform(alpha=120, sigma=120 * 0.05, alpha_affine=120 * 0.03),
A.GridDistortion(),
A.OpticalDistortion(distort_limit=2, shift_limit=0.5)
])
])
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=A.Compose([
A.CLAHE(p=1),
#A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=30, val_shift_limit=20, p=0.5),
#transformcolor,
##A.Compose([ transformnonrigid ], p = 0.5),
A.Compose([
A.HueSaturationValue(hue_shift_limit=20, sat_shift_limit=50, val_shift_limit=50, p=1),
], p=0.3),
A.OneOf([
A.HorizontalFlip(),
A.Rotate(limit=45),
A.Compose([
A.SmallestMaxSize(224),
A.CenterCrop(224,224)
])
]),
#A.Blur(),
#transforms.RandomCrop((224, 224)),
#,
A.Resize(224, 224),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
A.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
ToTensorV2(),
])
)
val_dataset = HotdogOrNotDataset(train_folder,
transform=A.Compose([
A.CLAHE(p=1),
A.Resize(224, 224),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
A.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]) ,
ToTensorV2(),
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=A.Compose([
A.CLAHE(p=1),
A.Resize(224, 224),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
A.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]) ,
ToTensorV2(),
])
)
batch_size = 128
data_size = len(train_dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(val_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
#test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, scheduler=None, num_epochs=10):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value.item()
del x_gpu, y_gpu, prediction, loss_value, indices
        ave_loss = loss_accum / (i_step + 1)
train_accuracy = float(correct_samples) / total_samples
val_accuracy, val_loss, val_f1 = compute_accuracy(model, val_loader)
if scheduler:
#scheduler.step(val_loss)
scheduler.step()
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Epoch: %d, Average loss: %f, Train accuracy: %f, Val accuracy: %f, Val F1: %f" % (epoch, ave_loss, train_accuracy, val_accuracy, val_f1))
return loss_history, train_history, val_history
from scipy.special import softmax
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Implement the inference of the model on all of the batches from loader,
# and compute the overall accuracy.
# Hint: PyTorch has the argmax function!
correct_samples = 0
total_samples = 0
loss_accum = 0
predictions = []
ground_truth = []
with torch.no_grad():
for i_step, (x, y, _) in enumerate(loader):
#print(x.shape)
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
predictions.extend(torch.argmax(tf.softmax(prediction, dim=1), dim=1).cpu().detach().numpy())  # softmax over the class dimension (dim=1)
ground_truth.extend(y)
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y_gpu.shape[0]
#loss_value = loss(prediction.cpu(), y)
#loss_accum += loss_value
loss_accum += 0
del x_gpu, y_gpu, prediction, indices
accuracy = float(correct_samples) / total_samples
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
return accuracy, loss_accum, f1
#return accuracy, loss_accum
# Don't forget to move the data to device before running it through the model!
from torch.utils.data.sampler import Sampler
import torch.nn.functional as tf
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
val_sampler = SubsetSampler(indices)
loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
sampler=val_sampler)
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
predictions = []
ground_truth = []
for i_step, (x, y, _) in enumerate(loader):
#print(i_step)
x_gpu = x.to(device)
prediction = model(x_gpu)
_, indices = torch.max(prediction, 1)
ground_truth.extend(y.detach().numpy())
predictions.extend(torch.argmax(tf.softmax(prediction, dim=1), dim=1).cpu().detach().numpy())  # softmax over the class dimension (dim=1)
return predictions, ground_truth
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
precision = metrics.precision_score(ground_truth, prediction)
recall = metrics.recall_score(ground_truth, prediction)
f1 = metrics.f1_score(ground_truth, prediction)
return precision, recall, f1
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html). We will use ResNet18. To start, let's see what the pretrained network outputs on our images, i.e. which of the 1000 classes it assigns them to. Run the model on 10 random images from the dataset and display them together with the highest-probability classes. The cell already contains code that maps indices of the output vector to ImageNet classes.
###Code
import json
!wget "https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json"
class_idx = json.load(open("imagenet_class_index.json"))
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
def print_labels(class_idx, out, num_labels=10):
idx2label = [class_idx[str(k)][1] for k in range(len(class_idx))]
res=[]
#print("top10", out[0].sort(descending = True, dim=-1)[1].detach().numpy()[:num_labels])
for idx in out[0].sort(descending = True, dim=0)[1].detach().numpy()[:num_labels]:
res.append(idx2label[idx])
return res
model = models.resnet18(pretrained=True)
model.cuda()
indices = list(range(len(train_dataset)))
#np.random.seed(42)
np.random.shuffle(indices)
ind10 = indices[:10]
sampler10 = SubsetRandomSampler(ind10)
first_try_loader = torch.utils.data.DataLoader(train_dataset, batch_size=1,
sampler=sampler10)
#print(list(first_try_loader, val_loader))
labels=[]
for i, (x, y,_) in enumerate(first_try_loader):
x_gpu = x.to(device)
with torch.no_grad():
prediction = model(x_gpu)
#print(prediction[0].sort()[0])
#plt.title("Resnet 18 labels:\n %s" % print_labels(class_idx, prediction, 3))
#plt.imshow(train_dataset[ind10[i]])
labels.append(print_labels(class_idx, prediction.cpu(), 3))
visualize_samples(dataset, ind10, "resnet18", 10, labels)
#print(labels)
#visualize_samples(orig_dataset, indices, "Samples")
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning; we will try the main ones. The first variant is to replace the last layer with a new one and train only it, freezing the rest.
###Code
import torch.nn as nn
import torch.optim as optim
model_st1 = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
for param in model_st1.parameters():
param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_st1.fc.in_features
model_st1.fc = nn.Linear(num_ftrs, 2)
model_st1 = model_st1.to(device)
parameters = model_st1.fc.parameters() # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model_st1, train_loader, val_loader, loss, optimizer, None, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to replace the last layer with a new one in the same way, but train the whole model end to end.
###Code
model_st2 = models.resnet34(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
num_ftrs = model_st2.fc.in_features
model_st2.fc = nn.Linear(num_ftrs, 2)
model_st2 = model_st2.to(device)
parameters = model_st2.fc.parameters() # Fill the right thing here!
loss = nn.CrossEntropyLoss()
#optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
optimizer = optim.Adam(parameters, lr=0.01, weight_decay = 1e-4)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.9)
#exp_lr_scheduler = ReduceLROnPlateau(optimizer, 'min', patience = 1, factor=0.2)
loss_history, train_history, val_history = train_model(model_st2, train_loader, val_loader,
loss, optimizer, exp_lr_scheduler, 15)
loss_history, train_history, val_history = train_model(model_st2, train_loader, val_loader,
loss, optimizer, exp_lr_scheduler, 15)
torch.save(model_st2.state_dict(), "model_st2.weights")
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. And finally, the last variant we will consider: using different learning rates for the new and old layers.
###Code
model_st3 = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
num_ftrs = model_st3.fc.in_features
model_st3.fc = nn.Sequential(
nn.BatchNorm1d(num_ftrs),
nn.Linear(num_ftrs, 2)
)
# for i, param in enumerate(model_st3.parameters()):
# if i < 5:
# param.requieres_grad=False
model_st3 = model_st3.to(device)
#parameters = model_st3.fc.parameters() # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.Adam([
{'params': model_st3.fc.parameters(), 'lr': 5e-2, 'weight_decay' : 1e-6},
{'params': model_st3.layer4.parameters(), 'lr': 5e-3, 'weight_decay' : 1e-7}
], lr=1e-4, weight_decay = 0)
exp_lr_scheduler = CosineAnnealingLR(optimizer, 15, 1e-6)
loss_history, train_history, val_history = train_model(model_st3, train_loader, val_loader,
loss, optimizer, exp_lr_scheduler, 15)
#for freeze 2 first layers for param in model_st3.parameters()[:2]:
loss_history, train_history, val_history = train_model(model_st3, train_loader, val_loader,
loss, optimizer, exp_lr_scheduler, 15)
loss_history, train_history, val_history = train_model(model_st3, train_loader, val_loader,
loss, optimizer, exp_lr_scheduler, 30)
torch.save(model_st3.state_dict(), "model_st3_19.weights")
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's look at where the model makes mistakes by visualizing its false positives and false negatives. To do that, we run the model over all the examples and compare its predictions with the ground truth labels.
###Code
predictions, gt = evaluate_model(model_st3, val_dataset, val_indices)
#print(predictions, "\n", gt)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
diff = (np.array(predictions) - np.array(gt))
false_positive_indices = np.array(val_indices)[diff == 1]
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = np.array(val_indices)[diff == -1]
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Choose the best model by F1 score. As always, don't forget: - more augmentations! - hyperparameter search - different optimizers - which layers to fine-tune - learning rate annealing - which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.93**.
###Code
# TODO: Train your best model!
best_model = model_st3
# Let's check how it performs on validation set!
predictions1, ground_truth = evaluate_model(model_st3, val_dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions1, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was made specifically for this course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test set labels are not available to participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here you can already use base architectures other than `resnet18`, ensembles, and other model-training tricks. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model = model_st3
model.eval()
predictions = []
image_id = []
with torch.no_grad():
for x,_,id_img in test_loader:
# TODO: Write the code that predicts labels (1 = hotdog, 0 = no hotdog)
# The code should return a list of image ids and the predicted labels in predictions
# image id is the image file name, e.g. '10000.jpg'
x_gpu = x.to(device)
prediction = model(x_gpu)
predictions.extend(torch.argmax(tf.softmax(prediction, dim=1), dim=1).cpu().detach().numpy())  # softmax over the class dimension (dim=1)
image_id.extend(id_img)
del x_gpu, prediction
print(predictions, id_img)
resp = {}
ke = []
va =[]
for i, j in zip(image_id,predictions):
resp[i] = j
for k in sorted(resp.keys()):
# print(k, resp[k])
ke.append(k)
va.append(resp[k])
print(ke)
# This is how to create a csv file that can then be uploaded to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm2.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(ke,va))
# And this is how to download the file from Google Colab
files.download('subm2.csv')
###Output
_____no_output_____
###Markdown
A short introduction to Kaggle for those who haven't heard of the platform before. At its core, Kaggle is a platform for running machine learning competitions. It appeared in 2010 and has arguably become the most popular and well-known of all existing machine learning platforms. Kaggle is not only competitions but also a community of people passionate about machine learning; according to Wikipedia, the number of registered users passed one million in 2017. There are also learning materials, the ability to ask questions, and to share code and ideas - simply a dream. How do competitions work? Usually participants download data for training models (train data) and then make predictions on test data. The training set contains both the data and the correct labels (the values of the dependent variable) so that a model can be trained. The test data contains no answers - our goal is to predict the labels from the available data. A file with answers for every observation in the test set is uploaded to Kaggle and scored according to the competition's chosen metric, and the result is public and shown in a shared table (also called the leaderboard) - to spark the desire to compete and build an even stronger model. In the "real" competitions held on Kaggle there are also cash prizes for the participants who take the top places on the leaderboard. For example, in [this](https://www.kaggle.com/c/zillow-prize-1description) competition the person who took first place received about 1,000,000 dollars. The test data is split randomly in some proportion. While the competition is running, the leaderboard shows scores and rankings only for one part (the Public Leaderboard). When the competition ends, the participants' ranking is computed on the other part of the test data (the Private Leaderboard). You can often see people who held the top places on the public part of the test data end up far from first on the hidden part. Why is it done this way? There are several reasons, but perhaps the most fundamental one is the idea of underfitting and overfitting. It is always possible that our model has tuned itself to a particular sample - but how will it behave on data it has not yet seen? Splitting the test data into public and hidden parts is done to select models with better generalization ability. One of the competitors' slogans is "Trust your local cross-validation" (Trust your CV!). There is a strong temptation to judge your model by the public part of the leaderboard, but a better strategy is to pick the model that gives the best metric in cross-validation on the training set. In our competition the public part of the leaderboard is 30% and the hidden part is 70%. You can make up to two submissions per day, and submissions are scored by the F1 measure. Good luck, and trust your local validation! At the end of the competition you will be able to select 2 of all your submissions - the better of those two will be counted for you on the hidden part of the test data.
###Code
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques in network training is to use weights pretrained on a more general task as a starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hotdogs from not-hotdogs! (more details - https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts: on the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict which class each photo belongs to (hotdog or not). If you are in Google Colab! It lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with its own disk, etc. After 90 minutes of inactivity that environment disappears along with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://storage.googleapis.com/dlcourse_ai/train.zip"
!unzip -q "train.zip"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://storage.googleapis.com/dlcourse_ai/test.zip"
!unzip -q "test.zip"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and return a tensor with the network input, the label, and the image identifier (this makes it easier to prepare the kaggle submission on the test data). Here is a link that explains nicely how to do this with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Your Dataset should report the number of files in the folder as its number of samples and return a tuple of the sample, the label for a given index, and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformations (the `transform` argument) - we will need them!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
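# A hedged sketch of one possible implementation of the dataset above (not the
# official solution). It assumes every file in `folder` is an image and derives
# the label purely from the filename prefixes listed in the task description;
# the class name HotdogOrNotDatasetSketch is sketch-only.
import os
from PIL import Image
class HotdogOrNotDatasetSketch(Dataset):
    def __init__(self, folder, transform=None):
        self.folder = folder
        self.transform = transform
        self.file_names = sorted(os.listdir(folder))
    def __len__(self):
        # Number of samples == number of files in the folder
        return len(self.file_names)
    def __getitem__(self, index):
        img_id = self.file_names[index]
        img = Image.open(os.path.join(self.folder, img_id)).convert('RGB')
        # Positive label if the filename starts with one of the hotdog prefixes
        y = int(img_id.startswith(('frankfurter', 'chili-dog', 'hotdog')))
        if self.transform is not None:
            img = self.transform(img)
        return img, y, img_id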
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training, and splitting it into train and validation. We will train the model on train and check its quality on validation, while the Kaggle In-Class competition will be run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
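# A hedged sketch of compute_accuracy (one straightforward way to fill in the
# stub above): run the model over the loader without gradients, take the argmax
# over the class scores and compare with the labels. The name
# compute_accuracy_sketch is sketch-only.
def compute_accuracy_sketch(model, loader):
    model.eval()
    correct_samples = 0
    total_samples = 0
    with torch.no_grad():
        for x, y, _ in loader:
            x_gpu = x.to(device)
            y_gpu = y.to(device)
            prediction = model(x_gpu)
            _, indices = torch.max(prediction, 1)
            correct_samples += torch.sum(indices == y_gpu).item()
            total_samples += y_gpu.shape[0]
    return float(correct_samples) / total_samples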
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html). We will use ResNet18. To start, let's see what the pretrained network outputs on our images, i.e. which of the 1000 classes it assigns them to. Run the model on 10 random images from the dataset and display them together with the highest-probability classes. The cell already contains code that maps indices of the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
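# A hedged sketch of one way to inspect the pretrained model (it assumes the
# HotdogOrNotDataset above has been implemented): sample a few images from
# train_dataset (already resized and normalized), run them through the network
# and print the top ImageNet class name for each. visualize_samples() can be
# used on the same indices to show the pictures themselves.
imagenet_classes = load_imagenet_classes()
model.eval()
model.to(device)
sample_indices = np.random.choice(len(train_dataset), 10, replace=False)
with torch.no_grad():
    for idx in sample_indices:
        x, _, img_id = train_dataset[idx]
        scores = model(x.unsqueeze(0).to(device))
        top_class = int(scores.argmax(dim=1).item())
        print(img_id, imagenet_classes[top_class])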
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning; we will try the main ones. The first variant is to replace the last layer with a new one and train only it, freezing the rest.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
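# A hedged sketch of the "freeze everything, train a new head" setup (one
# common recipe, not necessarily the intended solution): every pretrained
# parameter is frozen, `fc` is replaced by a fresh 2-class layer, and only the
# new layer's parameters are handed to the optimizer. The names with the
# _last_layer suffix are sketch-only.
model_last_layer = models.resnet18(pretrained=True)
for param in model_last_layer.parameters():
    param.requires_grad = False
model_last_layer.fc = nn.Linear(model_last_layer.fc.in_features, 2)
model_last_layer = model_last_layer.to(device)
optimizer_last_layer = optim.SGD(model_last_layer.fc.parameters(), lr=0.001, momentum=0.9)
# train_model(...) can then be called exactly as in the cell above.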
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to replace the last layer with a new one in the same way, but train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
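# A hedged sketch of the "train the whole model" variant: the same head
# replacement, but all parameters stay trainable and are all passed to the
# optimizer. The names with the _full suffix are sketch-only.
model_full = models.resnet18(pretrained=True)
model_full.fc = nn.Linear(model_full.fc.in_features, 2)
model_full = model_full.to(device)
optimizer_full = optim.SGD(model_full.parameters(), lr=0.001, momentum=0.9)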
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. And finally, the last variant we will consider: using different learning rates for the new and old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
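# A hedged sketch of per-layer learning rates via optimizer parameter groups:
# the new `fc` layer gets lr=0.001 while the pretrained layers get lr=0.0001,
# as suggested in the TODO above. The names with the _groups suffix are
# sketch-only.
model_groups = models.resnet18(pretrained=True)
model_groups.fc = nn.Linear(model_groups.fc.in_features, 2)
model_groups = model_groups.to(device)
pretrained_params = [p for name, p in model_groups.named_parameters()
                     if not name.startswith('fc')]
optimizer_groups = optim.SGD([
    {'params': model_groups.fc.parameters(), 'lr': 1e-3},
    {'params': pretrained_params, 'lr': 1e-4},
], momentum=0.9)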
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's look at where the model makes mistakes by visualizing its false positives and false negatives. To do that, we run the model over all the examples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
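# A hedged sketch of evaluate_model using the SubsetSampler above: iterate over
# the chosen indices in order, collect argmax predictions and the true labels.
# The name evaluate_model_sketch is sketch-only.
def evaluate_model_sketch(model, dataset, indices):
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         sampler=SubsetSampler(indices))
    model.eval()
    predictions = []
    ground_truth = []
    with torch.no_grad():
        for x, y, _ in loader:
            scores = model(x.to(device))
            predictions.extend(scores.argmax(dim=1).cpu().numpy())
            ground_truth.extend(y.numpy())
    return np.array(predictions), np.array(ground_truth)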
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
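# A hedged sketch of one way to compute those index arrays, assuming
# `predictions` and `gt` are 0/1 arrays aligned with `val_indices` (as returned
# by evaluate_model): a false positive is a prediction of 1 where the true
# label is 0, a false negative is the opposite. The *_sketch names are
# sketch-only.
val_indices_arr = np.array(val_indices)
preds_arr = np.array(predictions)
gt_arr = np.array(gt)
fp_indices_sketch = val_indices_arr[(preds_arr == 1) & (gt_arr == 0)]
fn_indices_sketch = val_indices_arr[(preds_arr == 0) & (gt_arr == 1)]
visualize_samples(orig_dataset, fp_indices_sketch, "False positives (sketch)")
visualize_samples(orig_dataset, fn_indices_sketch, "False negatives (sketch)")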
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
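# A hedged sketch using sklearn.metrics, mirroring what assignment1 asked for;
# it assumes binary 0/1 labels in both arrays. The name
# binary_classification_metrics_sketch is sketch-only.
def binary_classification_metrics_sketch(prediction, ground_truth):
    precision = metrics.precision_score(ground_truth, prediction)
    recall = metrics.recall_score(ground_truth, prediction)
    f1 = metrics.f1_score(ground_truth, prediction)
    return precision, recall, f1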
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Choose the best model by F1 score. As always, don't forget: - more augmentations! - hyperparameter search - different optimizers - which layers to fine-tune - learning rate annealing - which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.93**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was made specifically for this course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test set labels are not available to participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here you can already use base architectures other than `resnet18`, ensembles, and other model-training tricks. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts labels (1 = hotdog, 0 = no hotdog)
# The code should return a list of image ids and the predicted labels in predictions
# image id is the image file name, e.g. '10000.jpg'
pass
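# A hedged sketch of how the loop above could be filled in: run the trained
# model over each test batch, take the argmax class as the label and keep the
# image id (the file name) alongside it.
model.to(device)
model.eval()
with torch.no_grad():
    for x, _, id_img in test_loader:
        scores = model(x.to(device))
        batch_labels = scores.argmax(dim=1).cpu().numpy()
        predictions.extend(int(label) for label in batch_labels)
        image_id.extend(id_img)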
# This is how to create a csv file that can then be uploaded to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how to download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques in network training is to use weights pretrained on a more general task as a starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hotdogs from not-hotdogs! (more details - https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts: on the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict which class each photo belongs to (hotdog or not). If you are in Google Colab! It lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with its own disk, etc. After 90 minutes of inactivity that environment disappears along with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and return a tensor with the network input, the label, and the image identifier (this makes it easier to prepare the kaggle submission on the test data). Here is a link that explains nicely how to do this with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Your Dataset should report the number of files in the folder as its number of samples and return a tuple of the sample, the label for a given index, and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformations (the `transform` argument) - we will need them!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training, and splitting it into train and validation. We will train the model on train and check its quality on validation, while the Kaggle In-Class competition will be run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html). We will use ResNet18. To start, let's see what the pretrained network outputs on our images, i.e. which of the 1000 classes it assigns them to. Run the model on 10 random images from the dataset and display them together with the highest-probability classes. The cell already contains code that maps indices of the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Transfer learning - train only the last layer. There are several variants of transfer learning; we will try the main ones. The first variant is to create a new last layer and train only it, freezing the rest.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Transfer learning - train the whole model. The second variant is to add a new layer and train the whole model end to end.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Transfer learning - different learning rates for different layers. And finally, the last variant we will consider: using different learning rates for the new and old layers.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Visualizing the model's metrics and errors. Let's look at where the model makes mistakes by visualizing its false positives and false negatives. To do that, we run the model over all the examples and compare its predictions with the ground truth labels.
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
And now we can visualize the false positives and false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
###Output
_____no_output_____
###Markdown
You can already guess what comes at the end. Train the best model you can based on `resnet18`, changing only the training process. Choose the best model by F1 score. As always, don't forget: - more augmentations! - hyperparameter search - different optimizers - which layers to fine-tune - learning rate annealing - which epoch to stop at. Our goal is to push the F1 score on the validation set above **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Visualize the errors of the best model
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
An optional task with a big star. Take part in the Kaggle In-Class Hot Dog Recognition Challenge! This competition was made specifically for this course, and only people taking the course participate in it. Participants compete on the quality of their trained models by uploading their models' predictions on the test set to the site; the test set labels are not available to participants. More details on the competition rules are below. Those taking the course in person get extra points for a high place in the competition. Here is the link to the competition: https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO: Write the code that predicts labels (1 = hotdog, 0 = no hotdog)
# The code should return a list of image ids and the predicted labels in predictions
# image id is the image file name, e.g. '10000.jpg'
pass
# This is how to create a csv file that can then be uploaded to kaggle
# Expected csv file format:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# And this is how to download the file from Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Assignment 4 - Transfer learning and fine-tuning. One of the most important techniques in network training is to use weights pretrained on a more general task as a starting point and then "fine-tune" them on the specific one. This approach both speeds up training and makes it possible to train effective models on small datasets. In this exercise we will train a classifier that tells hotdogs from not-hotdogs! (more details - https://www.youtube.com/watch?v=ACmydtFDTGs) This assignment requires GPU access, so it can be done either on a machine with an NVidia GPU or in [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
First, let's download the image data; the code in the next cell does that. The data is split into two parts: on the training set, stored in the **train_kaggle** folder, we will build our models, and on the test set **test_kaggle** we will predict which class each photo belongs to (hotdog or not). If you are in Google Colab! It lets you run notebooks with GPU access. The GPUs are not very fast, but they are free! Each notebook gets its own environment with its own disk, etc. After 90 minutes of inactivity that environment disappears along with all its data, so we will have to download the data every time.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Implementing our own Dataset for loading the data. In this task we implement our own Dataset class for data loading. Its job is to load the data from disk and return a tensor with the network input, the label, and the image identifier (this makes it easier to prepare the kaggle submission on the test data). Here is a link that explains nicely how to do this with an example: https://pytorch.org/tutorials/beginner/data_loading_tutorial.html Your Dataset should report the number of files in the folder as its number of samples and return a tuple of the sample, the label for a given index, and the file name. If the file name starts with 'frankfurter', 'chili-dog' or 'hotdog', the label is positive; otherwise it is negative (zero). And don't forget to support input transformations (the `transform` argument) - we will need them!
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Creating the Dataset for training, and splitting it into train and validation. We will train the model on train and check its quality on validation, while the Kaggle In-Class competition will be run on the photos from the test_kaggle folder.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Our usual training functions
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += loss_value
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so the number of batches is i_step + 1
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Using a pretrained network. Most often the pretrained network is one trained on ImageNet, with 1M images and 1000 classes. PyTorch provides such pretrained networks for various architectures (https://pytorch.org/docs/stable/torchvision/models.html). We will use ResNet18. To start, let's see what the pretrained network outputs on our images, i.e. which of the 1000 classes it assigns them to. Run the model on 10 random images from the dataset and display them together with the highest-probability classes. The cell already contains code that maps indices of the output vector to ImageNet classes.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - заменить последний слой на новый и тренировать только его, заморозив остальные.
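Для наглядности - примерный набросок этого варианта (это не официальное решение задания; предполагается, что переменная `device` определена в ячейках выше, а последний слой `resnet18` доступен как атрибут `fc`):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)

# Freeze every parameter of the pretrained backbone
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully-connected layer with a fresh 2-class head;
# parameters of a newly created layer are trainable by default
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

# Optimize only the parameters of the new layer
optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```

Полученные `optimizer` и `loss` дальше можно передать в `train_model` так же, как в ячейке ниже.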
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать всю модельВторой вариант - точно так же заменить последний слой на новый и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
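Подсказка в виде наброска (не официальное решение; `device` определён выше): оптимизаторы PyTorch принимают список групп параметров, и у каждой группы может быть своя скорость обучения.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

# Separate the parameters of the new head from the pretrained backbone
new_params = list(model.fc.parameters())
new_ids = {id(p) for p in new_params}
old_params = [p for p in model.parameters() if id(p) not in new_ids]

# One parameter group per learning rate
optimizer = optim.SGD(
    [{'params': old_params},                # uses the default lr below (0.0001)
     {'params': new_params, 'lr': 0.001}],  # higher lr for the new layer
    lr=0.0001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```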
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
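Ниже - примерный набросок функции `evaluate_model` (не официальное решение; он использует класс `SubsetSampler` из следующей ячейки и переменные `device` и `batch_size`, определённые выше):

```python
import numpy as np
import torch

def evaluate_model_sketch(model, dataset, indices):
    # Sequential, non-shuffled pass over the chosen indices
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         sampler=SubsetSampler(indices))
    model.eval()
    predictions, ground_truth = [], []
    with torch.no_grad():
        for x, y, _ in loader:
            scores = model(x.to(device))
            _, pred = torch.max(scores, 1)
            predictions.append(pred.cpu().numpy())
            ground_truth.append(y.numpy())
    return np.concatenate(predictions) > 0, np.concatenate(ground_truth) > 0
```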
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
Что будет в конце вы уже понялиНатренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки.Выбирайте лучшую модель по F1 score.Как всегда, не забываем:- побольше аугментаций!- перебор гиперпараметров- различные оптимизаторы- какие слои тюнить- learning rate annealing- на какой эпохе останавливатьсяНаша цель - довести F1 score на validation set до значения, большего **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкойПоучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса и в нем участвуют только те, кто проходит курс.В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна.Более подробно о правилах соревнования ниже.Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы.Здесь уже можно использовать и другие базовые архитектуры кроме `resnet18`, и ансамбли, и другие трюки тренировки моделей.Вот ссылка на соревнование:https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Задание 4 - Перенос обучения (transfer learning) и тонкая настройка (fine-tuning)Одна из важнейших техник в тренировке сетей - использовать заранее натренированные веса, полученные на более общей задаче, в качестве начальной точки, а потом "дотренировать" их на конкретной.Такой подход и ускоряет обучение, и позволяет тренировать эффективные модели на маленьких наборах данных.В этом упражнении мы натренируем классификатор, который отличает хотдоги от не хотдогов! (более подробно - https://www.youtube.com/watch?v=ACmydtFDTGs)Это задание требует доступа к GPU, поэтому его можно выполнять либо на компьютере с GPU от NVidia, либо в [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab!В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные!Каждый ноутбук получает свой собственный environment c доступным диском и т.д.Через 90 минут отсутствия активности этот environment пропадает со всеми данными.Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget "https://storage.googleapis.com/dlcourse_ai/train.zip"
!unzip -q "train.zip"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://storage.googleapis.com/dlcourse_ai/test.zip"
!unzip -q "test.zip"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
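Для ориентира - минимальный набросок такого класса (не официальное решение задания; предполагается, что в папке лежат только файлы изображений, а имя класса здесь условное):

```python
import os
from PIL import Image
from torch.utils.data import Dataset

class HotdogOrNotDatasetSketch(Dataset):
    def __init__(self, folder, transform=None):
        self.folder = folder
        self.transform = transform
        self.file_names = sorted(os.listdir(folder))

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, index):
        img_id = self.file_names[index]
        img = Image.open(os.path.join(self.folder, img_id)).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        # Positive label if the file name starts with one of the hotdog words
        y = int(img_id.startswith(("frankfurter", "chili-dog", "hotdog")))
        return img, y, img_id
```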
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += float(loss_value)  # accumulate a plain float so the autograd graph is not kept
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so divide by the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Использование заранее натренированной сети (pretrained network)Чаще всего в качестве заранее натренированной сети используется сеть, натренированная на данных ImageNet с 1M изображений и 1000 классами.PyTorch включает такие натренированные сети для различных архитектур (https://pytorch.org/docs/stable/torchvision/models.html) Мы будем использовать ResNet18.Для начала посмотрим, что выдает уже натренированная сеть на наших картинках. То есть, посмотрим к какому из 1000 классов их отнесет сеть.Запустите модель на 10 случайных картинках из датасета и выведите их вместе с классами с наибольшей вероятностью. В коде уже есть код, который формирует соответствие между индексами в выходном векторе и классами ImageNet.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - заменить последний слой на новый и тренировать только его, заморозив остальные.
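Примерный набросок этого варианта (не официальное решение; `device` определён в ячейках выше, последний слой `resnet18` доступен как атрибут `fc`):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                    # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)      # new 2-class head, trainable by default
model = model.to(device)

# Only parameters with requires_grad=True go into the optimizer
parameters = [p for p in model.parameters() if p.requires_grad]
optimizer = optim.SGD(parameters, lr=0.001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```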
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать всю модельВторой вариант - точно так же заменить последний слой на новый и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
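Набросок-подсказка (не официальное решение; `device` определён выше): параметры можно разделить по имени слоя и передать оптимизатору отдельные группы со своими значениями lr.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

old_params = [p for name, p in model.named_parameters() if not name.startswith('fc.')]
new_params = [p for name, p in model.named_parameters() if name.startswith('fc.')]

optimizer = optim.SGD(
    [{'params': old_params, 'lr': 0.0001},   # pretrained layers
     {'params': new_params, 'lr': 0.001}],   # freshly added layer
    lr=0.0001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```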
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
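Ниже - примерный набросок функции `evaluate_model` (не официальное решение; он использует класс `SubsetSampler` из следующей ячейки и переменные `device` и `batch_size`, определённые выше):

```python
import numpy as np
import torch

def evaluate_model_sketch(model, dataset, indices):
    # Run the model over `indices` in order and collect predictions and labels
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         sampler=SubsetSampler(indices))
    model.eval()
    preds, labels = [], []
    with torch.no_grad():
        for x, y, _ in loader:
            scores = model(x.to(device))
            preds.append(scores.argmax(dim=1).cpu().numpy())
            labels.append(y.numpy())
    return np.concatenate(preds) > 0, np.concatenate(labels) > 0
```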
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
Что будет в конце вы уже понялиНатренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки.Выбирайте лучшую модель по F1 score.Как всегда, не забываем:- побольше аугментаций!- перебор гиперпараметров- различные оптимизаторы- какие слои тюнить- learning rate annealing- на какой эпохе останавливатьсяНаша цель - довести F1 score на validation set до значения, большего **0.93**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкойПоучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса и в нем участвуют только те, кто проходит курс.В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна.Более подробно о правилах соревнования ниже.Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы.Здесь уже можно использовать и другие базовые архитектуры кроме `resnet18`, и ансамбли, и другие трюки тренировки моделей.Вот ссылка на соревнование:https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____
###Markdown
Задание 4 - Перенос обучения (transfer learning) и тонкая настройка (fine-tuning)Одна из важнейших техник в тренировке сетей - использовать заранее натренированные веса, полученные на более общей задаче, в качестве начальной точки, а потом "дотренировать" их на конкретной.Такой подход и ускоряет обучение, и позволяет тренировать эффективные модели на маленьких наборах данных.В этом упражнении мы натренируем классификатор, который отличает хотдоги от не хотдогов! (более подробно - https://www.youtube.com/watch?v=ACmydtFDTGs)Это задание требует доступа к GPU, поэтому его можно выполнять либо на компьютере с GPU от NVidia, либо в [Google Colab](https://colab.research.google.com/).
###Code
import json
import os
import csv
import urllib
from io import BytesIO
from PIL import Image
from socket import timeout
from google.colab import files
#from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
#platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
#accelerator = 'cu80' if path.exists('/opt/bin/nvidia-smi') else 'cpu'
!pip3 install -q torch torchvision
!pip3 install -q Pillow==4.0.0
###Output
_____no_output_____
###Markdown
Сначала давайте скачаем данные с картинками. Это сделает код в следующей ячейке. Данные будут разделены на две части. На обучающей выборке, которая будет храниться в папке **train_kaggle**, мы будем строить наши модели, а на тестовой выборке **test_kaggle** будем предсказывать класс, к которому относится фотография (хотдог или нет). Если вы в Google Colab!В нем можно запускать ноутбуки с доступом к GPU. Они не очень быстрые, зато бесплатные!Каждый ноутбук получает свой собственный environment c доступным диском и т.д.Через 90 минут отсутствия активности этот environment пропадает со всеми данными.Поэтому нам придется скачивать данные каждый раз.
###Code
# Download train data
!wget "https://www.dropbox.com/s/cupinvuotopehty/train.zip?dl=0"
!unzip -q "train.zip?dl=0"
#local_folder = "../../HotDogOrNot/content/train_kaggle/"
train_folder = "train_kaggle/"
# Count number of files in the train folder, should be 4603
print('Number of files in the train folder', len(os.listdir(train_folder)))
# Download test data
!wget "https://www.dropbox.com/s/7xakfl2r9gn5p1j/test.zip?dl=0"
!unzip -q "test.zip?dl=0"
test_folder = "test_kaggle/"
# Count number of files in the test folder, should be 1150
print('Number of files in the test folder', len(os.listdir(test_folder)))
import torch
from torchvision import models
from torch.utils.data import Dataset, SubsetRandomSampler
from torchvision import transforms
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
device = torch.device("cuda:0") # Let's make sure GPU is available!
###Output
_____no_output_____
###Markdown
Имплементируем свой Dataset для загрузки данныхВ этом задании мы реализуем свой собственный класс Dataset для загрузки данных. Его цель - загрузить данные с диска и выдать по ним тензор с входом сети, меткой и идентификатором картинки (так будет проще подготовить сабмит для kaggle на тестовых данных).Вот ссылка, где хорошо объясняется как это делать на примере: https://pytorch.org/tutorials/beginner/data_loading_tutorial.htmlВаш Dataset должен в качестве количества сэмплов выдать количество файлов в папке и уметь выдавать кортеж из сэмпла, метки по индексу и названия файла.Если название файла начинается со слов 'frankfurter', 'chili-dog' или 'hotdog' - метка положительная. Иначе отрицательная (ноль).И не забудьте поддержать возможность трансформации входа (аргумент `transforms`), она нам понадобится!
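Для ориентира - минимальный набросок такого класса (не официальное решение задания; предполагается, что в папке лежат только файлы изображений, имена класса и константы условные):

```python
import os
from PIL import Image
from torch.utils.data import Dataset

HOTDOG_PREFIXES = ("frankfurter", "chili-dog", "hotdog")

class HotdogOrNotDatasetExample(Dataset):
    def __init__(self, folder, transform=None):
        self.folder = folder
        self.transform = transform
        self.file_names = sorted(os.listdir(folder))

    def __len__(self):
        return len(self.file_names)

    def __getitem__(self, index):
        img_id = self.file_names[index]
        img = Image.open(os.path.join(self.folder, img_id)).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        # Label is 1 for hotdog file names, 0 otherwise
        y = 1 if img_id.startswith(HOTDOG_PREFIXES) else 0
        return img, y, img_id
```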
###Code
class HotdogOrNotDataset(Dataset):
def __init__(self, folder, transform=None):
self.transform = transform
# TODO: Your code here!
def __len__(self):
raise Exception("Not implemented!")
def __getitem__(self, index):
# TODO Implement getting item by index
# Hint: os.path.join is helpful!
raise Exception("Not implemented!")
return img, y, img_id
def visualize_samples(dataset, indices, title=None, count=10):
# visualize random 10 samples
plt.figure(figsize=(count*3,3))
display_indices = indices[:count]
if title:
plt.suptitle("%s %s/%s" % (title, len(display_indices), len(indices)))
for i, index in enumerate(display_indices):
x, y, _ = dataset[index]
plt.subplot(1,count,i+1)
plt.title("Label: %s" % y)
plt.imshow(x)
plt.grid(False)
plt.axis('off')
orig_dataset = HotdogOrNotDataset(train_folder)
indices = np.random.choice(np.arange(len(orig_dataset)), 7, replace=False)
visualize_samples(orig_dataset, indices, "Samples")
# Let's make sure transforms work!
dataset = HotdogOrNotDataset(train_folder, transform=transforms.RandomVerticalFlip(0.9))
visualize_samples(dataset, indices, "Samples with flip - a lot should be flipped!")
###Output
_____no_output_____
###Markdown
Создаем Dataset для тренировкиИ разделяем его на train и validation.На train будем обучать модель, на validation проверять ее качество, а соревнование Kaggle In-Class проведем на фотографиях из папки test_kaggle.
###Code
# First, lets load the dataset
train_dataset = HotdogOrNotDataset(train_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
test_dataset = HotdogOrNotDataset(test_folder,
transform=transforms.Compose([
transforms.Resize((224, 224)),
transforms.ToTensor(),
# Use mean and std for pretrained models
# https://pytorch.org/docs/stable/torchvision/models.html
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225])
])
)
batch_size = 64
data_size = len(dataset)
validation_fraction = .2
val_split = int(np.floor((validation_fraction) * data_size))
indices = list(range(data_size))
np.random.seed(42)
np.random.shuffle(indices)
val_indices, train_indices = indices[:val_split], indices[val_split:]
train_sampler = SubsetRandomSampler(train_indices)
val_sampler = SubsetRandomSampler(val_indices)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=train_sampler)
val_loader = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size,
sampler=val_sampler)
# Notice that we create test data loader in a different way. We don't have the labels.
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
Наши обычные функции для тренировки
###Code
def train_model(model, train_loader, val_loader, loss, optimizer, num_epochs):
loss_history = []
train_history = []
val_history = []
for epoch in range(num_epochs):
model.train() # Enter train mode
loss_accum = 0
correct_samples = 0
total_samples = 0
for i_step, (x, y,_) in enumerate(train_loader):
x_gpu = x.to(device)
y_gpu = y.to(device)
prediction = model(x_gpu)
loss_value = loss(prediction, y_gpu)
optimizer.zero_grad()
loss_value.backward()
optimizer.step()
_, indices = torch.max(prediction, 1)
correct_samples += torch.sum(indices == y_gpu)
total_samples += y.shape[0]
loss_accum += float(loss_value)  # accumulate a plain float so the autograd graph is not kept
ave_loss = loss_accum / (i_step + 1)  # i_step is zero-based, so divide by the number of batches
train_accuracy = float(correct_samples) / total_samples
val_accuracy = compute_accuracy(model, val_loader)
loss_history.append(float(ave_loss))
train_history.append(train_accuracy)
val_history.append(val_accuracy)
print("Average loss: %f, Train accuracy: %f, Val accuracy: %f" % (ave_loss, train_accuracy, val_accuracy))
return loss_history, train_history, val_history
def compute_accuracy(model, loader):
"""
Computes accuracy on the dataset wrapped in a loader
Returns: accuracy as a float value between 0 and 1
"""
model.eval() # Evaluation mode
# TODO: Copy implementation from previous assignment
# Don't forget to move the data to device before running it through the model!
raise Exception("Not implemented")
###Output
_____no_output_____
###Markdown
Использование заранее натренированной сети (pretrained network)Чаще всего в качестве заранее натренированной сети используется сеть, натренированная на данных ImageNet с 1M изображений и 1000 классами.PyTorch включает такие натренированные сети для различных архитектур (https://pytorch.org/docs/stable/torchvision/models.html) Мы будем использовать ResNet18.Для начала посмотрим, что выдает уже натренированная сеть на наших картинках. То есть, посмотрим к какому из 1000 классов их отнесет сеть.Запустите модель на 10 случайных картинках из датасета и выведите их вместе с классами с наибольшей вероятностью. В коде уже есть код, который формирует соответствие между индексами в выходном векторе и классами ImageNet.
###Code
# Thanks to https://discuss.pytorch.org/t/imagenet-classes/4923/2
def load_imagenet_classes():
classes_json = urllib.request.urlopen('https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json').read()
classes = json.loads(classes_json)
# TODO: Process it to return dict of class index to name
return { int(k): v[-1] for k, v in classes.items()}
model = models.resnet18(pretrained=True)
# TODO: Run this model on 10 random images of your dataset and visualize what it predicts
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать только последний слойСуществует несколько вариантов переноса обучения, мы попробуем основные. Первый вариант - заменить последний слой на новый и тренировать только его, заморозив остальные.
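Ниже - набросок-иллюстрация этого варианта (не официальное решение; предполагается, что `device` определён в ячейках выше):

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)
model.requires_grad_(False)                     # freeze everything at once
model.fc = nn.Linear(model.fc.in_features, 2)   # fresh trainable output layer
model = model.to(device)

optimizer = optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```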
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Freeze all the layers of this model and add a new output layer
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 2)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - тренировать всю модельВторой вариант - точно так же заменить последний слой на новый и обучать всю модель целиком.
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer and train the whole model
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
parameters = None # Fill the right thing here!
loss = nn.CrossEntropyLoss()
optimizer = optim.SGD( parameters, lr=0.001, momentum=0.9)
loss_history, train_history, val_history = train_model(model, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Перенос обучения (transfer learning) - разные скорости обучения для разных слоевИ наконец последний вариант, который мы рассмотрим - использовать разные скорости обучения для новых и старых слоев
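Набросок-иллюстрация того, как задать группы параметров с разными скоростями обучения (не официальное решение; `device` определён выше):

```python
import itertools
import torch.nn as nn
import torch.optim as optim
from torchvision import models

model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

# Everything except the new head counts as the pretrained backbone
backbone = [m for name, m in model.named_children() if name != 'fc']
backbone_params = itertools.chain.from_iterable(m.parameters() for m in backbone)

optimizer = optim.SGD(
    [{'params': list(backbone_params), 'lr': 0.0001},
     {'params': model.fc.parameters(), 'lr': 0.001}],
    lr=0.0001, momentum=0.9)
loss = nn.CrossEntropyLoss()
```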
###Code
import torch.nn as nn
import torch.optim as optim
model = models.resnet18(pretrained=True)
# TODO: Add a new output layer
# Train new layer with learning speed 0.001 and old layers with 0.0001
# https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
loss = nn.CrossEntropyLoss()
optimizer = None # Hint - look into what PyTorch optimizers let you configure!
loss_history, train_history, val_history = train_model(model_conv, train_loader, val_loader, loss, optimizer, 5)
###Output
_____no_output_____
###Markdown
Визуализируем метрики и ошибки моделиПопробуем посмотреть, где модель ошибается - визуализируем ложные срабатывания (false positives) и ложноотрицательные срабатывания (false negatives).Для этого мы прогоним модель через все примеры и сравним ее с истинными метками (ground truth).
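Примерный набросок такой функции (не официальное решение; он использует класс `SubsetSampler` из следующей ячейки и переменные `device` и `batch_size`, определённые выше):

```python
import numpy as np
import torch

def evaluate_model_sketch(model, dataset, indices):
    loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size,
                                         sampler=SubsetSampler(indices))
    model.eval()
    all_preds, all_labels = [], []
    with torch.no_grad():
        for x, y, _ in loader:
            scores = model(x.to(device))
            all_preds.append(scores.argmax(dim=1).cpu().numpy())
            all_labels.append(y.numpy())
    predictions = np.concatenate(all_preds).astype(bool)
    ground_truth = np.concatenate(all_labels).astype(bool)
    return predictions, ground_truth
```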
###Code
from torch.utils.data.sampler import Sampler
class SubsetSampler(Sampler):
r"""Samples elements with given indices sequentially
Arguments:
data_source (Dataset): dataset to sample from
indices (ndarray): indices of the samples to take
"""
def __init__(self, indices):
self.indices = indices
def __iter__(self):
return (self.indices[i] for i in range(len(self.indices)))
def __len__(self):
return len(self.indices)
def evaluate_model(model, dataset, indices):
"""
Computes predictions and ground truth labels for the indices of the dataset
Returns:
predictions: np array of booleans of model predictions
ground_truth: np array of booleans of actual labels of the dataset
"""
model.eval() # Evaluation mode
# TODO: Evaluate model on the list of indices and capture predictions
# and ground truth labels
# Hint: SubsetSampler above could be useful!
raise Exception("Not implemented")
return predictions, ground_truth
predictions, gt = evaluate_model(model_conv, train_dataset, val_indices)
###Output
_____no_output_____
###Markdown
И теперь можно визуализировать false positives и false negatives.
###Code
# TODO: Compute indices of the false positives on the validation set.
# Note those have to be indices of the original dataset
false_positive_indices = None
visualize_samples(orig_dataset, false_positive_indices, "False positives")
# TODO: Compute indices of the false negatives on the validation set.
# Note those have to be indices of the original dataset
false_negatives_indices = None
visualize_samples(orig_dataset, false_negatives_indices, "False negatives")
import sklearn.metrics as metrics
def binary_classification_metrics(prediction, ground_truth):
# TODO: Implement this function!
# We already did this in assignment1
return precision, recall, f1
precision, recall, f1 = binary_classification_metrics(predictions, gt)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (f1, precision, recall))
###Output
_____no_output_____
###Markdown
Что будет в конце вы уже понялиНатренируйте лучшую модель на основе `resnet18`, меняя только процесс тренировки.Выбирайте лучшую модель по F1 score.Как всегда, не забываем:- побольше аугментаций!- перебор гиперпараметров- различные оптимизаторы- какие слои тюнить- learning rate annealing- на какой эпохе останавливатьсяНаша цель - довести F1 score на validation set до значения, большего **0.9**.
###Code
# TODO: Train your best model!
best_model = None
# Let's check how it performs on validation set!
predictions, ground_truth = evaluate_model(best_model, dataset, val_indices)
precision, recall, f1 = binary_classification_metrics(predictions, ground_truth)
print("F1: %4.3f, P: %4.3f, R: %4.3f" % (precision, recall, f1))
# TODO: Visualize training curve for the best model
###Output
_____no_output_____
###Markdown
Визуализируйте ошибки лучшей модели
###Code
# TODO Visualize false positives and false negatives of the best model on the validation set
###Output
_____no_output_____
###Markdown
Необязательное задание с большой звездочкойПоучаствуйте в Kaggle In-Class Hot Dog Recognition Challenge! Это соревнование сделано специально для курса и в нем участвуют только те, кто проходит курс.В нем участники соревнуются в качестве натренированных моделей, загружая на сайт предсказания своих моделей на тестовой выборке. Разметка тестовой выборки участникам недоступна.Более подробно о правилах соревнования ниже.Те, кто проходят курс лично, за высокое место в соревновании получат дополнительные баллы.Здесь уже можно использовать и другие базовые архитектуры кроме `resnet18`, и ансамбли, и другие трюки тренировки моделей.Вот ссылка на соревнование:https://www.kaggle.com/c/hotdogornot
###Code
image_id = []
predictions = []
model.eval()
for x,_,id_img in test_loader:
# TODO : Напишите код для предсказания меток (1 = есть хотдог, 0 = хотдога нет)
# Код должен возвратить список из id картинки и метку predictions
# image id - это название файла картинки, например '10000.jpg'
pass
# Так можно создать csv файл, чтобы затем загрузить его на kaggle
# Ожидаемый формат csv-файла:
# image_id,label
# 10000.jpg,1
# 10001.jpg,1
# 10002.jpg,0
# 10003.jpg,1
# 10004.jpg,0
with open('subm.csv', 'w') as submissionFile:
writer = csv.writer(submissionFile)
writer.writerow(['image_id', 'label'])
writer.writerows(zip(image_id,predictions))
# А так можно скачать файл с Google Colab
files.download('subm.csv')
###Output
_____no_output_____ |
doc/ipython-notebooks/ica/ecg_sep.ipynb | ###Markdown
Fetal Electrocardiogram Extraction by Source Subspace Separation By Kevin Hughes and Andreas Ziehe This notebook illustrates Blind Source Separation (BSS) on several time-synchronised electrocardiogram (ECG) recordings from the baby's mother, using Independent Component Analysis (ICA) in Shogun, with the goal of extracting the baby's ECG from them. This task has been studied before and has been published in these papers:Cardoso, J. F. (1998, May). Multidimensional independent component analysis. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE.Dirk Callaerts, "Signal Separation Methods based on Singular Value Decomposition and their Application to the Real-Time Extraction of the Fetal Electrocardiogram from Cutaneous Recordings", Ph.D. Thesis, K.U.Leuven - E.E. Dept., Dec. 1989.L. De Lathauwer, B. De Moor, J. Vandewalle, "Fetal Electrocardiogram Extraction by Source Subspace Separation", Proc. IEEE SP / ATHOS Workshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138.In this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox. First we need some data; luckily, an ECG dataset is distributed in the Shogun data repository. So the first step is to change the directory, then we'll load the data.
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import numpy as np
# load data
# Data originally from:
# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html
data = np.loadtxt('foetal_ecg.dat')
# time steps
time_steps = data[:,0]
# abdominal signals
abdominal2 = data[:,1]
abdominal3 = data[:,2]
abdominal4 = data[:,3]
abdominal5 = data[:,4]
abdominal6 = data[:,5]
# thoracic signals
thoracic7 = data[:,6]
thoracic8 = data[:,7]
thoracic9 = data[:,8]
###Output
_____no_output_____
###Markdown
Before we go any further let's take a look at this data by plotting it:
###Code
%matplotlib inline
# plot signals
import matplotlib.pyplot as plt
# abdominal signals
for i in range(1,6):
plt.figure(figsize=(14,3))
plt.plot(time_steps, data[:,i], 'r')
plt.title('Abdominal %d' % (i))
plt.grid()
plt.show()
# thoracic signals
for i in range(6,9):
plt.figure(figsize=(14,3))
plt.plot(time_steps, data[:,i], 'r')
plt.title('Thoracic %d' % (i))
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The peaks in the plot represent a heartbeat, but it's pretty hard to interpret and I definitely can't see two distinct signals, so let's see what we can do with ICA! In general, for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate, and in this case we actually have a lot more (9 mixtures but only 2 sources, mother and baby). There are several different approaches for handling this situation: some algorithms are specifically designed to handle this case, while other times the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the sources and then choose some of the extracted signals manually or using some other known criteria, which is what I'll be showing in this example. Now we create our ICA data set and convert it to a Shogun features type:
###Code
import shogun as sg
# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T
# Convert to features for shogun
mixed_signals = sg.create_features((X).astype(np.float64))
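# The markdown above mentions PCA pre-processing as one option when there are more
# mixtures than sources. A minimal, library-agnostic sketch of that idea is shown here
# for illustration only; the rest of this notebook keeps all 8 channels.
Xc = X - X.mean(axis=1, keepdims=True)            # center each channel
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal directions of the channels
X_reduced = U[:, :4].T @ Xc                        # keep e.g. the 4 strongest components
print('Reduced signal matrix shape:', X_reduced.shape)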
###Output
_____no_output_____
###Markdown
Next we apply the ICA algorithm to separate the sources:
###Code
# Separating with SOBI
sep = sg.create_transformer('SOBI')
sep.put('tau', 1.0*np.arange(0,120))
sep.fit(mixed_signals)
signals = sep.transform(mixed_signals)
S_ = signals.get('feature_matrix')
###Output
_____no_output_____
###Markdown
And we plot the separated signals:
###Code
# Show separation results
# Separated Signal i
for i in range(S_.shape[0]):
plt.figure(figsize=(14,3))
plt.plot(time_steps, S_[i], 'r')
plt.title('Separated Signal %d' % (i+1))
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fetal Electrocardiogram Extraction by Source Subspace Separation By Kevin Hughes and Andreas Ziehe This notebook illustrates Blind Source Separation (BSS) on several time-synchronised electrocardiograms (ECGs) of the baby's mother using Independent Component Analysis (ICA) in Shogun, which is used to extract the baby's ECG from them. This task has been studied before and has been published in these papers: Cardoso, J. F. (1998, May). Multidimensional independent component analysis. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE. Dirk Callaerts, "Signal Separation Methods based on Singular Value Decomposition and their Application to the Real-Time Extraction of the Fetal Electrocardiogram from Cutaneous Recordings", Ph.D. Thesis, K.U.Leuven - E.E. Dept., Dec. 1989. L. De Lathauwer, B. De Moor, J. Vandewalle, "Fetal Electrocardiogram Extraction by Source Subspace Separation", Proc. IEEE SP / ATHOS Workshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138. In this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox. First we need some data; luckily an ECG dataset is distributed in the Shogun data repository, so the first step is to change the directory and then load the data.
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import numpy as np
# load data
# Data originally from:
# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html
data = np.loadtxt('foetal_ecg.dat')
# time steps
time_steps = data[:,0]
# abdominal signals
abdominal2 = data[:,1]
abdominal3 = data[:,2]
abdominal4 = data[:,3]
abdominal5 = data[:,4]
abdominal6 = data[:,5]
# thoracic signals
thoracic7 = data[:,6]
thoracic8 = data[:,7]
thoracic9 = data[:,8]
###Output
_____no_output_____
###Markdown
Before we go any further let's take a look at this data by plotting it:
###Code
%matplotlib inline
# plot signals
import pylab as pl
# abdominal signals
for i in range(1,6):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Abdominal %d' % (i))
pl.grid()
pl.show()
# thoracic signals
for i in range(6,9):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Thoracic %d' % (i))
pl.grid()
pl.show()
###Output
_____no_output_____
###Markdown
The peaks in the plot represent a heartbeat, but it's pretty hard to interpret and I definitely can't see two distinct signals, so let's see what we can do with ICA! In general, for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate, and in this case we actually have a lot more (9 mixtures but only 2 sources, mother and baby). There are several different approaches for handling this situation: some algorithms are specifically designed to handle this case, while other times the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the sources and then choose some of the extracted signals manually or using some other known criteria, which is what I'll be showing in this example. Now we create our ICA data set and convert it to a Shogun features type:
###Code
import shogun as sg
# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T
# Convert to features for shogun
mixed_signals = sg.features((X).astype(np.float64))
###Output
_____no_output_____
###Markdown
Next we apply the ICA algorithm to separate the sources:
###Code
# Separating with SOBI
sep = sg.transformer('SOBI')
sep.put('tau', 1.0*np.arange(0,120))
sep.fit(mixed_signals)
signals = sep.transform(mixed_signals)
S_ = signals.get('feature_matrix')
###Output
_____no_output_____
###Markdown
And we plot the separated signals:
###Code
# Show separation results
# Separated Signal i
for i in range(S_.shape[0]):
pl.figure(figsize=(14,3))
pl.plot(time_steps, S_[i], 'r')
pl.title('Separated Signal %d' % (i+1))
pl.grid()
pl.show()
###Output
_____no_output_____
###Markdown
Fetal Electrocardiogram Extraction by Source Subspace Separation By Kevin Hughes and Andreas Ziehe This notebook illustrates Blind Source Separation (BSS) on several time-synchronised electrocardiograms (ECGs) of the baby's mother using Independent Component Analysis (ICA) in Shogun, which is used to extract the baby's ECG from them. This task has been studied before and has been published in these papers: Cardoso, J. F. (1998, May). Multidimensional independent component analysis. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE. Dirk Callaerts, "Signal Separation Methods based on Singular Value Decomposition and their Application to the Real-Time Extraction of the Fetal Electrocardiogram from Cutaneous Recordings", Ph.D. Thesis, K.U.Leuven - E.E. Dept., Dec. 1989. L. De Lathauwer, B. De Moor, J. Vandewalle, "Fetal Electrocardiogram Extraction by Source Subspace Separation", Proc. IEEE SP / ATHOS Workshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138. In this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox. First we need some data; luckily an ECG dataset is distributed in the Shogun data repository, so the first step is to change the directory and then load the data.
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import numpy as np
# load data
# Data originally from:
# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html
data = np.loadtxt('foetal_ecg.dat')
# time steps
time_steps = data[:,0]
# abdominal signals
abdominal2 = data[:,1]
abdominal3 = data[:,2]
abdominal4 = data[:,3]
abdominal5 = data[:,4]
abdominal6 = data[:,5]
# thoracic signals
thoracic7 = data[:,6]
thoracic8 = data[:,7]
thoracic9 = data[:,8]
###Output
_____no_output_____
###Markdown
Before we go any further let's take a look at this data by plotting it:
###Code
%matplotlib inline
# plot signals
import pylab as pl
# abdominal signals
for i in range(1,6):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Abdominal %d' % (i))
pl.grid()
pl.show()
# thoracic signals
for i in range(6,9):
pl.figure(figsize=(14,3))
pl.plot(time_steps, data[:,i], 'r')
pl.title('Thoracic %d' % (i))
pl.grid()
pl.show()
###Output
_____no_output_____
###Markdown
The peaks in the plot represent a heartbeat, but it's pretty hard to interpret and I definitely can't see two distinct signals, so let's see what we can do with ICA! In general, for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate, and in this case we actually have a lot more (9 mixtures but only 2 sources, mother and baby). There are several different approaches for handling this situation: some algorithms are specifically designed to handle this case, while other times the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the sources and then choose some of the extracted signals manually or using some other known criteria, which is what I'll be showing in this example. Now we create our ICA data set and convert it to a Shogun features type:
###Code
from shogun import features
# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T
# Convert to features for shogun
mixed_signals = features((X).astype(np.float64))
###Output
_____no_output_____
###Markdown
Next we apply the ICA algorithm to separate the sources:
###Code
from shogun import SOBI
# Separating with SOBI
sep = SOBI()
sep.put('tau', 1.0*np.arange(0,120))
signals = sep.apply(mixed_signals)
S_ = signals.get_real_matrix('feature_matrix')
###Output
_____no_output_____
###Markdown
And we plot the separated signals:
###Code
# Show separation results
# Separated Signal i
for i in range(S_.shape[0]):
pl.figure(figsize=(14,3))
pl.plot(time_steps, S_[i], 'r')
pl.title('Separated Signal %d' % (i+1))
pl.grid()
pl.show()
###Output
_____no_output_____
###Markdown
Fetal Electrocardiogram Extraction by Source Subspace Separation By Kevin Hughes and Andreas Ziehe This notebook illustrates Blind Source Separation (BSS) on several time-synchronised electrocardiograms (ECGs) of the baby's mother using Independent Component Analysis (ICA) in Shogun, which is used to extract the baby's ECG from them. This task has been studied before and has been published in these papers: Cardoso, J. F. (1998, May). Multidimensional independent component analysis. In Acoustics, Speech and Signal Processing, 1998. Proceedings of the 1998 IEEE International Conference on (Vol. 4, pp. 1941-1944). IEEE. Dirk Callaerts, "Signal Separation Methods based on Singular Value Decomposition and their Application to the Real-Time Extraction of the Fetal Electrocardiogram from Cutaneous Recordings", Ph.D. Thesis, K.U.Leuven - E.E. Dept., Dec. 1989. L. De Lathauwer, B. De Moor, J. Vandewalle, "Fetal Electrocardiogram Extraction by Source Subspace Separation", Proc. IEEE SP / ATHOS Workshop on HOS, June 12-14, 1995, Girona, Spain, pp. 134-138. In this workbook I am going to show you how a similar result can be obtained using the ICA algorithms available in the Shogun Machine Learning Toolbox. First we need some data; luckily an ECG dataset is distributed in the Shogun data repository, so the first step is to change the directory and then load the data.
###Code
# change to the shogun-data directory
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
os.chdir(os.path.join(SHOGUN_DATA_DIR, 'ica'))
import numpy as np
# load data
# Data originally from:
# http://perso.telecom-paristech.fr/~cardoso/icacentral/base_single.html
data = np.loadtxt('foetal_ecg.dat')
# time steps
time_steps = data[:,0]
# abdominal signals
abdominal2 = data[:,1]
abdominal3 = data[:,2]
abdominal4 = data[:,3]
abdominal5 = data[:,4]
abdominal6 = data[:,5]
# thoracic signals
thoracic7 = data[:,6]
thoracic8 = data[:,7]
thoracic9 = data[:,8]
###Output
_____no_output_____
###Markdown
Before we go any further let's take a look at this data by plotting it:
###Code
%matplotlib inline
# plot signals
import matplotlib.pyplot as plt
# abdominal signals
for i in range(1,6):
plt.figure(figsize=(14,3))
plt.plot(time_steps, data[:,i], 'r')
plt.title('Abdominal %d' % (i))
plt.grid()
plt.show()
# thoracic signals
for i in range(6,9):
plt.figure(figsize=(14,3))
plt.plot(time_steps, data[:,i], 'r')
plt.title('Thoracic %d' % (i))
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The peaks in the plot represent a heartbeat, but it's pretty hard to interpret and I definitely can't see two distinct signals, so let's see what we can do with ICA! In general, for performing Source Separation we need at least as many mixed signals as sources we're hoping to separate, and in this case we actually have a lot more (9 mixtures but only 2 sources, mother and baby). There are several different approaches for handling this situation: some algorithms are specifically designed to handle this case, while other times the data is pre-processed with Principal Component Analysis (PCA). It is also common to simply apply the separation to all the sources and then choose some of the extracted signals manually or using some other known criteria, which is what I'll be showing in this example. Now we create our ICA data set and convert it to a Shogun features type:
###Code
import shogun as sg
# Signal Matrix X
X = (np.c_[abdominal2, abdominal3, abdominal4, abdominal5, abdominal6, thoracic7,thoracic8,thoracic9]).T
# Convert to features for shogun
mixed_signals = sg.features((X).astype(np.float64))
###Output
_____no_output_____
###Markdown
Next we apply the ICA algorithm to separate the sources:
###Code
# Separating with SOBI
sep = sg.transformer('SOBI')
sep.put('tau', 1.0*np.arange(0,120))
sep.fit(mixed_signals)
signals = sep.transform(mixed_signals)
S_ = signals.get('feature_matrix')
###Output
_____no_output_____
###Markdown
And we plot the separated signals:
###Code
# Show separation results
# Separated Signal i
for i in range(S_.shape[0]):
plt.figure(figsize=(14,3))
plt.plot(time_steps, S_[i], 'r')
plt.title('Separated Signal %d' % (i+1))
plt.grid()
plt.show()
###Output
_____no_output_____ |
training/distributed_training/pytorch/data_parallel/mnist/infer_pytorch.ipynb | ###Markdown
SageMaker endpointTo deploy the model you previously trained, you need to create a Sagemaker Endpoint. This is a hosted prediction service that you can use to perform inference. Finding the modelThis notebook uses a stored model if it exists. If you recently ran a training example that use the `%store%` magic, it will be restored in the next cell.Otherwise, you can pass the URI to the model file (a .tar.gz file) in the `model_data` variable.You can find your model files through the [SageMaker console](https://console.aws.amazon.com/sagemaker/home) by choosing **Training > Training jobs** in the left navigation pane. Find your recent training job, choose it, and then look for the `s3://` link in the **Output** pane. Uncomment the model_data line in the next cell that manually sets the model's URI.
###Code
# Retrieve a saved model from a previous notebook run's stored variable
%store -r model_data
# If no model was found, set it manually here.
# model_data = 's3://sagemaker-us-west-2-XXX/pytorch-smdataparallel-mnist-2020-10-16-17-15-16-419/output/model.tar.gz'
print("Using this model: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a model objectYou define the model object by using SageMaker SDK's `PyTorchModel` and pass in the model from the `estimator` and the `entry_point`. The endpoint's entry point for inference is defined by `model_fn` as seen in the following code block that prints out `inference.py`. The function loads the model and sets it to use a GPU, if available.
###Code
!pygmentize code/inference.py
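# For orientation, a model_fn of this kind typically looks roughly like the sketch below
# (hypothetical names; the authoritative version is whatever pygmentize prints from
# code/inference.py above):
#
#   def model_fn(model_dir):
#       device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
#       model = Net()  # the network class used during training (assumed)
#       with open(os.path.join(model_dir, "model.pth"), "rb") as f:
#           model.load_state_dict(torch.load(f, map_location=device))
#       return model.to(device)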
import sagemaker
role = sagemaker.get_execution_role()
from sagemaker.pytorch import PyTorchModel
model = PyTorchModel(model_data=model_data, source_dir='code',
entry_point='inference.py', role=role, framework_version='1.6.0', py_version='py3')
###Output
_____no_output_____
###Markdown
Deploy the model on an endpointYou create a `predictor` by using the `model.deploy` function. You can optionally change both the instance count and instance type.
###Code
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Test the modelYou can test the deployed model using samples from the test set.
###Code
# Download the test set
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
test_set = datasets.MNIST('data', download=True, train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# Randomly sample 16 images from the test set
test_loader = DataLoader(test_set, shuffle=True, batch_size=16)
test_images, _ = iter(test_loader).next()
# inspect the images
import torchvision
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def imshow(img):
img = img.numpy()
img = np.transpose(img, (1, 2, 0))
plt.imshow(img)
return
# unnormalize the test images for displaying
unnorm_images = (test_images * 0.3081) + 0.1307
print("Sampled test images: ")
imshow(torchvision.utils.make_grid(unnorm_images))
# Send the sampled images to endpoint for inference
outputs = predictor.predict(test_images.numpy())
predicted = np.argmax(outputs, axis=1)
print("Predictions: ")
print(predicted.tolist())
###Output
_____no_output_____
###Markdown
CleanupIf you don't intend on trying out inference or to do anything else with the endpoint, you should delete it.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Part 2: Deploy a model trained using SageMaker distributed data parallelUse this notebook after you have completed **Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker's distributed data parallel library** in the notebook pytorch_smdataparallel_mnist_demo.ipynb. To deploy the model you previously trained, you need to create a Sagemaker Endpoint. This is a hosted prediction service that you can use to perform inference. Finding the modelThis notebook uses a stored model if it exists. If you recently ran a training example that use the `%store%` magic, it will be restored in the next cell.Otherwise, you can pass the URI to the model file (a .tar.gz file) in the `model_data` variable.To find the location of model files in the [SageMaker console](https://console.aws.amazon.com/sagemaker/home), do the following: 1. Go to the SageMaker console: https://console.aws.amazon.com/sagemaker/home.1. Select **Training** in the left navigation pane and then Select **Training jobs**. 1. Find your recent training job and choose it.1. In the **Output** section, you should see an S3 URI under **S3 model artifact**. Copy this S3 URI.1. Uncomment the `model_data` line in the next cell that manually sets the model's URI and replace the placeholder value with that S3 URI.
###Code
# Retrieve a saved model from a previous notebook run's stored variable
%store -r model_data
# If no model was found, set it manually here.
# model_data = 's3://sagemaker-us-west-2-XXX/pytorch-smdataparallel-mnist-2020-10-16-17-15-16-419/output/model.tar.gz'
print("Using this model: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a model objectYou define the model object by using the SageMaker Python SDK's `PyTorchModel` and pass in the model from the `estimator` and the `entry_point`. The endpoint's entry point for inference is defined by `model_fn` as seen in the following code block that prints out `inference.py`. The function loads the model and sets it to use a GPU, if available.
###Code
!pygmentize code/inference.py
import sagemaker
role = sagemaker.get_execution_role()
from sagemaker.pytorch import PyTorchModel
model = PyTorchModel(
model_data=model_data,
source_dir="code",
entry_point="inference.py",
role=role,
framework_version="1.6.0",
py_version="py3",
)
###Output
_____no_output_____
###Markdown
Deploy the model on an endpointYou create a `predictor` by using the `model.deploy` function. You can optionally change both the instance count and instance type.
###Code
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
Test the modelYou can test the depolyed model using samples from the test set.
###Code
# Download the test set
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
test_set = datasets.MNIST(
"data",
download=True,
train=False,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
)
# Randomly sample 16 images from the test set
test_loader = DataLoader(test_set, shuffle=True, batch_size=16)
test_images, _ = iter(test_loader).next()
# inspect the images
import torchvision
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def imshow(img):
img = img.numpy()
img = np.transpose(img, (1, 2, 0))
plt.imshow(img)
return
# unnormalize the test images for displaying
unnorm_images = (test_images * 0.3081) + 0.1307
print("Sampled test images: ")
imshow(torchvision.utils.make_grid(unnorm_images))
# Send the sampled images to endpoint for inference
outputs = predictor.predict(test_images.numpy())
predicted = np.argmax(outputs, axis=1)
print("Predictions: ")
print(predicted.tolist())
###Output
_____no_output_____
###Markdown
CleanupIf you don't intend on trying out inference or to do anything else with the endpoint, you should delete it.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Part 2: Deploy a model trained using SageMaker distributed data parallelUse this notebook after you have completed **Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker's distributed data parallel library** in the notebook pytorch_smdataparallel_mnist_demo.ipynb. To deploy the model you previously trained, you need to create a Sagemaker Endpoint. This is a hosted prediction service that you can use to perform inference. Finding the modelThis notebook uses a stored model if it exists. If you recently ran a training example that use the `%store%` magic, it will be restored in the next cell.Otherwise, you can pass the URI to the model file (a .tar.gz file) in the `model_data` variable.To find the location of model files in the [SageMaker console](https://console.aws.amazon.com/sagemaker/home), do the following: 1. Go to the SageMaker console: https://console.aws.amazon.com/sagemaker/home.1. Select **Training** in the left navigation pane and then Select **Training jobs**. 1. Find your recent training job and choose it.1. In the **Output** section, you should see an S3 URI under **S3 model artifact**. Copy this S3 URI.1. Uncomment the `model_data` line in the next cell that manually sets the model's URI and replace the placeholder value with that S3 URI.
###Code
# Retrieve a saved model from a previous notebook run's stored variable
%store -r model_data
# If no model was found, set it manually here.
# model_data = 's3://sagemaker-us-west-2-XXX/pytorch-smdataparallel-mnist-2020-10-16-17-15-16-419/output/model.tar.gz'
print("Using this model: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a model objectYou define the model object by using the SageMaker Python SDK's `PyTorchModel` and pass in the model from the `estimator` and the `entry_point`. The endpoint's entry point for inference is defined by `model_fn` as seen in the following code block that prints out `inference.py`. The function loads the model and sets it to use a GPU, if available.
###Code
!pygmentize code/inference.py
import sagemaker
role = sagemaker.get_execution_role()
from sagemaker.pytorch import PyTorchModel
model = PyTorchModel(
model_data=model_data,
source_dir="code",
entry_point="inference.py",
role=role,
framework_version="1.6.0",
py_version="py3",
)
###Output
_____no_output_____
###Markdown
Deploy the model on an endpointYou create a `predictor` by using the `model.deploy` function. You can optionally change both the instance count and instance type.
###Code
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
###Output
_____no_output_____
###Markdown
Test the modelYou can test the deployed model using samples from the test set.
###Code
# Download the test set
import torchvision
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
from packaging.version import Version
# Set the source to download MNIST data from
TORCHVISION_VERSION = "0.9.1"
if Version(torchvision.__version__) < Version(TORCHVISION_VERSION):
# Set path to data source and include checksum key to make sure data isn't corrupted
datasets.MNIST.resources = [
(
"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/train-images-idx3-ubyte.gz",
"f68b3c2dcbeaaa9fbdd348bbdeb94873",
),
(
"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/train-labels-idx1-ubyte.gz",
"d53e105ee54ea40749a09fcbcd1e9432",
),
(
"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/t10k-images-idx3-ubyte.gz",
"9fb629c4189551a2d022fa330f9573f3",
),
(
"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/t10k-labels-idx1-ubyte.gz",
"ec29112dd5afa0611ce80d1b7f02629c",
),
]
else:
# Set path to data source
datasets.MNIST.mirrors = [
"https://sagemaker-sample-files.s3.amazonaws.com/datasets/image/MNIST/"
]
test_set = datasets.MNIST(
"data",
download=True,
train=False,
transform=transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
),
)
# Randomly sample 16 images from the test set
test_loader = DataLoader(test_set, shuffle=True, batch_size=16)
test_images, _ = iter(test_loader).next()
# inspect the images
import torchvision
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def imshow(img):
img = img.numpy()
img = np.transpose(img, (1, 2, 0))
plt.imshow(img)
return
# unnormalize the test images for displaying
unnorm_images = (test_images * 0.3081) + 0.1307
print("Sampled test images: ")
imshow(torchvision.utils.make_grid(unnorm_images))
# Send the sampled images to endpoint for inference
outputs = predictor.predict(test_images.numpy())
predicted = np.argmax(outputs, axis=1)
print("Predictions: ")
print(predicted.tolist())
###Output
_____no_output_____
###Markdown
CleanupIf you don't intend on trying out inference or to do anything else with the endpoint, you should delete it.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____
###Markdown
Part 2: Deploy a model trained using SageMaker distributed data parallelUse this notebook after you have completed **Part 1: Distributed data parallel MNIST training with PyTorch and SageMaker's distributed data parallel library** in the notebook pytorch_smdataparallel_mnist_demo.ipynb. To deploy the model you previously trained, you need to create a Sagemaker Endpoint. This is a hosted prediction service that you can use to perform inference. Finding the modelThis notebook uses a stored model if it exists. If you recently ran a training example that use the `%store%` magic, it will be restored in the next cell.Otherwise, you can pass the URI to the model file (a .tar.gz file) in the `model_data` variable.To find the location of model files in the [SageMaker console](https://console.aws.amazon.com/sagemaker/home), do the following: 1. Go to the SageMaker console: https://console.aws.amazon.com/sagemaker/home.1. Select **Training** in the left navigation pane and then Select **Training jobs**. 1. Find your recent training job and choose it.1. In the **Output** section, you should see an S3 URI under **S3 model artifact**. Copy this S3 URI.1. Uncomment the `model_data` line in the next cell that manually sets the model's URI and replace the placeholder value with that S3 URI.
###Code
# Retrieve a saved model from a previous notebook run's stored variable
%store -r model_data
# If no model was found, set it manually here.
# model_data = 's3://sagemaker-us-west-2-XXX/pytorch-smdataparallel-mnist-2020-10-16-17-15-16-419/output/model.tar.gz'
print("Using this model: {}".format(model_data))
###Output
_____no_output_____
###Markdown
Create a model objectYou define the model object by using the SageMaker Python SDK's `PyTorchModel` and pass in the model from the `estimator` and the `entry_point`. The endpoint's entry point for inference is defined by `model_fn` as seen in the following code block that prints out `inference.py`. The function loads the model and sets it to use a GPU, if available.
###Code
!pygmentize code/inference.py
import sagemaker
role = sagemaker.get_execution_role()
from sagemaker.pytorch import PyTorchModel
model = PyTorchModel(model_data=model_data, source_dir='code',
entry_point='inference.py', role=role, framework_version='1.6.0', py_version='py3')
###Output
_____no_output_____
###Markdown
Deploy the model on an endpointYou create a `predictor` by using the `model.deploy` function. You can optionally change both the instance count and instance type.
###Code
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
###Output
_____no_output_____
###Markdown
Test the modelYou can test the deployed model using samples from the test set.
###Code
# Download the test set
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
test_set = datasets.MNIST('data', download=True, train=False,
transform=transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
]))
# Randomly sample 16 images from the test set
test_loader = DataLoader(test_set, shuffle=True, batch_size=16)
test_images, _ = iter(test_loader).next()
# inspect the images
import torchvision
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def imshow(img):
img = img.numpy()
img = np.transpose(img, (1, 2, 0))
plt.imshow(img)
return
# unnormalize the test images for displaying
unnorm_images = (test_images * 0.3081) + 0.1307
print("Sampled test images: ")
imshow(torchvision.utils.make_grid(unnorm_images))
# Send the sampled images to endpoint for inference
outputs = predictor.predict(test_images.numpy())
predicted = np.argmax(outputs, axis=1)
print("Predictions: ")
print(predicted.tolist())
###Output
_____no_output_____
###Markdown
CleanupIf you don't intend on trying out inference or to do anything else with the endpoint, you should delete it.
###Code
predictor.delete_endpoint()
###Output
_____no_output_____ |
Analise_Dados_IFNMG (3).ipynb | ###Markdown
Analyzing Data
###Code
import pandas as pd
import numpy as np
from unidecode import unidecode
from datetime import datetime
###Output
_____no_output_____
###Markdown
Obtaining the data
###Code
# Importing data for analysis.
file = 'Estudantes IFSP - Maio-2018.1.csv'
dadosEstudantes = pd.read_csv(
file, sep=';', engine='python', encoding='iso-8859-1')
dadosEstudantes.head(5)
###Output
_____no_output_____
###Markdown
Cleaning the data
###Code
# Getting the dataset columns
dadosEstudantes.columns
# Dataset size
dadosEstudantes.shape
# Converting the column to Datetime format
dadosEstudantes['Data de Início do Ciclo'] = pd.to_datetime(
dadosEstudantes['Data de Início do Ciclo'], errors='coerce')
# Converting the column to Datetime format
dadosEstudantes['Data de Fim Previsto para o Ciclo'] = pd.to_datetime(
dadosEstudantes['Data de Fim Previsto para o Ciclo'], errors='coerce')
# Filling NaN (missing) fields with zero
dadosEstudantes.fillna(0, inplace=True)
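# Quick check (illustrative): confirm the datetime conversion worked and that no missing values remain
print(dadosEstudantes.dtypes)
print(dadosEstudantes.isnull().sum().sum())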
###Output
_____no_output_____ |
1_load_process_who_gho_data.ipynb | ###Markdown
Inspect and drop duplicates
###Code
df_dups = df.loc[df.duplicated(subset=["Country"], keep=False)]
display(df_dups)
df = df.drop_duplicates(keep="first")
###Output
_____no_output_____
###Markdown
Reshape
###Code
df = df.set_index(["Country"]).unstack().reset_index()
df.columns = new_col_names
df = df.dropna()
###Output
_____no_output_____
###Markdown
Add country codes
###Code
codes = []
for index, row in df.iterrows():
country = row["Country"].split(r" (")[0]
try:
code = pycountry.countries.get(name=country).alpha_3
except:
if row["Country"] in list(d.keys()):
code = d[row["Country"]]
else:
code = row["Country"]
codes.append(code)
df["Code"] = codes
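# Note (for reference only): pycountry also provides a fuzzy lookup for close-but-inexact
# names, e.g. pycountry.countries.search_fuzzy("Viet Nam")[0].alpha_3; the manual mapping
# dict `d` above already handles the mismatches present in this dataset.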
display(df.head())
display(df.tail())
###Output
_____no_output_____
###Markdown
Wrapping this code in a helper function
###Code
# df = combine_process_data(files, new_col_names, d)
# display(df.head())
# display(df.tail())
# display(df.dtypes.to_frame())
###Output
_____no_output_____
###Markdown
Export processed data to disk
###Code
# df.to_parquet(
# processed_data_file_path, compression="gzip", index=False, engine="pyarrow"
# )
###Output
_____no_output_____
###Markdown
Example of loading processed data while filtering rows
###Code
def bokeh_create_multiline_plot(
data,
x,
x_start,
x_end,
y_names,
lw=2,
ptitle="Title",
t_str="",
t_loc="above",
axis_tick_font_size="12pt",
plot_titleFontSize="14pt",
legend_axis_gap=5,
fig_size=(700, 400),
):
col_names = data.column_names[1:]
color = Category20[len(data.column_names[1:])]
p = figure(
plot_width=fig_size[0], plot_height=fig_size[1], toolbar_location=None, tools=""
)
p_dict = dict()
for col, c, col_name in zip(y_names, color, col_names):
p_dict[col_name] = p.line(
x,
col,
source=data,
color=c,
line_width=lw,
line_alpha=1.0,
line_color=c,
)
p.add_tools(
HoverTool(
renderers=[p_dict[col_name]],
# show_arrow=False,
# line_policy="next",
tooltips=[(x, f"@{x}"), (col, f"@{col}")],
)
)
legend = Legend(
items=[(x, [p_dict[x]]) for x in p_dict],
location="top_right",
orientation="vertical",
)
p.hover.point_policy = "follow_mouse"
p.add_layout(legend, "right")
p.legend.border_line_alpha = 0
p.legend.background_fill_alpha = 0
p.xaxis.major_label_text_font_size = axis_tick_font_size
p.yaxis.major_label_text_font_size = axis_tick_font_size
p.title.text_font_size = plot_titleFontSize
p.legend.label_text_font_size = axis_tick_font_size
p.title.text = f"{ptitle}, {x_start}-{x_end}"
p.legend.border_line_width = 0
p.legend.padding = 0
p.legend.margin = legend_axis_gap
show(p)
def get_line_chart_data(
processed_data_file_path,
year_start=1961,
year_end=2010,
groupby_col="Country",
countries=["Austria"],
x="Year",
z="Value",
):
df = pd.read_parquet(
processed_data_file_path,
filters=[("Year", ">=", year_start), ("Year", "<=", year_end)],
engine="pyarrow",
)
df = df.loc[df[groupby_col].isin(countries)]
df = df.pivot(index=x, columns=[groupby_col], values=z)
    # normalize each country's series by its value in the second year so curves are comparable
    df = df.loc[:, :].div(df.iloc[1, :])
return df
x_start = 1960
x_end = 2018
groupby_col = "Code"
nlargest = 15
x = "Year"
z = "Value"
df_all = pd.read_parquet(processed_data_file_path, engine="pyarrow")
countries = df_all.groupby([groupby_col])[z].mean().nlargest(nlargest).index.tolist()
countries
source = ColumnDataSource(data=dict(Year=[], Austria=[]))
df = get_line_chart_data(
processed_data_file_path,
x_start,
x_end,
groupby_col,
countries,
x,
z,
)
source.data = df
bokeh_create_multiline_plot(
source,
"Year",
x_start,
x_end,
countries,
2,
ptitle="Relative Alcohol Consumption",
axis_tick_font_size="12pt",
plot_titleFontSize="14pt",
legend_axis_gap=5,
fig_size=(700, 400),
)
###Output
_____no_output_____ |
docs/examples/driver_examples/Qcodes example with Decadac.ipynb | ###Markdown
Qcodes example with Decadac
###Code
%matplotlib nbagg
import qcodes as qc
from qcodes.instrument_drivers.Harvard.Decadac import Decadac
deca = Decadac('Decadac', port=4, slot=0)
###Output
_____no_output_____
###Markdown
The most used feature of the Decadac is its voltage output capability.There are four parameters corresponding to the channels, i.e. deca.ch0_voltage, deca.ch1_voltage, etc.
###Code
deca.ch0_voltage.set(1)
deca.ch0_voltage.get()
###Output
_____no_output_____
###Markdown
The Decadac has a **global** setting (i.e. shared by all channels and slots) controlling whether the voltages jump or gradually ramp to the set voltage.
###Code
deca.set_ramping(True, time=1000) # time in ms
deca.ch0_voltage.set(0)
deca.set_ramping(False)
###Output
_____no_output_____
###Markdown
The precision of the Decadac may be rather poor, so one might want to apply a correctional offset to each channel.
###Code
deca.ch0_offset.set(-0.1)
deca.ch0_voltage.set(1)
deca.ch0_voltage.get()
###Output
_____no_output_____
###Markdown
It is possible to toggle the Decadac output ON/OFF without changing anything else.
###Code
deca.mode.set(0) # 0: output off, 1: output on
deca.close()
###Output
_____no_output_____ |
Week 8/SVM_ Maximum margin separating hyperplane.ipynb | ###Markdown
Plot the maximum margin separating hyperplane within a two-class separable dataset using a Support Vector Machine classifier with linear kernel.
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm
from sklearn.datasets import make_blobs
# we create 40 separable points
X, y = make_blobs(n_samples=40, centers=2, random_state=6)
# fit the model, don't regularize for illustration purposes
clf = svm.SVC(kernel="linear", C=1000)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=30, cmap=plt.cm.Paired)
# plot the decision function
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# plot decision boundary and margins
ax.contour(
XX, YY, Z, colors="k", levels=[-1, 0, 1], alpha=0.5, linestyles=["--", "-", "--"]
)
# plot support vectors
ax.scatter(
clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
s=100,
linewidth=1,
facecolors="none",
edgecolors="k",
)
plt.show()
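# The fitted classifier can also be queried directly; for example (an arbitrary
# illustrative point, not part of the original example):
print("Predicted class:", clf.predict([[8.0, -8.0]]))
print("Signed distance to the hyperplane:", clf.decision_function([[8.0, -8.0]]))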
###Output
_____no_output_____ |
Chapter02/FeatureImportance_SensitivityAnalysis.ipynb | ###Markdown
Feature Importance and Sensitivity Analysis CHAPTER 02 - *Model Explainability Methods*From **Applied Machine Learning Explainability Techniques** by [**Aditya Bhattacharya**](https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/), published by **Packt** ObjectiveIn this notebook, we will try to implement some of the concepts related to Feature Importance and Sensitivity Analysis, which are part of the influence-based explainability methods discussed in Chapter 2 - Model Explainability Methods. Installing the modules Install the following libraries in Google Colab or your local environment, if not already installed.
###Code
!pip install --upgrade pandas numpy matplotlib seaborn scikit-learn
###Output
_____no_output_____
###Markdown
Loading the modules
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="whitegrid")
import warnings
warnings.filterwarnings("ignore")
np.random.seed(5)
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier as RFC
import tensorflow as tf
tf.get_logger().setLevel(40) # suppress deprecation messages
from tensorflow.keras.layers import Dense, Input, Embedding, Concatenate, Reshape, Dropout, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical
###Output
_____no_output_____
###Markdown
About the data Kaggle Data Source Link - [Kaggle | Pima Indians Diabetes Database](https://www.kaggle.com/uciml/pima-indians-diabetes-database?select=diabetes.csv)The Pima Indian Diabetes dataset is used to predict whether or not the diagnosed patient has diabetes, which is also a binary classification problem, based on the various diagnostic feature values provided. The dataset used for this analysis is obtained from Kaggle, although it originally comes from the National Institute of Diabetes and Digestive and Kidney Diseases. The patients in this dataset are female, at least 21 years old, and of Pima Indian heritage. The datasets used might be derived and transformed from the original datasets; the sources of the original datasets are mentioned, and I would strongly recommend looking at the original data for more details on the data description and for a more detailed analysis. Loading the data
###Code
data = pd.read_csv('Datasets/diabetes.csv')
data.head()
data.shape
data.hist(layout = (2,5), figsize=(15,8), color = 'r')
print('Data Distribution')
print('This looks like a fairly imbalanced dataset')
sns.countplot(x="Outcome", data=data, palette="bwr")
plt.show()
data['Outcome'].value_counts()
print('Percentage of data belonging to class 1 is',int((268/768)*100))
print('Percentage of data belonging to class 0 is',int((500/768)*100))
###Output
Percentage of data belonging to class 1 is 34
Percentage of data belonging to class 0 is 65
###Markdown
We do see that the dataset is imbalanced, but we will not focus much on solving the class imbalance problem. Let's do some custom EDA steps to understand the data better. We need to do a null check, duplication check, noise removal, and correlation check, along with some feature engineering and normalization, before applying feature importance and sensitivity analysis. Null Check
###Code
data.isnull().sum()
###Output
_____no_output_____
###Markdown
There are no null or missing values. Let's do a redundancy check now. Duplication Check
###Code
data.duplicated().any()
###Output
_____no_output_____
###Markdown
Data Description
###Code
data.describe()
###Output
_____no_output_____
###Markdown
There are two main observations from this data description: 1. There are a few records for which BMI, Glucose, and Blood Pressure are 0. It might be possible that these values were not recorded properly during the data capture process, so it is better not to consider these rows for training. 2. There are some outliers observed from the max values in no. of Pregnancies, Skin Thickness, Insulin, etc., which need further inspection. Data Correlation
###Code
print(data.corr()['Outcome'])
sns.heatmap(data.corr())
plt.show()
###Output
Pregnancies 0.221898
Glucose 0.466581
BloodPressure 0.065068
SkinThickness 0.074752
Insulin 0.130548
BMI 0.292695
DiabetesPedigreeFunction 0.173844
Age 0.238356
Outcome 1.000000
Name: Outcome, dtype: float64
###Markdown
It is observed that the features do not have a strong correlation (>0.5) with the target variable. Outlier Check
###Code
data[(data['BMI'] == 0) & (data['Glucose'] == 0) & (data['BloodPressure'] == 0)]
data[(data['Glucose'] == 0)]
###Output
_____no_output_____
###Markdown
From the above observation, it looks like the data does have a lot of noise, as there are multiple cases where some of the key features are 0. Following human intuition, since the blood glucose level is one of the key features for detecting diabetes, I would consider dropping all records where the Glucose value is 0. Noise removal
###Code
cleaned_data = data[(data['Glucose'] != 0)]
cleaned_data.shape
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
feature_engg_data = cleaned_data.copy()
outlier_data = cleaned_data.copy()
factor = 3
# Include this only for columns with suspected outliers
# Using a factor of 3, following Nelson's rule 1 to remove outliers - https://en.wikipedia.org/wiki/Nelson_rules
# Only for non-categorical fields
columns_to_include = ['Pregnancies','Glucose','BloodPressure','SkinThickness','Insulin','BMI','DiabetesPedigreeFunction']
for column in columns_to_include:
upper_lim = feature_engg_data[column].mean () + feature_engg_data[column].std () * factor
lower_lim = feature_engg_data[column].mean () - feature_engg_data[column].std () * factor
feature_engg_data = feature_engg_data[(feature_engg_data[column] < upper_lim) & (feature_engg_data[column] > lower_lim)]
outlier_data = pd.concat([outlier_data, feature_engg_data]).drop_duplicates(keep=False)
print(feature_engg_data.shape)
print(outlier_data.shape)
###Output
(688, 9)
(75, 9)
###Markdown
In the following section, in order to build the model, we need to normalize the data and split it into train, validation, and test datasets. We will keep the outlier data separate, in case we want to see how our model performs on the outlier dataset. Normalization
###Code
def normalize_data(df):
val = df.values
min_max_normalizer = preprocessing.MinMaxScaler()
norm_val = min_max_normalizer.fit_transform(val)
df2 = pd.DataFrame(norm_val, columns=df.columns)
return df2
norm_feature_engg_data = normalize_data(feature_engg_data)
norm_outlier_data = normalize_data(outlier_data)
###Output
_____no_output_____
###Markdown
In the previous steps we have taken some fundamental measures to understand and prepare the data so that it can be used for further modeling. Let's split the data, and then we will apply the feature importance and sensitivity analysis methods for influence-based explainability. Train-Test split
###Code
input_data = norm_feature_engg_data.drop(['Outcome'],axis='columns')
targets =norm_feature_engg_data.filter(['Outcome'],axis='columns')
x, x_test, y, y_test = train_test_split(input_data,targets,test_size=0.1,train_size=0.9, random_state=5)
x_train, x_valid, y_train, y_valid = train_test_split(x,y,test_size = 0.22,train_size =0.78, random_state=5)
###Output
_____no_output_____
###Markdown
Feature Importance Feature importance is a technique that allocates a particular score to the input features present in the dataset based on the usefulness of the features in predicting the target value. We will apply the random forest based feature importance method using the Scikit-Learn Python framework. This is a model-specific global explainability method, but it is a quite human-friendly explanation method for structured data, and it is quite consistent with how human beings try to inspect or interpret natural phenomena in life.
###Code
def apply_RFC(X,y,columns):
rfc = RFC(n_estimators=500,min_samples_leaf=round(len(X)*.01),random_state=5,n_jobs=-1)
imp_features = rfc.fit(X,y).feature_importances_
imp_features = pd.DataFrame(imp_features,columns=['Feature Importance'],index=columns)
imp_features.sort_values(by=['Feature Importance'],inplace=True,ascending=False)
imp_features['Moving Sum'] = imp_features['Feature Importance'].cumsum()
imp_features = imp_features[imp_features['Moving Sum']<=0.95]
top_features = imp_features.index.tolist()
return imp_features, top_features
important_features, top_features = apply_RFC(x,y, data.columns.drop('Outcome'))
sns.barplot(important_features['Feature Importance'], important_features.index, palette = 'tab10')
plt.title('Random Forest Feature Importance for: '+"Diabetes Dataset")
plt.show()
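# As a cross-check (a sketch, not part of the book's original workflow), scikit-learn's
# model-agnostic permutation importance can be computed for a similarly configured forest:
from sklearn.inspection import permutation_importance
rfc_check = RFC(n_estimators=500, random_state=5, n_jobs=-1).fit(x, y.values.ravel())
perm = permutation_importance(rfc_check, x, y.values.ravel(), n_repeats=5, random_state=5)
print(pd.Series(perm.importances_mean, index=x.columns).sort_values(ascending=False))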
###Output
_____no_output_____
###Markdown
From the above plot we can observe that Glucose, i.e. the blood glucose level, is the most influential feature for determining the presence of diabetes. BMI and Age can also play a vital role. This observation is quite consistent with our prior knowledge of diabetes detection, and it is very useful for model interpretability since, logically speaking, blood glucose carries the most weight in detecting diabetes. In this notebook, for the sake of simplicity and considering the level of understanding of any beginner learner, I have demonstrated only one way of doing feature importance, but there are multiple other ways to perform feature importance on structured data. I found a very good resource to follow, which provides much deeper insight into the feature importance technique and demonstrates multiple ways to perform feature importance on your dataset: [How to Calculate Feature Importance With Python | Machine Learning Mastery](https://machinelearningmastery.com/calculate-feature-importance-with-python/). Sensitivity Analysis Sensitivity analysis is a quantitative process that approximates uncertainty in forecasts by altering the assumptions made about important input features used by the forecasting model. In sensitivity analysis, the individual input feature variables are increased or decreased to assess the impact of the individual features on the target outcome. I have observed experts applying sensitivity analysis to train and re-train a model, but here we will apply sensitivity analysis on a trained model to observe the sensitivity of the features towards the target outcome and to show how it can be used as a model-agnostic local explainability method to explain black-box models. We will apply the 6-σ (six sigma) variation rule for sensitivity analysis to the classification problem of diabetes detection.
###Code
# For this we need a trained model. So, let's train a model first, may be with a neural network architecture.
def model():
'''
Simple 3 layered Neural Network model for binary classification
'''
inp = Input(shape=(x_train.shape[1],))
x = Dense(40, activation='relu')(inp)
x = Dense(40, activation='relu')(x)
op = Dense(2, activation='softmax')(x)
model = Model(inputs=inp, outputs=op)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
model = model()
model.fit(x_train, to_categorical(y_train), batch_size=64, epochs=300, verbose=0)
# Evaluate the trained model
model.evaluate(x_test, to_categorical(y_test))[1]
###Output
3/3 [==============================] - 0s 2ms/step - loss: 0.5551 - accuracy: 0.8406
###Markdown
Although we are not concerned about the final model accuracy, we do have a decent model to try sensitivity analysis on. Next, we will take a query instance and apply the 6-σ (six sigma) variation rule for sensitivity analysis to it.
###Code
query_instance = x_test.iloc[5].values.reshape((1,) + x_test.iloc[5].shape)
print("Let's take a look at the normalized query data instance in which all the features are in the range of (0.0 - 1.0):" )
df_query = pd.DataFrame(query_instance, columns = input_data.columns)
df_query
predicted_outcome = np.argmax(model.predict(query_instance))
true_label = int(y_test.iloc[5][0])
print(f" The true label is : {true_label}")
print(f" The predicted outcome is : {predicted_outcome}")
###Output
The true label is : 1
The predicted outcome is : 1
###Markdown
We can clearly see the model is correctly predicting the presence of diabetes. Now, let's see if the prediction changes when we perform sensitivity analysis, one by one, for all the features. The standard deviation (σ) of each feature can be calculated on the normalized training data, since we will be using the normalized data for the prediction part.
###Code
sigma_glucose = np.std(x['Glucose'])
sigma_bmi = np.std(x['BMI'])
sigma_age = np.std(x['Age'])
sigma_dpf = np.std(x['DiabetesPedigreeFunction'])
sigma_pregnancies = np.std(x['Pregnancies'])
sigma_insulin = np.std(x['Insulin'])
sigma_bp = np.std(x['BloodPressure'])
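# For reference, print the per-feature standard deviations used below for the ±kσ perturbations
print(pd.Series({'Glucose': sigma_glucose, 'BMI': sigma_bmi, 'Age': sigma_age,
                 'DiabetesPedigreeFunction': sigma_dpf, 'Pregnancies': sigma_pregnancies,
                 'Insulin': sigma_insulin, 'BloodPressure': sigma_bp}))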
# Let's see the sensitivity analysis plots now
def sensitivity_analysis_plot(measure_tuple):
'''
Sensitivity Analysis plot using the 6-σ variation method
'''
(measure, sigma) = measure_tuple
sensitivity_output = []
original_value = df_query[measure].copy()
for k in [-3, -2, -1, 1, 2, 3]:
df_query[measure] = original_value.copy()
df_query[measure] = np.clip(df_query[measure] + k * sigma, 0.0, 1.0)
sensitivity_output.append(np.argmax(model.predict(df_query.values)))
plt.plot(['-3σ', '-2σ', '-σ', 'σ', '2σ', '3σ'], sensitivity_output, 'r.-', label = 'Sensitivity output')
plt.axhline(y = predicted_outcome, color = 'b', linestyle = '--', label = 'Original Prediction')
    plt.title(f'6-σ variation sensitivity plot for the feature: {measure}')
plt.legend()
plt.show()
measure_tuple_list = [('Glucose', sigma_glucose),
('BMI', sigma_bmi),
('Age', sigma_age),
('DiabetesPedigreeFunction', sigma_dpf),
('Pregnancies', sigma_pregnancies),
('Insulin', sigma_insulin),
('BloodPressure', sigma_bp)]
for measure_tuple in measure_tuple_list:
sensitivity_analysis_plot(measure_tuple)
###Output
_____no_output_____ |
exercises/sales_visualization.ipynb | ###Markdown
Sales Visualization Setup
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
sys.path.append("..")
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "end_to_end_project"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
# Spark libs
from pyspark.sql.session import SparkSession
# helpers
from helpers.data_prep_and_print import print_df
from helpers.path_translation import translate_to_file_string
###Output
_____no_output_____
###Markdown
Select the Input File
###Code
inputFile = translate_to_file_string("../data/sales.csv")
###Output
_____no_output_____
###Markdown
SparkSession creation
###Code
spark = (SparkSession
.builder
.appName("SalesVisualization")
.getOrCreate())
###Output
_____no_output_____
###Markdown
Create a DataFrame using an inferred schema
###Code
df = spark.read.option("header", "true") \
.option("inferSchema", "true") \
.option("delimiter", ",") \
.csv(inputFile)
print(df.printSchema())
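# For comparison (a sketch; the column names below are hypothetical since the CSV header
# is not shown here), an explicit schema could be supplied instead of inferSchema:
# from pyspark.sql.types import StructType, StructField, StringType, DoubleType
# schema = StructType([StructField("product", StringType()), StructField("amount", DoubleType())])
# df_explicit = spark.read.option("header", "true").option("delimiter", ",").schema(schema).csv(inputFile)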
###Output
_____no_output_____
###Markdown
Plot the data
###Code
print_df(df,20)
spark.stop()
###Output
_____no_output_____ |
studies/fino_offshore_microscale/analysisFINOsowfa.ipynb | ###Code
# Imports (the standard-library/scientific imports below are added here since the rest
# of the notebook relies on them)
import os
import numpy as np
import pandas as pd
import xarray as xr
from mmctools.helper_functions import theta, power_spectral_density, calc_spectra
from windtools.common import calc_wind
from windtools.SOWFA6.postProcessing.averaging import PlanarAverages
from windtools.SOWFA6.postProcessing.probes import Probe
from windtools.SOWFA6.postProcessing.probeSets import ProbeSets
from windtools.plotting import plot_timeheight, plot_timehistory_at_height, plot_profile, plot_spectrum
from windtools.vtk import readVTK
def interpolate_to_heights(df,heights):
"""
Interpolate data in dataframe to specified heights
and return a new dataframe
"""
from scipy.interpolate import interp1d
# If single height is asked
if isinstance(heights, (float,int)):
heights=[heights]
# Unstack to single height index (= most time-consuming operation)
unstacked = df.unstack(level='datetime')
# Interpolate to specified heights
f = interp1d(unstacked.index,unstacked,axis=0,fill_value='extrapolate')
for hgt in heights:
unstacked.loc[hgt] = f(hgt)
# Restack and set index
df_out = unstacked.loc[heights].stack().reset_index().set_index(['datetime','height']).sort_index()
return df_out
###Output
_____no_output_____
###Markdown
Process and analysis of FINO1 SOWFA microscale caseThis notebook uses processed data, generated by running `processFINOsowfa.ipynb` and `processFINOobs.ipynb`.Regis Thedin \April 2021
###Code
# Paths
vtkcasepath = '/projects/mmc/rthedin/OpenFOAM/rthedin-6/run/offshoreCases/02_fino_sampling/'
probecasepath = '/scratch/rthedin/OpenFOAM/rthedin-6/run/offshore/03_fino_sampling/'
obspath = '~/a2e-mmc/assessment/studies/fino_offshore_microscale/'
wrfpath = '/projects/mmc/rthedin/OpenFOAM/rthedin-6/run/offshoreCases'
figdir = os.path.join(vtkcasepath,'figures')
procdatadir = os.path.join(vtkcasepath,'processedFiles')
if not os.path.exists(figdir):
os.makedirs(figdir)
if not os.path.exists(procdatadir):
os.makedirs(procdatadir)
###Output
_____no_output_____
###Markdown
1. Load data TimesPerform operations in time. OpenFOAM only deals with seconds.- WRF data comes in `datetime` already and goes from `datefrom` to `dateto`.- SOWFA time `0` is related to `dateref`. That is done to align the WRF output to forcing. We are interested in the time period that starts at `LESfrom` until `LESto`. We start the simulation 20,000s before that to allow spin-up time. Thus SOWFA's `startTime` is `LESspinupfrom`. SOWFA runs from 113200 s until 133200 s for spin-up. The value 113200 is `dateref` to `LESspinupfrom` in seconds. The 4-hour period of interest (and thus most data) is saved starting at 133200 s (`LESfrom`) until 147600 s (`LESto`). I have rounded-up the dates to get rid of the 3-s gap.
###Code
# WRF
datefrom = pd.to_datetime('2010-05-14 12:00:03')
dateto = pd.to_datetime('2010-05-17 00:00:00')
# SOWFA
dateref = pd.to_datetime('2010-05-14 12:00:00')
LESfrom=pd.to_datetime('2010-05-16 01:00:00')
LESto=pd.to_datetime('2010-05-16 05:00:00')
LESspinupfrom = LESfrom - pd.Timedelta(seconds=20000)
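# --- Added sanity check (sketch) ---------------------------------------------
# The SOWFA offsets quoted above (113200 s spin-up start, 133200 s to 147600 s
# for the period of interest) should line up exactly with these datetimes.
assert pd.to_datetime(133200, unit='s', origin=dateref) == LESfrom
assert pd.to_datetime(147600, unit='s', origin=dateref) == LESto
assert (LESspinupfrom - dateref).total_seconds() == 113200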
###Output
_____no_output_____
###Markdown
Load planar-average data
###Code
# Micro planar-average
padir = os.path.join(vtkcasepath,'postProcessing/planarAveraging')
dfpa = PlanarAverages(padir, varList=['U','T','UU','TU','wUU']).to_pandas()
dfpa.index.names = ['time','height']
dfpa = dfpa.rename(columns={ 'Ux':'u', 'Uy':'v', 'Uz':'w', 'T':'theta',
'UUxx':'uu', 'UUxy':'uv', 'UUxz':'uw', 'UUyy':'vv', 'UUyz':'vw', 'UUzz':'ww',
'TUx':'tu', 'TUy':'tv', 'TUz':'tw',
'wUUxx':'wuu', 'wUUxy':'wuv', 'wUUxz':'wuw', 'wUUyy':'wvv', 'wUUyz':'wvw', 'wUUzz':'www'
})
# Adjust time index from seconds to datetime
dfpa = dfpa.reset_index()
dfpa['datetime'] = pd.to_datetime(dfpa['time'], unit='s', origin=dateref)
dfpa = dfpa.set_index(['datetime','height'])
dfpa['wspd'], dfpa['wdir'] = calc_wind(dfpa)
###Output
_____no_output_____
###Markdown
Load mesoscale planar-averaged data
###Code
# WRF feather file
# Created in OpenFOAM/rthedin-6/run/offshoreCases/WRFtoSOWFAnetcdf_offshore.ipynb
wrfftr = os.path.join(wrfpath, 'FINO1_d03_full.ftr')
# WRF
dfwrf = pd.read_feather(wrfftr, columns=None, use_threads=True)
dfwrf.drop(columns='station',inplace=True)
dfwrf = dfwrf.set_index(['datetime','height'])
dfwrf = dfwrf.sort_index()
dswrf = interpolate_to_heights(dfwrf,[40,60,80]).to_xarray()
###Output
_____no_output_____
###Markdown
Read surface probe information
###Code
# read all saved probes (Rwall and qwall only)
surf = Probe(os.path.join(probecasepath,'postProcessing/probeSurface/'),fields=['Rwall','qwall'])
###Output
_____no_output_____
###Markdown
Load virtual masts probes dataset
###Code
# read the netCDF file with all probes
dssowfa = xr.open_dataset(os.path.join(procdatadir,'ds_allmasts_01Z_05Z.nc'))
# Calculate wspd and wdir
dssowfa['wspd'], dssowfa['wdir'] = calc_wind(dssowfa)
###Output
_____no_output_____
###Markdown
Load VTK data
###Code
ds80m = xr.open_dataset(os.path.join(procdatadir,'ds_VTK80m_01Z_04Z.nc'))
###Output
_____no_output_____
###Markdown
Load observation datasetHigh-frequency observation dataset sent by Raj on April 8. Processing and creation of netCDF file available on A2e's github page under `assessment/studies/fino_offshore_microscale/`.
###Code
obsrajfull = xr.open_dataset(os.path.join(obspath, 'FINO1_obs_10Hz.nc'))
# Get only the period of interest
obs = obsrajfull.sel(datetime=slice(LESfrom,LESto))
###Output
_____no_output_____
###Markdown
Load observation dataset sent by Will
###Code
obswillfull = xr.open_dataset(os.path.join(obspath,'FINO1_obs_will.nc'))
# Get only the period of interest
obswill = obswillfull.sel(datetime=slice(LESfrom,LESto))
# there are a few extra times on hh:09, even though the hh:10 datapoint is still there. Removing those
obswill = obswill.resample(datetime='10min').nearest()
# Each variable has its own levels. Here we create a single level 'height' and map the data to such level. NaN's are placed into non-existing levels
heights = np.unique(np.concatenate([levels.values for coord,levels in obswill.coords.items() if coord != 'datetime']))
for dvar in obswill.data_vars:
level = obswill[dvar].dims[0]
obswill[dvar] = obswill[dvar].reindex({level:heights}).rename({level: 'height'})
obswill = obswill.drop_vars(level)
###Output
_____no_output_____
###Markdown
2. Spectral Analysis
###Code
psdsowfa = calc_spectra(dssowfa.sel(y=dssowfa.y[2]),
var_oi=['u','v','w','wspd'],
spectra_dim='datetime',
average_dim='x',
level_dim='height',
window='hamming',
window_length='30min',
#level=[0,1]
)
psdobs = calc_spectra(obs,
var_oi=['u','v','w','wspd'],
spectra_dim='datetime',
level_dim='height',
window='hamming',
window_length='30min',
)
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(18,4))
_,ax = plot_spectrum(
datasets={'Obs':psdobs.sel(height=80).to_dataframe(), \
'SOWFA':psdsowfa.sel(height=85).mean(dim='x').to_dataframe(), },
#height=80,
#times=spectraTimes,
fields='wspd',
freqlimits=(1e-3,1),
fieldlimits={'wspd':(1e-3,1e2)},
showlegend=False,
fig=fig, ax=axs[0]
)
ax.set_title('85m SOWFA / 80m obs')
_,ax = plot_spectrum(
datasets={'Obs':psdobs.sel(height=60).to_dataframe(), \
'SOWFA':psdsowfa.sel(height=65,method='nearest').mean(dim='x').to_dataframe(), },
#height=80,
#times=spectraTimes,
fields='wspd',
freqlimits=(1e-3,1),
fieldlimits={'wspd':(1e-3,1e2)},
showlegend=False,
fig=fig, ax=axs[1]
)
ax.set_title('65m SOWFA / 60m obs')
_,ax = plot_spectrum(
datasets={'Obs':psdobs.sel(height=40).to_dataframe(), \
'SOWFA':psdsowfa.sel(height=45,method='nearest').mean(dim='x').to_dataframe(), },
#height=80,
#times=spectraTimes,
fields='wspd',
freqlimits=(1e-3,1),
fieldlimits={'wspd':(1e-3,1e2)},
fig=fig, ax=axs[2]
)
ax.set_title('45m SOWFA / 40m obs')
plt.show()
###Output
_____no_output_____
###Markdown
Time-history of mast points Obs
###Code
fig,axs = plot_timehistory_at_height(
obs.resample(datetime='5min').mean(),
fields=['wspd','wdir'],
heights=[40,60,80],
cmap='viridis',
timelimits=[LESfrom,LESto],
#fieldlimits={'wspd':(11,21),'wdir':(280,310),'theta':(280,283)},
plot_local_time=False,
subfigsize=(15,2.1)
)
axs[0].set_title('5-min mean high-frequency obs')
fig,axs = plot_timehistory_at_height(
obswill,
fields=['wspd','wdir'],
heights=[40,60,80],
cmap='viridis',
timelimits=[LESfrom,LESto],
#fieldlimits={'wspd':(11,21),'wdir':(280,310),'theta':(280,283)},
plot_local_time=False,
subfigsize=(15,2.1)
)
axs[0].set_title('processed annemometer obs')
plt.show()
###Output
_____no_output_____
###Markdown
Microscale
###Code
dsmean = dssowfa.mean(dim=['x','y'])
ds00 = dssowfa.sel(x=0,y=0).to_dataframe().drop(columns=['x','y'])
fig,axs = plot_timehistory_at_height(
datasets={'mean of 25 masts':dsmean,
'5-min mean of 25 masts':dsmean.resample(datetime='5min').mean(),
'single mast at 0,0':ds00},
fields=['wspd','wdir'],
heights=[85,65,45],
cmap='copper',
timelimits=[LESfrom,LESto],
fieldlimits={'wspd':(10,17.5),'wdir':(280,305)},
plot_local_time=False,
subfigsize=(15,2.1)
)
###Output
_____no_output_____
###Markdown
Comparison of time histories at specific heights
###Code
fig,axs = plot_timehistory_at_height(
datasets={'SOWFA (5-min mean of 25 masts)':dsmean.resample(datetime='5min').mean(),
'WRF meso (5-min mean)':dswrf.resample(datetime='5min').mean(),
'Obs':obswill,
'High-freq obs (5-min mean)':obs.resample(datetime='5min').mean()
},
fields=['wspd','wdir'],
heights=[80,60,40],
cmap='viridis',
timelimits=[LESfrom,LESto],
fieldlimits={'wspd':(11,15),'wdir':(283,303)},
plot_local_time=False,
stack_by_datasets=True,
subfigsize=(15,2.1)
)
###Output
_____no_output_____
###Markdown
VTK handling Instantaneous plots at 02Z, 03Z, 04Z
###Code
t_hours = [136800, 140400, 144000]
dshours = ds80m.sel(datetime=pd.to_datetime(t_hours, unit='s', origin=dateref))
fig, axs = plt.subplots(nrows=1, ncols=3, figsize=(20,7))
for i, ax in enumerate(axs):
currds = dshours.sel(datetime=dshours.datetime[i])
p = ax.pcolormesh(currds.x, currds.y, (currds.u**2 + currds.v**2)**0.5, vmin=10, vmax=18, cmap='viridis', shading='auto')
ax.set_title(dshours.datetime[i].values)
ax.set_xlim([0,6000])
ax.set_ylim([0,6000])
ax.set_aspect('equal', 'box')
###Output
_____no_output_____
###Markdown
Sanity-check mean comparisonCompare the mean saved directly by OpenFOAM, which accounts for all time steps in the first hour, with the mean recomposed from the instantaneous flow field
###Code
meanslicedir = '/home/rthedin/OpenFOAM/rthedin-6/run/offshoreCases/02_fino_sampling/postProcessing/slicesWRF'
ds1hour = readVTK(meanslicedir,t=136800, sliceType='UMean_zNormal.80.vtk')
ds2hour = readVTK(meanslicedir,t=140400, sliceType='UMean_zNormal.80.vtk')
mean_1hour_reconstruct = ds80m.sel(datetime=slice(LESfrom,LESfrom+pd.Timedelta(seconds=3600))).mean(dim=['datetime'])
mean_2hour_reconstruct = ds80m.sel(datetime=slice(LESfrom,LESfrom+pd.Timedelta(seconds=3600*2))).mean(dim=['datetime'])
fig, axs = plt.subplots(nrows=2, ncols=2, figsize=(14,12))
ax=axs.flatten()
vmin=13.6
vmax=14.1
p = np.sqrt(ds1hour.u**2 + ds1hour.v**2).plot(ax=ax[0], vmin=vmin, vmax=vmax, shading='auto', cmap='viridis', add_colorbar=False)
ax[0].set_title('OpenFOAM 1-hour mean')
p = np.sqrt(mean_1hour_reconstruct.u**2 + mean_1hour_reconstruct.v**2).plot(ax=ax[1], vmin=vmin, vmax=vmax, shading='auto', cmap='viridis', add_colorbar=False)
ax[1].set_title('reconstructed 1-hour mean')
p = np.sqrt(ds2hour.u**2 + ds2hour.v**2).plot(ax=ax[2], vmin=vmin, vmax=vmax, shading='auto', cmap='viridis', add_colorbar=False)
ax[2].set_title('OpenFOAM 2-hour mean')
p = np.sqrt(mean_2hour_reconstruct.u**2 + mean_2hour_reconstruct.v**2).plot(ax=ax[3], vmin=vmin, vmax=vmax, shading='auto', cmap='viridis', add_colorbar=False)
ax[3].set_title('reconstructed 2-hour mean')
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.82, 0.3, 0.02, 0.4])
cb = fig.colorbar(p, cax=cbar_ax)
cb.set_label('mean horiz velocity [m/s]')
for a in ax:
a.set_xlim([0,6000]); a.set_ylim([0,6000])
a.set_aspect('equal', 'box')
a.axis('off')
###Output
_____no_output_____
###Markdown
Spatial Correlation
###Code
def spatialCorrelation2D (ds, x0, y0, ti=None, tf=None, dateref=None):
# Operations in time
    if dateref is not None:
ti = pd.to_datetime(ti, unit='s', origin = dateref)
tf = pd.to_datetime(tf, unit='s', origin = dateref)
ds = ds.sel(datetime=slice(ti,tf)).copy()
times = ds.datetime.values
    if 'wspd' not in ds.data_vars:
ds['wspd'] = (ds.u**2 + ds.v**2)**0.5
# find position of (x0, y0)
iNearest = (abs(ds.x-x0)).argmin().values
jNearest = (abs(ds.y-y0)).argmin().values
    print(f'Performing spatial correlation with respect to point ({ds.isel(x=iNearest).x.values}, ' \
          f'{ds.isel(y=jNearest).y.values}), between {ti} and {tf}.')
mean = ds.sel(datetime=slice(ti,tf)).mean(dim=['datetime'])
vlist=[]
for i, t in enumerate(times):
print(f'Processing time {t}', end='\r', flush=True)
primeField = ds.sel(datetime=t) - mean
v = primeField*primeField.isel(x=iNearest, y=jNearest)
vlist.append(v)
finalv = xr.concat(vlist, dim='datetime').mean(dim='datetime')
finalv = finalv/finalv.isel(x=iNearest, y=jNearest)
return finalv
v = spatialCorrelation2D(ds80m, x0=3000, y0=3000, ti=133200, tf=133200+1*3600, dateref=dateref)
fig, ax= plt.subplots(nrows=1, ncols=1, figsize=(7,7))
v.wspd.plot(ax=ax, vmin=-0.2, vmax=1, cmap='plasma')
ax.set_ylim([1000, 5000])
ax.set_xlim([1000, 5000])
ax.set_aspect('equal', 'box')
###Output
_____no_output_____
###Markdown
Vertical profiles
###Code
# 10min mean
obsRaj10min = obs.resample(datetime='10min').mean()
obsWill10min = obswill.copy()
dssowfa10min = dsmean.resample(datetime='10min').mean()
dswrf10min = dfwrf.to_xarray().sel(datetime=slice(LESfrom,LESto)).resample(datetime='10min').mean()
# Find common times on all 10-min mean arrays for plotting purposes
from functools import reduce
times10min = reduce(np.intersect1d,(obsRaj10min.datetime.values, obsWill10min.datetime.values, dssowfa10min.datetime.values, dswrf10min.datetime.values))
fig,ax = plot_profile(
datasets={#'10-min mean of 10Hz obs': obsRaj10min,
#'Obs': obswill,
'SOWFA': dssowfa10min,
#'WRF meso':dswrf10min
},
fields=['wspd','wdir'],
times=times10min[::2],
fieldlimits={'wspd':(9,16),'wdir':(280,303)},
showlegend=True,
heightlimits=[0, 180],
cmap='viridis',
datasetkwargs={'Obs':dict(linestyle='None', marker='o')},
)
fig,ax = plot_profile(
datasets={#'10-min mean of 10Hz obs': obsRaj10min,
'Obs': obswill,
#'SOWFA': dssowfa10min,
#'WRF meso':dswrf10min
},
fields=['wspd','wdir'],
times=times10min[::2],
fieldlimits={'wspd':(9,16),'wdir':(280,303)},
showlegend=True,
heightlimits=[0, 180],
cmap='viridis',
datasetkwargs={'Obs':dict(linestyle='None', marker='o')},
fig=fig, ax=ax
)
fig.suptitle('SOWFA vs. annemometer obs'); plt.show()
# ----------------------------------------------------------------------------------------
fig,ax = plot_profile(
datasets={#'10-min mean of 10Hz obs': obsRaj10min,
#'Obs': obswill,
'SOWFA': dssowfa10min,
#'WRF meso':dswrf10min
},
fields=['wspd','wdir'],
times=times10min[::2],
fieldlimits={'wspd':(9,16),'wdir':(280,303)},
showlegend=True,
heightlimits=[0, 180],
cmap='viridis',
datasetkwargs={'Obs':dict(linestyle='None', marker='o')},
)
fig,ax = plot_profile(
datasets={'10-min mean of 10Hz obs': obsRaj10min,
#'Obs': obswill,
#'SOWFA': dssowfa10min,
#'WRF meso':dswrf10min
},
fields=['wspd','wdir'],
times=times10min[::2],
fieldlimits={'wspd':(9,16),'wdir':(280,303)},
showlegend=True,
heightlimits=[0, 180],
cmap='viridis',
datasetkwargs={'10-min mean of 10Hz obs':dict(linestyle='None', marker='o')},
fig=fig, ax=ax
)
fig.suptitle('SOWFA vs. high-freq obs'); plt.show()
fig,ax = plot_profile(
dsmean.resample(datetime='10min').mean(),
fields=['wspd','wdir'],
times=times10min[::2],
fieldlimits={'wspd':(5,22),'wdir':(280,330),'theta':(280,290)},
showlegend=True,
heightlimits=[0, 1500],
cmap='viridis'
)
ax[1].set_title('Microscale (10-min mean)')
fig,ax = plot_profile(
dfwrf.to_xarray().resample(datetime='10min').mean(),
fields=['wspd','wdir'],
times=times10min[::2],
showlegend=True,
fieldlimits={'wspd':(5,22),'wdir':(280,330),'theta':(280,290)},
heightlimits=[0, 1500],
cmap='viridis'
)
ax[1].set_title('Mesoscale (10-min mean)')
plt.show()
###Output
_____no_output_____
###Markdown
Time-height plots
###Code
# Mesoscale
fig, axs = plt.subplots(nrows=3,sharex=True,sharey=True,figsize=(15,12))
for ax in axs.reshape(-1):
ax.axvline(LESfrom, color='r', linestyle='--', lw=1)
ax.axvline(LESto, color='r', linestyle='--', lw=1)
axs[0].set_title('Mesoscale')
fig,axs,cbars = plot_timeheight(dfwrf,
fields=['wspd','wdir','theta'],
heightlimits=(0,2000),
timelimits=[LESfrom,LESto],
fieldlimits={'wspd':(12,21),'wdir':(270,330),'theta':(280,290)},
subfigsize=(15, 4),
fig=fig, ax=axs)
# Microscale
fig, axs = plt.subplots(nrows=3,sharex=True,sharey=True,figsize=(15,12))
for ax in axs.reshape(-1):
ax.axvline(LESfrom, color='r', linestyle='--', lw=1)
axs[0].set_title('Microscale')
fig,axs,cbars = plot_timeheight(dfpa,
fields=['wspd','wdir','theta'],
timelimits=[LESfrom,LESto],
fieldlimits={'wspd':(12,21),'wdir':(270,330),'theta':(280,290)},
subfigsize=(15, 4),
fig=fig, ax=axs)
###Output
_____no_output_____ |
Project_Test_Perceptual_Phenomenon/Test_Perceptual_Phenomenon.ipynb | ###Markdown
Analyzing the Stroop EffectPerform the analysis in the space below. Remember to follow [the instructions](https://docs.google.com/document/d/1-OkpZLjG_kX9J6LIQ5IltsqMzVWjh36QpnP2RYpVdPU/pub?embedded=True) and review the [project rubric](https://review.udacity.com/!/rubrics/71/view) before submitting. Once you've completed the analysis and write-up, download this file as a PDF or HTML file, upload that PDF/HTML into the workspace here (click on the orange Jupyter icon in the upper left then Upload), then use the Submit Project button at the bottom of this page. This will create a zip file containing both this .ipynb doc and the PDF/HTML doc that will be submitted for your project.(1) What is the independent variable? What is the dependent variable? Answer:The independent variable is the congruency of the colour and the text.The dependent variable is the duration (time) needed to recognise the colours or the texts. (2) What is an appropriate set of hypotheses for this task? Specify your null and alternative hypotheses, and clearly define any notation used. Justify your choices. Answer: Null (H0): The difference between the average time to name the ink colors from the INCONGRUENT words and the average time to name the ink colors from the CONGRUENT words is LESS than or EQUAL to zero.Alternative (H1): The difference between the average time to name the ink colors from the INCONGRUENT words and the average time to name the ink colors from the CONGRUENT words is GREATER than zero. $$ H_0: \mu_{incongruent} - \mu_{congruent} \leq 0 $$$$ H_1: \mu_{incongruent} - \mu_{congruent} > 0 $$$\mu_{incongruent}$ is the average time to name the ink colors from the INCONGRUENT words$\mu_{congruent}$ is the average time to name the ink colors from the CONGRUENT words Before we start testing, we will clean up the data by inspecting some descriptive statistics of the dataset to find any outliers and remove them.Then, since the dataset used for this hypothesis test is quite small (the sample size is less than 25), we will use bootstrapping: we sample the data with replacement, compute the difference between the mean values of the INCONGRUENT and CONGRUENT bootstrapped samples, and repeat this step many times to build a large enough sample (Law of Large Numbers - as our sample size increases, the sample mean gets closer to the population mean), giving a sampling distribution of the difference in the mean values of the Incongruent and Congruent data (Central Limit Theorem - with a large enough sample size the sampling distribution of the mean will be normally distributed).We will use this sampling distribution to find a confidence interval, and use the standard deviation of this sampling distribution to find a p-value. Then, we will use these metrics to draw conclusions for this hypothesis test (reject or fail to reject the null hypothesis). Source: Lesson 10 - Sampling Distribution and the Central Limit Theorem (Data Analyst Nanodegree Term 1) (3) Report some descriptive statistics regarding this dataset. Include at least one measure of central tendency and at least one measure of variability. The name of the data file is 'stroopdata.csv'.
###Code
# Perform the analysis here
import pandas as pd
import matplotlib.pyplot as plt
% matplotlib inline
df = pd.read_csv('stroopdata.csv')
df.describe()
# Find Median
df.median()
###Output
_____no_output_____
###Markdown
Answer:Measure of central tendency:Average time to name ink colors from the "congruent" words is 14.051125 secondsAverage time to name ink colors from the "incongruent" words is 22.015917 secondsMedian time to name ink colors from the "congruent" words is 14.3565 secondsMedian time to name ink colors from the "incongruent" words is 21.0175 seconds Measure of variability: Standard deviation of time to name ink colors from the "congruent" words is 3.559358 secondsStandard deviation of time to name ink colors from the "incongruent" words is 4.797057 seconds (4) Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
###Code
# Build the visualizations here
# Distribution of the sample data
plt.hist(df['Congruent'], alpha = 0.5);
plt.hist(df['Incongruent'], alpha = 0.5);
plt.title('Distribution of the sample data');
# Box plot of the sample data
df.boxplot();
plt.suptitle('Box plot of the sample data');
###Output
_____no_output_____
###Markdown
Answer:The first plot shows the distribution of the sample data: the shapes of the Congruent and Incongruent distributions are roughly normal and, more specifically, the plot of the Incongruent data shows outliers in the sample data.The second plot shows a box plot of the sample data and it confirms what we see in the first plot, where outliers are identified in the Incongruent data. (5) Now, perform the statistical test and report your results. What is your confidence level or Type I error associated with your test? What is your conclusion regarding the hypotheses you set up? Did the results match up with your expectations? **Hint:** Think about what is being measured on each individual, and what statistic best captures how an individual reacts in each environment.
###Code
# Perform the statistical test here
# Remove outliers by removing samples where the Incongruent data is greater than 30
df_new = df.query('Incongruent < 30')
# Inspect the new box plot
df_new.boxplot();
# Distribution of the new sample data
plt.hist(df_new['Congruent'], alpha = 0.5);
plt.hist(df_new['Incongruent'], alpha = 0.5);
plt.title('Distribution of the new sample data');
# Looks like we now have an outlier in the Congruent data
# Remove the outliers and inspect the box plot again
df_final = df_new.query('Congruent < 21')
df_final.boxplot();
df_final.describe()
lenData = df_final.shape[0];
diffs = [];
for _ in range(10000):
bootsample = df_final.sample(lenData, replace=True)
mean_incongruent = bootsample['Incongruent'].mean()
mean_congruent = bootsample['Congruent'].mean()
diffs.append(mean_incongruent - mean_congruent)
import numpy as np
conf_interval = np.percentile(diffs, 0.5), np.percentile(diffs, 99.5)
print('99% Confidence interval: {}'.format(conf_interval))
plt.hist(diffs);
plt.title('Sampling distribution of the differences in the mean value of Incongruent and Congruent data');
# Generage null values to generate the normal distribution with the std value of diffs
null_vals = np.random.normal(0, np.std(diffs), 10000);
# Find a difference between mean_Incongruent and mean_Congruent
new_diff_sample_mean = df_final['Incongruent'].mean() - df_final['Congruent'].mean()
plt.hist(null_vals);
plt.title('Distribution of the null hypothesis');
plt.axvline(x=new_diff_sample_mean, color='r', linewidth=2);
# Find the p-value
# H0: mu_Incongruent - mu_Congruent <= 0
(new_diff_sample_mean <= null_vals).mean()
###Output
_____no_output_____ |
notebooks/2_3_EDA_Time_Analysis.ipynb | ###Markdown
2.3 EDA Considering behavior of target and features over time
###Code
# importing libraries
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
# loading dataset
df = pd.read_csv('../data/GEFCom2014Data/Wind/clean_data.csv', parse_dates= ['TIMESTAMP'])#,
#index_col= 'TIMESTAMP' )
###Output
_____no_output_____
###Markdown
Target by hour, averaged over all windfarms.
###Code
sns.barplot(data = df, x = 'HOUR', y = 'TARGETVAR', color='gray')
plt.title('Electricity production by hour')
plt.show()
###Output
_____no_output_____
###Markdown
Target over hour for every windfarm
###Code
fig, ax = plt.subplots(2,5,figsize=(30,10),sharey=True)
ax = ax.ravel()
for zone in range(1,11):
sns.barplot(data = df[df.ZONEID==zone], x = 'HOUR', y = 'TARGETVAR', color='gray', ax=ax[zone - 1])
ax[zone - 1].set_title('ZONE- {}'.format(zone))
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Target over month for every windfarm
###Code
fig, ax = plt.subplots(2,5,figsize=(30,10),sharey=True)
ax = ax.ravel()
for zone in range(1,11):
sns.barplot(data = df[df.ZONEID==zone], x = 'MONTH', y = 'TARGETVAR', color='gray', ax=ax[zone - 1])
ax[zone - 1].set_title('ZONE- {}'.format(zone))
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Plotting electricity production by day of the week
###Code
fontsize = 8
df['weekday'] = df.TIMESTAMP.dt.weekday
weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
weekdays_dict = {i:key for i,key in enumerate(weekdays)}
fig, ax = plt.subplots(figsize = (6,3), dpi = 400)
sns.barplot(data = df, x = 'weekday', y = 'TARGETVAR', color = 'darkturquoise', errcolor='gray',ax = ax, ci = None)
ax.tick_params(axis='both', labelsize=fontsize)
plt.ylabel('Energy production', fontsize = fontsize + 1)
plt.xlabel('')
ax.set_xticklabels(weekdays)
plt.show()
###Output
_____no_output_____
###Markdown
Plotting energy production in every season
###Code
# seasons in Australia
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
season_dict = {1: "Summer", 2:"Autumn", 3:"Winter", 4:"Spring"}
# // is operator to do integer division (i.e., quotient without remainder)
month_season_dict = {month_idx + 1: season_dict[month_idx // 3 + 1] for month_idx, month in enumerate(months)}
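# i.e. months 1-3 (Jan-Mar) -> Summer, 4-6 -> Autumn, 7-9 -> Winter, 10-12 -> Spring (southern hemisphere)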
df['SEASON'] = df.MONTH.apply(lambda x: month_season_dict[x])
fig, ax = plt.subplots(figsize = (6,3), dpi = 400)
sns.barplot(data = df, x = 'MONTH', y = 'TARGETVAR', hue = 'SEASON', color='blue', ax = ax, ci=None, dodge=False)
ax.tick_params(axis='both', labelsize=fontsize)
plt.ylabel('')
plt.xlabel('')
plt.ylim([0,0.6])
plt.legend(loc = 'upper left', bbox_to_anchor=(0, 0, 1, 1))
ax.set_xticklabels(months)
plt.show()
###Output
_____no_output_____ |
embeddings/GloVe Embeddings.ipynb | ###Markdown
GloVe EmbeddingsGloVe is another commonly used method of obtaining pre-trained embeddings. GloVe aims to achieve two goals:- Create word vectors that capture meaning in vector space- Take advantage of global count statistics instead of only local informationThere is a lot of online material available that explains the concepts behind GloVe, so my focus here will be on how to use pre-trained GloVe word embeddings. I will provide relevant resources for more details. Resources:- [GloVe Paper Explanation](https://mlexplained.com/2018/04/29/paper-dissected-glove-global-vectors-for-word-representation-explained/)- [Colyer blog on GloVe](https://blog.acolyer.org/2016/04/22/glove-global-vectors-for-word-representation/)- [Code and Pretrained Embeddings](https://nlp.stanford.edu/projects/glove/)- [Stanford Lecture](https://www.youtube.com/watch?v=ASn7ExxLZws)- [GloVe Paper](https://www-nlp.stanford.edu/pubs/glove.pdf) Difference between Word2Vec and GloVe**Global information:** word2vec does not have any explicit global information embedded in it by default. GloVe creates a global co-occurrence matrix by estimating the probability that a given word will co-occur with other words. This presence of global information should, in principle, make GloVe work better. In practice, however, they behave very similarly and people have found similar performance with both.**Presence of Neural Networks:** GloVe does not use neural networks while word2vec does. In GloVe, the loss penalises the difference between the product of word embeddings and the log of the co-occurrence probability. We minimise it with SGD, but solve it much as we would solve a linear regression. In the case of word2vec, by contrast, we either train the word on its context (skip-gram) or train the context on the word (continuous bag of words) using a 1-hidden-layer neural network.[source 1](https://www.quora.com/How-is-GloVe-different-from-word2vec)[source 2](http://deeplearning.lipingyang.org/wp-content/uploads/2017/12/How-is-GloVe-different-from-word2vec_-Quora.pdf) Download the pre-trained glove file I will be using the glove.6B file, which is trained on Wikipedia 2014 + Gigaword 5 (6B tokens, 400K vocab, uncased, 300d vectors, 822 MB download). You can find the other files [here](https://nlp.stanford.edu/projects/glove/)
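For reference, the objective that GloVe minimises (from the GloVe paper linked above) is a weighted least-squares loss over the co-occurrence matrix $X$:

$$ J = \sum_{i,j=1}^{V} f(X_{ij}) \left( w_i^\top \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2 $$

where $w_i$ and $\tilde{w}_j$ are word and context vectors, $b_i$ and $\tilde{b}_j$ are biases, and $f$ is a weighting function that limits the influence of very frequent co-occurrences.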
###Code
!wget http://nlp.stanford.edu/data/wordvecs/glove.6B.zip
!ls
###Output
glove.6B.zip sample_data
###Markdown
Upload the Data to Google Drive (Optional)If the notebook shuts down, the data stored in the runtime is lost. To avoid downloading the GloVe file again, we can save it to Google Drive by mounting the drive; next time, we simply mount the drive and reuse the file.
###Code
from google.colab import drive
drive.mount('/content/drive')
# run for the first time
!mv glove.6B.zip "/content/drive/My Drive"
###Output
_____no_output_____
###Markdown
Extract the contents
###Code
# update the path to zip file accordingly if not uploaded to drive means
!unzip "./drive/My Drive/glove.6B.zip"
!ls
###Output
drive glove.6B.200d.txt glove.6B.50d.txt
glove.6B.100d.txt glove.6B.300d.txt sample_data
###Markdown
I will be using **glove.6B.300d.txt**. The same logic applies for all the other versions Using Gensim to load pre-trained Glove EmbeddingsGensim is an open-source library for unsupervised topic modeling and natural language processing, using modern statistical machine learning.Gensim includes streamed parallelized implementations of fastText,word2vec and doc2vec algorithms, as well as latent semantic analysis (LSA, LSI, SVD), non-negative matrix factorization (NMF), latent Dirichlet allocation (LDA), tf-idf and random projections. [source](https://en.wikipedia.org/wiki/Gensim)References:- [My code on Word2Vec](https://github.com/graviraja/100-Days-of-NLP/blob/master/embeddings/Word2Vec.ipynb)- [Machine Learning Mastery blog on using gensim](https://machinelearningmastery.com/develop-word-embeddings-python-gensim/)
###Code
import gensim
from gensim.models import KeyedVectors
from gensim.scripts.glove2word2vec import glove2word2vec
glove_file_path = "./glove.6B.300d.txt"
# converting glove file to word2vec format so that it can be loaded by gensim
word2vec_output_file = 'glove.6B.300d.txt.word2vec'
glove2word2vec(glove_file_path, word2vec_output_file)
!ls
# Note that the converted file is ASCII format, not binary, so we set binary=False when loading.
model = KeyedVectors.load_word2vec_format(word2vec_output_file, binary=False)
###Output
/usr/local/lib/python3.6/dist-packages/smart_open/smart_open_lib.py:253: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
###Markdown
Word SimilaritiesHere, we will see how similar two words are to each other
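Under the hood, `model.similarity(w1, w2)` is the cosine similarity of the two word vectors. A quick way to convince yourself (a small added sketch using the `model` loaded above):

```python
import numpy as np

v1, v2 = model['king'], model['queen']
cosine = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(cosine, model.similarity('king', 'queen'))  # the two numbers should match up to float precision
```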
###Code
print(f'Similarity between night and nights: {model.similarity("night", "nights")}')
print(f'Similarity between red and blue: {model.similarity("red", "blue")}')
print(f'Similarity between hello and hi: {model.similarity("hello", "hi")}')
print(f'Similarity between king and queen: {model.similarity("king", "queen")}')
print(f'Similarity between london and moscow: {model.similarity("london", "moscow")}')
print(f'Similarity between car and bike: {model.similarity("car", "bike")}')
###Output
Similarity between night and nights: 0.6768945455551147
Similarity between red and blue: 0.6736692786216736
Similarity between hello and hi: 0.3302616477012634
Similarity between king and queen: 0.6336469054222107
Similarity between london and moscow: 0.39354825019836426
Similarity between car and bike: 0.4672122299671173
###Markdown
Most Similar WordsHere, we will ask our model to find the words which are most similar
###Code
similar = model.most_similar("january")
for i in similar:
print(i)
###Output
('february', 0.9652106761932373)
('december', 0.9620600938796997)
('october', 0.9580933451652527)
('november', 0.9528316855430603)
('september', 0.9462947845458984)
('august', 0.935489296913147)
('april', 0.9315787553787231)
('june', 0.928554356098175)
('july', 0.9246786832809448)
('march', 0.898531436920166)
###Markdown
Odd-One-OutHere, we ask our model to give us the word that does not belong to the list!
###Code
print(model.doesnt_match("breakfast cereal dinner lunch".split()))
###Output
cereal
###Markdown
Analogy differenceWhich word is to women as king is to queen?
###Code
model.most_similar(positive=["women", "king"], negative=["queen"])
def analogy(x1, x2, y1):
result = model.most_similar(positive=[y1, x2], negative=[x1])
return result[0][0]
analogy('japan', 'japanese', 'china')
###Output
/usr/local/lib/python3.6/dist-packages/gensim/matutils.py:737: FutureWarning: Conversion of the second argument of issubdtype from `int` to `np.signedinteger` is deprecated. In future, it will be treated as `np.int64 == np.dtype(int).type`.
if np.issubdtype(vec.dtype, np.int):
|
notebooks/lab_session.ipynb | ###Markdown
Constraint programmingWe will use and import the `facile` library ([documentation](https://facile.readthedocs.io)) which gives access to a constraint programming API in Python. This notebook goes through basic notions of constraint programming, at a reasonable pace. The lecturer will explain more advanced concepts as the group moves forward and/or on demand.- Solutions appear on demand as you uncomment the `%load` comments, but the point of the session is to **try first** without being stuck.- You will find in the notebook blocks of different colors:**Questions** appear in yellow.You should **fully understand** what appears in red.Blue blocks push *beyond the scope* of this course.
###Code
import facile
%matplotlib inline
###Output
_____no_output_____
###Markdown
Example :We consider the problem of two variables taking their values on $a,b \in \{0,1\}$, and constrained by $a \neq b$.Find admissible values for $a$ and $b$. The basic syntax of `facile` goes as follows:
###Code
# Variables
#
# facile.variable may be used with an inclusive min value and an inclusive max value as parameters
# e.g. facile.variable(0,3) --> domain is {0,1,2,3}
# OR
# facile.variable may be given a list of possible values
# e.g. facile.variable([0,3]) --> domain is {0,3} (and neither 1 nor 2 are possible values)
#
# a and b are both defined on {0, 1}
a = facile.variable([0, 1])
b = facile.variable([0, 1])
# Constraints
# Expressions and constraints can be built with usual operators: +, *, <=, etc.
facile.constraint(a != b)
# Resolution
# We want a solution for a and b and get their values with method .value()
sol = facile.solve([a, b])
assert sol, "No solution found"
print ("Solution found : a={}, b={}".format(a.value(), b.value()))
###Output
_____no_output_____
###Markdown
Basic problems Problem 1: (super easy)Consider the following problem. Modify it so as it has a solution.
###Code
# Variables
a = facile.variable([0, 1])
b = facile.variable([0, 1])
c = facile.variable([0, 1])
# Constraints
facile.constraint(a != b)
facile.constraint(b != c)
facile.constraint(c != a)
# Resolution
if facile.solve([a, b, c]):
print ("Solution found : a=%d, b=%d, c=%d" % (a.value(), b.value(), c.value()))
else:
print ("No solution found")
###Output
_____no_output_____
###Markdown
Problem 2: (easy)Find four integers so that their sum is 711 and their product is 711000000.The original problem is stated as follows:A guy walks into a 7-11 store and selects four items to buy. The clerk at the counter informs the gentleman that the total cost of the four items is 7.11 dollars. He was completely surprised that the cost was the same as the name of the store. The clerk informed the man that he simply multiplied the cost of each item and arrived at the total. The customer calmly informed the clerk that the items should be added and not multiplied. The clerk then added the items together and informed the customer that the total was still exactly 7.11 dollars.What are the exact costs of each item?We can find a beautiful [algebraic resolution](http://everydayexplanations.blogspot.fr/2011/08/711-problem.html) which may help to define the domains of each value.
###Code
# Variables
# There is a risk of integer overflow when computing a*b*c*d
# We need small domains...
a = facile.variable(range(0, 320))
b = facile.variable(range(0, 160))
c = facile.variable(range(0, 130))
d = facile.variable(range(0, 130))
# Constraints
# Resolution
sol = facile.solve([a, b, c, d], backtrack=True)
print("Solution found: a={}, b={}, c={}, d={}".format(*sol.solution))
###Output
_____no_output_____
###Markdown
**Let's look further into this!** (out of scope)Let's check how many backtracks occur during the resolution process, and have a look at the constraint propagation effect on the domain of each variable.
###Code
# %load solutions/seven_eleven.py
###Output
_____no_output_____
###Markdown
Problem 3: (easy)Solve SEND + MORE = MONEY (two methods). Pretty-print the result.You may need to use the following constraint:
```python
c1 = facile.alldifferent([a, b, c, ...])  # to be posted to the solver
facile.constraint(c1)
```
###Code
# %load solutions/send_more_money.py
# %load solutions/send_more_money_alt.py
###Output
_____no_output_____
###Markdown
**Important note**: Even though it is not explicitly mentioned in the problem, **do not** forget to add the $s>0$ and $m>0$ constraints.Look by yourself how the solution makes no sense:
###Code
# %load solutions/send_more_money_wrong.py
###Output
_____no_output_____
###Markdown
Petersen's graph> Let's play with graphical possibilities of Python!This graph is a particular graph with 10 nodes and 15 edges. We want to find a colouring of this graph, i.e. colour the nodes so that no two neighbouring nodes have the same colour.**Important note**: You do not need to worry about the coordinates of each point, as they are not decision variables (they just help to plot). However, you should have a look at each `plt.plot` command in `plot_edges` as they are related to a constraint you have to write.**Really important note**: Take some time on paper first to think about what to choose as **decision variables**.
###Code
from math import pi, cos, sin
import matplotlib.pyplot as plt
# Five angles π/2 + i * 72°
angles = [i * 2*pi/5 + pi/2 for i in range(5)]
# The five nodes in the inner circle
points = [(cos(t), sin(t)) for t in angles]
# The five nodes in the outer circle
points += [(2*cos(t), 2*sin(t)) for t in angles]
# Shortcut for the x-y coordinates of each node
x = [x for x, _ in points]
y = [y for _, y in points]
def plot_edges():
"""Plot the graph without colouring the nodes."""
plt.axes(frameon=False, aspect=1)
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
# Build edges between the five nodes in the inner circle
for i in range(5):
j, j_ = i, (i+2)%5 # % (modulo -> j=4, j_=0)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color='grey')
# Build edges between the inner and the outer circle
for i in range(5):
plt.plot([x[i], x[i+5]], [y[i], y[i+5]], color='grey')
# Build edges between the five nodes on the outer circle
for i in range(5):
j, j_ = 5 + i, 5 + (i+1)%5 # % (modulo -> j=9, j_=5)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color='grey')
plot_edges()
# Colouring nodes
for i, (x_, y_) in enumerate(points):
plt.plot(x_, y_, 'ro') # 'r' pour red, 'o' pour la forme du point
###Output
_____no_output_____
###Markdown
Problem 4: (easy)How many colours do you need to colour Petersen's graph? Print the coloured graph.
###Code
# %load solutions/petersen.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**The real question to answer is "how many colours do you need?". You may feel the urge to model the problem with a variable representing the number of colours. But take a step back, and think differently: try to solve the problem with 1 colour, then 2 colours and you find no solution. If you try 3 colours you find a solution, so 3 is the answer you want. The $n$-queen problem Problem 5: (intermediate)Solve the 8-queen problem and pretty-print the solution. You may generalise your procedure for $n$ queens.
###Code
# %load solutions/nqueens.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**The use of the *alldifferent* constraint is fundamental. You may think it is easier to write all the $\neq$-constraints, but look by yourself:
###Code
# %load solutions/lazy_nqueens.py
%timeit n_queens(12)
%timeit lazy_n_queens(12)
###Output
_____no_output_____
###Markdown
**Let's look further into this!** (out of scope)The heuristics on the choice of the next variable have a significant impact on the number of backtracks.By default, we choose the queens in order:- `min_domain` chooses the next variable as the one with the smallest domain after propagation;- `min_min` chooses the next variable as the one with the smallest smallest value in its domain after propagation;- `queen` is the optimal strategy for the n-queen problem and combines the two previous strategies.
###Code
def queen_strategy(queens):
if len([q.value() for q in queens if q.value() is None]) == 0:
return -1
else:
min_ = min((len(q.domain()), q.min(), i)
for i, q in enumerate(queens) if q.value() is None)
return min_[2]
print (n_queens(10, backtrack=True))
print (n_queens(10, strategy="min_domain", backtrack=True))
print (n_queens(10, strategy="min_min", backtrack=True))
print (n_queens(10, strategy=queen_strategy, backtrack=True))
fig = plt.figure()
fig.set_size_inches(8, 5.6)
interval = range(2, 20)
plt.plot(interval, [(n_queens(i, backtrack=True).backtrack) for i in interval])
plt.plot(interval, [(n_queens(i, strategy="min_domain", backtrack=True).backtrack) for i in interval])
plt.plot(interval, [(n_queens(i, strategy=queen_strategy, backtrack=True).backtrack) for i in interval], lw=2)
plt.axis((interval.start, interval.stop, 0, 70))
plt.legend(["regular", "min_domain", "queen"])
print (n_queens(1001, strategy="queen", backtrack=True))
###Output
_____no_output_____
###Markdown
Optimisation`facile.solve` solves constraint satisfaction problems. You may solve optimisation problems with `facile.minimize`. **Example :**Find $x,y \in [0,3]$, constrained by $x \neq y$ and so that $x + y$ is maximum.
###Code
x, y = [facile.variable(range(5)) for i in range(2)]
facile.constraint(x != y)
# The second parameter represents the expression to minimize.
sol = facile.minimize([x, y], y)
print(sol)
# You may have access to different parameters in the solution
print(sol.keys())
# The most useful are probably the two following ones
sol.evaluation, sol.solution
###Output
_____no_output_____
###Markdown
**Problem 6: (intermediate)** A Golomb ruler is a set of integers (marks) $a_1 < a_2 < \dots < a_k$ such that all the differences $a_i - a_j$ ($i > j$) are distinct. Clearly we may assume $a_1 = 0$. Then $a_k$ is the length of the Golomb ruler. For a given number of marks $n$, we want to find the shortest Golomb rulers.*Note:* Above $n = 10$ the resolution time may be too long.
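To make the definition concrete (a small worked example, not part of the original statement): with $n = 4$ marks, the ruler $\{0, 1, 4, 6\}$ gives the pairwise differences $1, 4, 6, 3, 5, 2$, which are all distinct, so it is a Golomb ruler of length $6$, and in fact the shortest one for four marks.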
###Code
# %load solutions/golomb.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**Note how you may build a list of expressions $t_i-t_j$ and pass it to *alldifferent*.You may also wonder why the correction mentions a domain of variables between 0 and $2^n$.Consider the assignment `ticks[i] = 2**i` and see how it is a solution to our satisfaction problem. See how you write $2^i - 2^j$ in binary to convince yourself. Then our problem consists of finding the shortest Golomb rule, that is shorter than $2^n$.Obviously, you are not expected to find this trick by yourself. Initialising the domain to some *reasonably* big range is enough to solve small instances of the problem.**Branch & Bound**: Note how many more backtracks you need to confirm that your better evaluation is the optimal one. Problem 7: (difficult)You are given an 8 pint bucket of water, and two empty buckets which can contain 5 and 3 pints respectively. You are required to divide the water into two by pouring water between buckets (that is, to end up with 4 pints in the 8 pint bucket, and 4 pints in the 5 pint bucket).What is the minimum number of transfers of water between buckets? **Solution for the modelling part (click to unfold)**:The difficult part here is to get what will be the variables we manipulate. We do not know in advance the number of steps. **The number of steps determines the number of variables, so we cannot make it a variable**: we make it a **constant parameter** that we grow until we find a solution.So we have the following table of variables, with the first constraints set:| | $s_3$ | $s_5$ | $s_8$ ||-------|-------|-------|-------|| $t_1$ | 0 | 0 | 8 || $t_2$ | ... | ... | ... || $t_3$ | ... | ... | ... || ... | ... | ... | ... || ... | ... | ... | ... || $t_n$ | 0 | 4 | 4 |Then the constraints that we have to program:- Between two consecutive steps, *exactly two buckets* see their water amount change;- At each step, the total volume of water is constant;- Between two consecutive steps, for all pairs of buckets: - *either* one of the buckets keeps the same amount of water; - *or*, one of the two buckets ends up full; - *or*, one of the two buckets ends up empty. **Important note**: Writing *exactly two buckets see their water amount change* is a bit tricky. You cannot have it with a simple disjunction of constraint (the Python operator `|` which stands for **or**).You would probably end up saying "the first bucket stays untouched" **or** "the second bucket stays untouched" **or** "the third bucket stays untouched" which does not capture the idea that if one bucket stays untouched, there is a movement of water between the others.The key here is to use a mechanism called *constraint reification*, that is we (automatically) associate a binary variable to a constraint, taking the value 0 if the constraint is violated in the current assignment and 1 if the constraint is true.So when we write :```python facile.constraint(sum([e[t][i] != e[t+1][i] for i in something]) == 2)```that is, $$\sum \left(e_{t,i} \neq e_{t+1, i} \right) = 2$$we mean:- that **exactly** `2` "$\neq$" constraints in the included list are verified,- **and** that **exactly** `(len(something) - 2)` "$\neq$" constraints in the included list are violated.
###Code
# %load solutions/buckets.py
###Output
_____no_output_____
###Markdown
Constraint programmingWe will use and import the `facile` library ([documentation](https://facile.readthedocs.io)) which gives access to a constraint programming API in Python. This notebook goes through basic notions of constraint programming, at a reasonable pace. The lecturer will explain more advanced concepts as the group moves forward and/or on demand.- Solutions appear on demand as you uncomment the `%load` comments, but the point of the session is to **try first** without being stuck.- You will find in the notebook blocks of different colors:**Questions** appear in yellow.You should **fully understand** what appears in red.Blue blocks push *beyond the scope* of this course.
###Code
import facile
%matplotlib inline
###Output
_____no_output_____
###Markdown
Example :We consider the problem of two variables taking their values on $a,b \in \{0,1\}$, and constrained by $a \neq b$.Find admissible values for $a$ and $b$. The basic syntax of `facile` goes as follows:
###Code
# Variables
#
# facile.variable may be used with an inclusive min value and an inclusive max value as parameters
# e.g. facile.variable(0,3) --> domain is {0,1,2,3}
# OR
# facile.variable may be given a list of possible values
# e.g. facile.variable([0,3]) --> domain is {0,3} (and neither 1 nor 2 are possible values)
#
# a and b are both defined on {0, 1}
a = facile.variable([0, 1])
b = facile.variable([0, 1])
# Constraints
# Expressions and constraints can be built with usual operators: +, *, <=, etc.
facile.constraint(a != b)
# Resolution
# We want a solution for a and b and get their values with method .value()
sol = facile.solve([a, b])
assert sol, "No solution found"
print ("Solution found : a={}, b={}".format(a.value(), b.value()))
###Output
Solution found : a=0, b=1
###Markdown
Basic problems Problem 1: (super easy)Consider the following problem. Modify it so that it has a solution.
###Code
# Variables
a = facile.variable([0, 1])
b = facile.variable([0, 1])
c = facile.variable([0, 1])
# Constraints
facile.constraint(a != b)
facile.constraint(b != c)
# facile.constraint(c != a)
# Resolution
if facile.solve([a, b, c]):
print ("Solution found : a=%d, b=%d, c=%d" % (a.value(), b.value(), c.value()))
else:
print ("No solution found")
###Output
Solution found : a=0, b=1, c=0
###Markdown
Problem 2: (easy)Find four integers so that their sum is 711 and their product is 711000000.The original problem is stated as follows:A guy walks into a 7-11 store and selects four items to buy. The clerk at the counter informs the gentleman that the total cost of the four items is 7.11 dollars. He was completely surprised that the cost was the same as the name of the store. The clerk informed the man that he simply multiplied the cost of each item and arrived at the total. The customer calmly informed the clerk that the items should be added and not multiplied. The clerk then added the items together and informed the customer that the total was still exactly 7.11 dollars.What are the exact costs of each item?We can find a beautiful [algebraic resolution](http://everydayexplanations.blogspot.fr/2011/08/711-problem.html) which may help to define the domains of each value.
###Code
# Variables
# There is a risk of integer overflow when computing a*b*c*d
# We need small domains...
a = facile.variable(range(0, 320))
b = facile.variable(range(0, 160))
c = facile.variable(range(0, 130))
d = facile.variable(range(0, 130))
# Constraints
facile.constraint(a + b + c + d == 711)
facile.constraint(a * b * c * d == 711000000)
# Resolution
sol = facile.solve([a, b, c, d], backtrack=True)
print("Solution found: a={}, b={}, c={}, d={}".format(*sol.solution))
###Output
Solution found: a=316, b=150, c=120, d=125
###Markdown
**Let's look further into this!** (out of scope)Let's check how many backtracks occur during the resolution process, and have a look at the constraint propagation effect on the domain of each variable.
###Code
# %load solutions/seven_eleven.py
# Variables
# There is a risk of integer overflow when computing a*b*c*d
# We need small domains...
import facile
a = facile.variable(range(0, 321))
b = facile.variable(range(0, 161))
c = facile.variable(range(0, 131))
d = facile.variable(range(0, 131))
# Constraints
# The problem
facile.constraint(a + b + c + d == 711)
print("Domains after posting the sum constraint")
for x in [a, b, c, d]:
domain = x.domain()
print(" {!r} (size {})".format(domain, len(domain)))
facile.constraint(a * b * c * d == 711000000)
print("\nDomains after posting the mul constraint")
for x in [a, b, c, d]:
domain = x.domain()
print(" {!r} (size {})".format(domain, len(domain)))
print()
# Resolution
sol = facile.solve([a, b, c, d], backtrack=True)
# wow ! Only two backtracks !!
print(sol)
print("Solution found: a={}, b={}, c={}, d={}".format(*sol.solution))
###Output
Domains after posting the sum constraint
[291, 292, 293, 294, 295, 296, ..., 320] (size 30)
[131, 132, 133, 134, 135, 136, ..., 160] (size 30)
[101, 102, 103, 104, 105, 106, ..., 130] (size 30)
[101, 102, 103, 104, 105, 106, ..., 130] (size 30)
Domains after posting the mul constraint
[300, 301, 302, 303, 304, 305, ..., 320] (size 21)
[132, 133, 134, 135, 136, 137, ..., 160] (size 29)
[107, 108, 109, 110, 111, 112, ..., 130] (size 24)
[107, 108, 109, 110, 111, 112, ..., 130] (size 24)
Backtracks : 2
Current solution : [316, 150, 120, 125]
Resolution status : True
Resolution time : 9.5e-05s
Solution found: a=316, b=150, c=120, d=125
###Markdown
Problem 3: (easy)Solve SEND + MORE = MONEY (two methods). Pretty-print the result.You may need to use the following constraint:
```python
c1 = facile.alldifferent([a, b, c, ...])  # to be posted to the solver
facile.constraint(c1)
```
###Code
[s, e, n, d, m, o, r, y] = [facile.variable(range(10)) for i in range(8)]
send = s*1000+e*100+n*10+d
more = m*1000+o*100+r*10+e
money = m*10000+o*1000+n*100+e*10+y
facile.constraint(s>0)
facile.constraint(m>0)
facile.constraint(send + more == money)
facile.constraint(facile.alldifferent([s, e, n, d, m, o, r, y]))
sol = facile.solve([s, e, n, d, m, o, r, y], backtrack=True)
print(sol)
print("Solution found: s={}, e={}, n={}, d={}, m={}, o={}, r={}, y={}".format(*sol.solution))
# %load solutions/send_more_money.py
# The list comprehension mechanism is always helpful!
[s, e, n, d, m, o, r, y] = [facile.variable(range(10)) for i in range(8)]
# A shortcut
letters = [s, e, n, d, m, o, r, y]
# Constraints
facile.constraint(s > 0)
facile.constraint(m > 0)
facile.constraint(facile.alldifferent(letters))
import functools # I am too lazy to write 1000 * s + 100 * etc.
send = functools.reduce(lambda x, y: 10*x + y, [s, e, n, d])
more = functools.reduce(lambda x, y: 10*x + y, [m, o, r, e])
money = functools.reduce(lambda x, y: 10*x + y, [m, o, n, e, y])
facile.constraint (send + more == money)
if facile.solve(letters):
[vs, ve, vn, vd, vm, vo, vr, vy] = [x.value() for x in letters]
print ("Solution found :")
print
print (" %d%d%d%d" % (vs, ve, vn, vd))
print ("+ %d%d%d%d" % (vm, vo, vr, ve))
print ("------")
print (" %d%d%d%d%d" % (vm, vo, vn, ve, vy))
else:
print ("No solution found")
# %load solutions/send_more_money_alt.py
# The list comprehension mechanism is always helpful!
[s, e, n, d, m, o, r, y] = [facile.variable(range(10)) for i in range(8)]
# A shortcut
letters = [s, e, n, d, m, o, r, y]
# Retenues
[c0, c1, c2] = [facile.variable([0, 1]) for i in range(3)]
# Constraints
facile.constraint(s > 0)
facile.constraint(m > 0)
facile.constraint(facile.alldifferent(letters))
facile.constraint(d + e == y + 10 * c0)
facile.constraint(c0 + n + r == e + 10 * c1)
facile.constraint(c1 + e + o == n + 10 * c2)
facile.constraint(c2 + s + m == o + 10 * m)
if facile.solve(letters):
[vs, ve, vn, vd, vm, vo, vr, vy] = [x.value() for x in letters]
print ("Solution found :")
print
print (" %d%d%d%d" % (vs, ve, vn, vd))
print ("+ %d%d%d%d" % (vm, vo, vr, ve))
print ("------")
print (" %d%d%d%d%d" % (vm, vo, vn, ve, vy))
else:
print ("No solution found")
###Output
Solution found :
9567
+ 1085
------
10652
###Markdown
**Important note**: Even though it is not explicitly mentioned in the problem, **do not** forget to add the $s>0$ and $m>0$ constraints.Look by yourself how the solution makes no sense:
###Code
# %load solutions/send_more_money_wrong.py
# The list comprehension mechanism is always helpful!
[s, e, n, d, m, o, r, y] = [facile.variable(range(10)) for i in range(8)]
# A shortcut
letters = [s, e, n, d, m, o, r, y]
# Retenues
[c0, c1, c2] = [facile.variable([0, 1]) for i in range(3)]
# Constraints
# facile.constraint(s > 0)
# facile.constraint(m > 0)
facile.constraint(facile.alldifferent(letters))
facile.constraint(d + e == y + 10 * c0)
facile.constraint(c0 + n + r == e + 10 * c1)
facile.constraint(c1 + e + o == n + 10 * c2)
facile.constraint(c2 + s + m == o + 10 * m)
if facile.solve(letters):
[vs, ve, vn, vd, vm, vo, vr, vy] = [x.value() for x in letters]
print ("Solution found :")
print
print (" %d%d%d%d" % (vs, ve, vn, vd))
print ("+ %d%d%d%d" % (vm, vo, vr, ve))
print ("------")
print (" %d%d%d%d%d" % (vm, vo, vn, ve, vy))
else:
print ("No solution found")
###Output
Solution found :
2817
+ 0368
------
03185
###Markdown
Petersen's graph> Let's play with graphical possibilities of Python!This graph is a particular graph with 10 nodes and 15 edges. We want to find a colouring of this graph, i.e. colour the nodes so that no two neighbouring nodes have the same colour.**Important note**: You do not need to worry about the coordinates of each point, as they are not decision variables (they just help to plot). However, you should have a look at each `plt.plot` command in `plot_edges` as they are related to a constraint you have to write.**Really important note**: Take some time on paper first to think about what to choose as **decision variables**.
###Code
from math import pi, cos, sin
import matplotlib.pyplot as plt
# Five angles π/2 + i * 72°
angles = [i * 2*pi/5 + pi/2 for i in range(5)]
# The five nodes in the inner circle
points = [(cos(t), sin(t)) for t in angles]
# The five nodes in the outer circle
points += [(2*cos(t), 2*sin(t)) for t in angles]
# Shortcut for the x-y coordinates of each node
x = [x for x, _ in points]
y = [y for _, y in points]
def plot_edges():
"""Plot the graph without colouring the nodes."""
plt.axes(frameon=False, aspect=1)
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
# Build edges between the five nodes in the inner circle
for i in range(5):
j, j_ = i, (i+2)%5 # % (modulo -> j=4, j_=0)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color='grey')
# Build edges between the inner and the outer circle
for i in range(5):
plt.plot([x[i], x[i+5]], [y[i], y[i+5]], color='grey')
# Build edges between the five nodes on the outer circle
for i in range(5):
j, j_ = 5 + i, 5 + (i+1)%5 # % (modulo -> j=9, j_=5)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color='grey')
plot_edges()
# Colouring nodes
for i, (x_, y_) in enumerate(points):
plt.plot(x_, y_, 'ro') # 'r' pour red, 'o' pour la forme du point
###Output
_____no_output_____
###Markdown
Problem 4: (easy)How many colours do you need to colour Petersen's graph? Print the coloured graph.
###Code
colors = ['ro', 'go', 'bo']
point_color = [facile.variable(range(3)) for i in enumerate(points)]
# Build edges between the five nodes in the inner circle
for i in range(5):
j, j_ = i, (i+2)%5 # % (modulo -> j=4, j_=0)
facile.constraint(point_color[j] != point_color[j_])
# Build edges between the inner and the outer circle
for i in range(5):
facile.constraint(point_color[i] != point_color[i+5])
# Build edges between the five nodes on the outer circle
for i in range(5):
j, j_ = 5 + i, 5 + (i+1)%5 # % (modulo -> j=9, j_=5)
facile.constraint(point_color[j] != point_color[j_])
sol = facile.solve(point_color, backtrack=True)
print(sol)
plot_edges()
# Colouring nodes
for i, (x_, y_) in enumerate(points):
plt.plot(x_, y_, colors[point_color[i].value()])
# %load solutions/petersen.py
colouring = [facile.variable(range(3)) for i, _ in enumerate(points)]
colours = ['ro', 'bo', 'go']
# Build edges between the five nodes in the inner circle
for i in range(5):
j, j_ = i, (i+2)%5 # % (modulo -> j=4, j_=0)
facile.constraint(colouring[j] != colouring[j_])
# Build edges between the inner and the outer circle
for i in range(5):
facile.constraint(colouring[i] != colouring[i+5])
# Build edges between the five nodes on the outer circle
for i in range(5):
j, j_ = 5 + i, 5 + (i+1)%5 # % (modulo -> j=9, j_=5)
facile.constraint(colouring[j] != colouring[j_])
plot_edges()
if facile.solve(colouring):
for i, (x_, y_) in enumerate(points):
plt.plot(x_, y_, colours[colouring[i].value()])
else:
print ("No solution found")
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**The real question to answer is "how many colours do you need?". You may feel the urge to model the problem with a variable representing the number of colours. But take a step back, and think differently: try to solve the problem with 1 colour, then 2 colours and you find no solution. If you try 3 colours you find a solution, so 3 is the answer you want. The $n$-queen problem Problem 5: (intermediate)Solve the 8-queen problem and pretty-print the solution. You may generalise your procedure for $n$ queens.
###Code
# %load solutions/nqueens.py
def n_queens(n, *args, **kwargs):
queens = [facile.variable(range(n)) for i in range(n)]
diag1 = [queens[i] + i for i in range(n)]
diag2 = [queens[i] - i for i in range(n)]
facile.constraint(facile.alldifferent(queens))
facile.constraint(facile.alldifferent(diag1))
facile.constraint(facile.alldifferent(diag2))
return facile.solve(queens, *args, **kwargs)
def print_line(val, n):
cumul = ""
for i in range(n):
if val == i:
cumul = cumul + "♛ "
else:
cumul = cumul + "- "
print (cumul)
n = 8
solutions = n_queens(n).solution
if solutions is not None:
print ("Solution found :")
print
[print_line(s, n) for s in solutions]
else:
print ("No solution found")
###Output
Solution found :
♛ - - - - - - -
- - - - ♛ - - -
- - - - - - - ♛
- - - - - ♛ - -
- - ♛ - - - - -
- - - - - - ♛ -
- ♛ - - - - - -
- - - ♛ - - - -
###Markdown
**Important note (click to unfold):**The use of the *alldifferent* constraint is fundamental. You may think it is easier to write all the $\neq$-constraints, but look by yourself:
###Code
# %load solutions/lazy_nqueens.py
def lazy_n_queens(n, *args, **kwargs):
queens = [facile.variable(range(n)) for i in range(n)]
diag1 = [queens[i] + i for i in range(n)]
diag2 = [queens[i] - i for i in range(n)]
# facile.constraint(facile.alldifferent(queens))
for i, q1 in enumerate(queens):
for q2 in queens[i+1:]:
facile.constraint(q1 != q2)
# facile.constraint(facile.alldifferent(diag1))
for i, q1 in enumerate(diag1):
for q2 in diag1[i+1:]:
facile.constraint(q1 != q2)
# facile.constraint(facile.alldifferent(diag2))
for i, q1 in enumerate(diag2):
for q2 in diag2[i+1:]:
facile.constraint(q1 != q2)
return facile.solve(queens, *args, **kwargs)
%timeit n_queens(12)
%timeit lazy_n_queens(12)
###Output
9.37 ms ± 1.32 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
###Markdown
**Let's look further into this!** (out of scope)The heuristics on the choice of the next variable have a significant impact on the number of backtracks.By default, we choose the queens in order:- `min_domain` chooses the next variable as the one with the smallest domain after propagation;- `min_min` chooses the next variable as the one with the smallest minimum value in its domain after propagation;- `queen` is the optimal strategy for the n-queen problem and combines the two previous strategies.
###Code
def queen_strategy(queens):
if len([q.value() for q in queens if q.value() is None]) == 0:
return -1
else:
min_ = min((len(q.domain()), q.min(), i)
for i, q in enumerate(queens) if q.value() is None)
return min_[2]
print (n_queens(10, backtrack=True))
print (n_queens(10, strategy="min_domain", backtrack=True))
print (n_queens(10, strategy="min_min", backtrack=True))
print (n_queens(10, strategy=queen_strategy, backtrack=True))
fig = plt.figure()
fig.set_size_inches(8, 5.6)
interval = range(2, 20)
plt.plot(interval, [(n_queens(i, backtrack=True).backtrack) for i in interval])
plt.plot(interval, [(n_queens(i, strategy="min_domain", backtrack=True).backtrack) for i in interval])
plt.plot(interval, [(n_queens(i, strategy=queen_strategy, backtrack=True).backtrack) for i in interval], lw=2)
plt.axis((interval.start, interval.stop, 0, 70))
plt.legend(["regular", "min_domain", "queen"])
print (n_queens(1001, strategy="queen", backtrack=True))
###Output
Backtracks : 0
Current solution : [0, 501, 1, 502, 2, 503, ...]
Resolution status : True
Resolution time : 25s
###Markdown
Optimisation`facile.solve` solves constraint satisfaction problems. You may solve optimisation problems with `facile.minimize`. **Example :**Find $x,y \in [0,3]$, constrained by $x \neq y$, so that $x + y$ is maximum.
###Code
x, y = [facile.variable(range(5)) for i in range(2)]
facile.constraint(x != y)
# The second parameter represents the expression to minimize.
sol = facile.minimize([x, y], y)
print(sol)
# You may have access to different parameters in the solution
print(sol.keys())
# The most useful are probably the two following ones
sol.evaluation, sol.solution
###Output
dict_keys(['solved', 'evaluation', 'solution', 'time'])
###Markdown
**Problem 6: (intermediate)** A Golomb ruler is a set of integers (marks) $a_1 < a_2 < \cdots < a_k$ such that all the differences $a_i - a_j$ ($i > j$) are distinct. Clearly we may assume $a_1 = 0$. Then $a_k$ is the length of the Golomb ruler. For a given number of marks $n$, we want to find the shortest Golomb rulers.*Note:* Above $n = 10$ the resolution time may be too long.
###Code
# %load solutions/golomb.py
def golomb(n):
ticks = [facile.variable(range(2**n)) for i in range(n)]
# First tick at the start of the ruler
facile.constraint(ticks[0] == 0)
# Ticks are ordered
for i in range(n-1):
facile.constraint(ticks[i] < ticks[i+1])
# All distances
distances = []
for i in range(n-1):
for j in range(i + 1, n):
distances.append(facile.variable(ticks[j] - ticks[i]))
facile.constraint(facile.alldifferent(distances))
for d in distances:
facile.constraint(d > 0)
# Breaking the symmetry
size = len(distances)
facile.constraint(distances[size - 1] > distances[0])
return (facile.minimize(ticks, ticks[n-1], backtrack=True, on_solution=print))
print (golomb(9))
###Output
Backtracks : 0
Current evaluation : 65
Current solution : [0, 1, 3, 7, 12, 20, ...]
Resolution status : False
Resolution time : 0.0054s
Backtracks : 1
Current evaluation : 61
Current solution : [0, 1, 3, 7, 12, 20, ...]
Resolution status : False
Resolution time : 0.006s
Backtracks : 2
Current evaluation : 59
Current solution : [0, 1, 3, 7, 12, 20, ...]
Resolution status : False
Resolution time : 0.0064s
Backtracks : 14
Current evaluation : 57
Current solution : [0, 1, 3, 7, 12, 26, ...]
Resolution status : False
Resolution time : 0.016s
Backtracks : 24
Current evaluation : 53
Current solution : [0, 1, 3, 7, 15, 24, ...]
Resolution status : False
Resolution time : 0.021s
Backtracks : 36
Current evaluation : 52
Current solution : [0, 1, 3, 7, 16, 21, ...]
Resolution status : False
Resolution time : 0.024s
Backtracks : 63
Current evaluation : 50
Current solution : [0, 1, 3, 7, 18, 28, ...]
Resolution status : False
Resolution time : 0.12s
Backtracks : 279
Current evaluation : 47
Current solution : [0, 1, 3, 10, 16, 21, ...]
Resolution status : False
Resolution time : 0.25s
Backtracks : 827
Current evaluation : 45
Current solution : [0, 1, 4, 13, 24, 30, ...]
Resolution status : False
Resolution time : 0.5s
Backtracks : 1154
Current evaluation : 44
Current solution : [0, 1, 5, 12, 25, 27, ...]
Resolution status : False
Resolution time : 0.66s
Backtracks : 5915
Current evaluation : 44
Current solution : [0, 1, 5, 12, 25, 27, ...]
Resolution status : True
Resolution time : 3.4s
###Markdown
**Important note (click to unfold):**Note how you may build a list of expressions $t_i-t_j$ and pass it to *alldifferent*.You may also wonder why the correction mentions a domain of variables between 0 and $2^n$.Consider the assignment `ticks[i] = 2**i` and see how it is a solution to our satisfaction problem. See how you write $2^i - 2^j$ in binary to convince yourself. Then our problem consists of finding the shortest Golomb rule, that is shorter than $2^n$.Obviously, you are not expected to find this trick by yourself. Initialising the domain to some *reasonably* big range is enough to solve small instances of the problem.**Branch & Bound**: Note how many more backtracks you need to confirm that your better evaluation is the optimal one. Problem 7: (difficult)You are given an 8 pint bucket of water, and two empty buckets which can contain 5 and 3 pints respectively. You are required to divide the water into two by pouring water between buckets (that is, to end up with 4 pints in the 8 pint bucket, and 4 pints in the 5 pint bucket).What is the minimum number of transfers of water between buckets? **Solution for the modelling part (click to unfold)**:The difficult part here is to get what will be the variables we manipulate. We do not know in advance the number of steps. **The number of steps determines the number of variables, so we cannot make it a variable**: we make it a **constant parameter** that we grow until we find a solution.So we have the following table of variables, with the first constraints set:| | $s_3$ | $s_5$ | $s_8$ ||-------|-------|-------|-------|| $t_1$ | 0 | 0 | 8 || $t_2$ | ... | ... | ... || $t_3$ | ... | ... | ... || ... | ... | ... | ... || ... | ... | ... | ... || $t_n$ | 0 | 4 | 4 |Then the constraints that we have to program:- Between two consecutive steps, *exactly two buckets* see their water amount change;- At each step, the total volume of water is constant;- Between two consecutive steps, for all pairs of buckets: - *either* one of the buckets keeps the same amount of water; - *or*, one of the two buckets ends up full; - *or*, one of the two buckets ends up empty. **Important note**: Writing *exactly two buckets see their water amount change* is a bit tricky. You cannot have it with a simple disjunction of constraint (the Python operator `|` which stands for **or**).You would probably end up saying "the first bucket stays untouched" **or** "the second bucket stays untouched" **or** "the third bucket stays untouched" which does not capture the idea that if one bucket stays untouched, there is a movement of water between the others.The key here is to use a mechanism called *constraint reification*, that is we (automatically) associate a binary variable to a constraint, taking the value 0 if the constraint is violated in the current assignment and 1 if the constraint is true.So when we write :```python facile.constraint(sum([e[t][i] != e[t+1][i] for i in something]) == 2)```that is, $$\sum \left(e_{t,i} \neq e_{t+1, i} \right) = 2$$we mean:- that **exactly** `2` "$\neq$" constraints in the included list are verified,- **and** that **exactly** `(len(something) - 2)` "$\neq$" constraints in the included list are violated.
###Code
# %load solutions/buckets.py
# Number of buckets
nb = 3
# Number of steps (let's say we know... :p)
steps = 8
# The capacity of each bucket
capacity = [8, 5, 3]
buckets = [ [facile.variable(range(capacity[b]+1)) for b in range(nb)] for i in range(steps)]
facile.constraint(buckets[0][0] == 8)
facile.constraint(buckets[0][1] == 0)
facile.constraint(buckets[0][2] == 0)
facile.constraint(buckets[steps - 1][0] == 4)
facile.constraint(buckets[steps - 1][1] == 4)
facile.constraint(buckets[steps - 1][2] == 0)
for i in range(steps - 1):
# we change the contents of two buckets at a time
facile.constraint( sum([buckets[i][b] != buckets[i+1][b] for b in range(nb)]) == 2)
# we play with a constant amount of water
facile.constraint(sum([buckets[i][b] for b in range(nb)]) == 8)
for b1 in range(nb):
for b2 in range(b1):
facile.constraint(
# either the content of the bucket does not change
(buckets[i][b1] == buckets[i+1][b1]) |
(buckets[i][b2] == buckets[i+1][b2]) |
# or the bucket ends up empty or full
(buckets[i+1][b1] == 0) | (buckets[i+1][b1] == capacity[b1]) |
(buckets[i+1][b2] == 0) | (buckets[i+1][b2] == capacity[b2])
)
print (facile.solve([b for sub in buckets for b in sub], backtrack=True))
for sub in buckets:
print ([b.value() for b in sub])
###Output
Backtracks : 422
Current solution : [8, 0, 0, 3, 5, 0, ...]
Resolution status : True
Resolution time : 0.046s
[8, 0, 0]
[3, 5, 0]
[3, 2, 3]
[6, 2, 0]
[6, 0, 2]
[1, 5, 2]
[1, 4, 3]
[4, 4, 0]
###Markdown
Constraint programmingWe will use and import the `facile` library ([documentation](https://facile.readthedocs.io)) which gives access to a constraint programming API in Python. This notebook goes through basic notions of constraint programming, at a reasonable pace. The lecturer will explain more advanced concepts as the group moves forward and/or on demand.- Solutions appear on demand as you uncomment the `%load` comments, but the point of the session is to **try first** without being stuck.- You will find in the notebook blocks of different colors:**Questions** appear in yellow.You should **fully understand** what appears in red.Blue blocks push *beyond the scope* of this course.
###Code
import facile
%load_ext lab_black
###Output
_____no_output_____
###Markdown
Example :We consider the problem of two variables taking their values on $a,b \in \{0,1\}$, and constrained by $a \neq b$.Find admissible values for $a$ and $b$. The basic syntax of `facile` goes as follows:
###Code
# Variables
#
# facile.variable may be used with an inclusive min value and an inclusive max value as parameters
# e.g. facile.variable(0,3) --> domain is {0,1,2,3}
# OR
# facile.variable may be given a list of possible values
# e.g. facile.variable([0,3]) --> domain is {0,3} (and neither 1 nor 2 are possible values)
#
# a and b are both defined on {0, 1}
a = facile.variable([0, 1])
b = facile.variable([0, 1])
# Constraints
# Expressions and constraints can be built with usual operators: +, *, <=, etc.
facile.constraint(a != b)
# Resolution
# We want a solution for a and b and get their values with method .value()
sol = facile.solve([a, b])
assert sol, "No solution found"
print("Solution found : a={}, b={}".format(a.value(), b.value()))
###Output
_____no_output_____
###Markdown
Basic problems Problem 1: (super easy)Consider the following problem. Modify it so that it has a solution.
###Code
# Variables
a = facile.variable([0, 1])
b = facile.variable([0, 1])
c = facile.variable([0, 1])
# Constraints
facile.constraint(a != b)
facile.constraint(b != c)
facile.constraint(c != a)
# Resolution
if facile.solve([a, b, c]):
print("Solution found : a=%d, b=%d, c=%d" % (a.value(), b.value(), c.value()))
else:
print("No solution found")
###Output
_____no_output_____
###Markdown
Problem 2: (easy)Find four integers so that their sum is 711 and their product is 711000000.The original problem is stated as follows:A guy walks into a 7-11 store and selects four items to buy. The clerk at the counter informs the gentleman that the total cost of the four items is 7.11 dollars. He was completely surprised that the cost was the same as the name of the store. The clerk informed the man that he simply multiplied the cost of each item and arrived at the total. The customer calmly informed the clerk that the items should be added and not multiplied. The clerk then added the items together and informed the customer that the total was still exactly 7.11 dollars.What are the exact costs of each item?We can find a beautiful [algebraic resolution](http://everydayexplanations.blogspot.fr/2011/08/711-problem.html) which may help to define the domains of each value.
###Code
# Variables
# There is a risk of integer overflow when computing a*b*c*d
# We need small domains...
a = facile.variable(range(0, 320))
b = facile.variable(range(0, 160))
c = facile.variable(range(0, 130))
d = facile.variable(range(0, 130))
# Constraints
# Resolution
sol = facile.solve([a, b, c, d], backtrack=True)
print("Solution found: a={}, b={}, c={}, d={}".format(*sol.solution))
###Output
_____no_output_____
###Markdown
**Let's look further into this!** (out of scope)Let's check how many backtracks occur during the resolution process, and have a look at the constraint propagation effect on the domain of each variable.
###Code
# %load solutions/seven_eleven.py
###Output
_____no_output_____
###Markdown
Problem 3: (easy)Solve SEND + MORE = MONEY (two methods). Pretty-print the result.You may need to use the following constraint:```pythonc1 = facile.alldifferent([a, b, c, ...]) to be posted to the solverfacile.constraint(c1)```
###Code
# %load solutions/send_more_money.py
# %load solutions/send_more_money_alt.py
###Output
_____no_output_____
###Markdown
**Important note**: Even though it is not explicitly mentioned in the problem, **do not** forget to add the $s>0$ and $m>0$ constraints.Look by yourself how the solution makes no sense:
###Code
# %load solutions/send_more_money_wrong.py
###Output
_____no_output_____
###Markdown
Petersen's graph> Let's play with the graphical possibilities of Python!This graph is a particular graph with 10 nodes and 15 edges. We want to find a colouring of this graph, i.e. colour the nodes so that no two neighbouring nodes have the same colour.**Important note**: You do not need to worry about the coordinates of each point as they are not decision variables (they just help to plot). However, you should have a look at each `plt.plot` command in `plot_edges` as they are related to a constraint you have to write.**Really important note**: Take some time on paper first to think about what to choose as **decision variables**.
###Code
from math import pi, cos, sin
import matplotlib.pyplot as plt
# Five angles π/2 + i * 72°
angles = [i * 2 * pi / 5 + pi / 2 for i in range(5)]
# The five nodes in the inner circle
points = [(cos(t), sin(t)) for t in angles]
# The five nodes in the outer circle
points += [(2 * cos(t), 2 * sin(t)) for t in angles]
# Shortcut for the x-y coordinates of each node
x = [x for x, _ in points]
y = [y for _, y in points]
def plot_edges():
"""Plot the graph without colouring the nodes."""
plt.axes(frameon=False, aspect=1)
plt.xlim(-2.5, 2.5)
plt.ylim(-2.5, 2.5)
# Build edges between the five nodes in the inner circle
for i in range(5):
j, j_ = i, (i + 2) % 5 # % (modulo -> j=4, j_=0)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color="grey")
# Build edges between the inner and the outer circle
for i in range(5):
plt.plot([x[i], x[i + 5]], [y[i], y[i + 5]], color="grey")
# Build edges between the five nodes on the outer circle
for i in range(5):
j, j_ = 5 + i, 5 + (i + 1) % 5 # % (modulo -> j=9, j_=5)
plt.plot([x[j], x[j_]], [y[j], y[j_]], color="grey")
plot_edges()
# Colouring nodes
for i, (x_, y_) in enumerate(points):
plt.plot(x_, y_, "ro")  # 'r' for red, 'o' for the marker shape
###Output
_____no_output_____
###Markdown
Problem 4: (easy)How many colours do you need to colour Petersen's graph? Print the coloured graph.
###Code
# %load solutions/petersen.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**The real question to answer is "how many colours do you need?". You may feel the urge to model the problem with a variable representing the number of colours. But take a step back, and think differently: try to solve the problem with 1 colour, then 2 colours and you find no solution. If you try 3 colours you find a solution, so 3 is the answer you want. The $n$-queen problem Problem 5: (intermediate)Solve the 8-queen problem and pretty-print the solution. You may generalise your procedure for $n$ queens.
###Code
# %load solutions/nqueens.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**The use of the *alldifferent* constraint is fundamental. You may think it is easier to write all the $\neq$-constraints, but look by yourself:
###Code
# %load solutions/lazy_nqueens.py
%timeit n_queens(12)
%timeit lazy_n_queens(12)
###Output
_____no_output_____
###Markdown
**Let's look further into this!** (out of scope)The heuristics on the choice of the next variable have a significant impact on the number of backtracks.By default, we choose the queens in order:- `min_domain` chooses the next variable as the one with the smallest domain after propagation;- `min_min` chooses the next variable as the one with the smallest minimum value in its domain after propagation;- `queen` is the optimal strategy for the n-queen problem and combines the two previous strategies.
###Code
def queen_strategy(queens):
if len([q.value() for q in queens if q.value() is None]) == 0:
return -1
else:
min_ = min(
(len(q.domain()), q.min(), i)
for i, q in enumerate(queens)
if q.value() is None
)
return min_[2]
print(n_queens(10, backtrack=True))
print(n_queens(10, strategy="min_domain", backtrack=True))
print(n_queens(10, strategy="min_min", backtrack=True))
print(n_queens(10, strategy=queen_strategy, backtrack=True))
fig = plt.figure()
fig.set_size_inches(8, 5.6)
interval = range(2, 20)
plt.plot(interval, [(n_queens(i, backtrack=True).backtrack) for i in interval])
plt.plot(
interval,
[(n_queens(i, strategy="min_domain", backtrack=True).backtrack) for i in interval],
)
plt.plot(
interval,
[
(n_queens(i, strategy=queen_strategy, backtrack=True).backtrack)
for i in interval
],
lw=2,
)
plt.axis((interval.start, interval.stop, 0, 70))
plt.legend(["regular", "min_domain", "queen"])
print(n_queens(1001, strategy="queen", backtrack=True))
###Output
_____no_output_____
###Markdown
Optimisation`facile.solve` solves constraint satisfaction problems. You may solve optimisation problems with `facile.minimize`. **Example :**Find $x,y \in [0,3]$, constrained by $x \neq y$, so that $x + y$ is maximum.
###Code
x, y = [facile.variable(range(5)) for i in range(2)]
facile.constraint(x != y)
# The second parameter represents the expression to minimize.
sol = facile.minimize([x, y], y)
print(sol)
# You may have access to different parameters in the solution
print(sol.keys())
# The most useful are probably the two following ones
sol.evaluation, sol.solution
###Output
_____no_output_____
###Markdown
**Problem 6: (intermediate)** A Golomb ruler is a set of integers (marks) $a_1 < a_2 < \cdots < a_k$ such that all the differences $a_i - a_j$ ($i > j$) are distinct. Clearly we may assume $a_1 = 0$. Then $a_k$ is the length of the Golomb ruler. For a given number of marks $n$, we want to find the shortest Golomb rulers.*Note:* Above $n = 10$ the resolution time may be too long.
###Code
# %load solutions/golomb.py
###Output
_____no_output_____
###Markdown
**Important note (click to unfold):**Note how you may build a list of expressions $t_i-t_j$ and pass it to *alldifferent*.You may also wonder why the correction mentions a domain of variables between 0 and $2^n$.Consider the assignment `ticks[i] = 2**i` and see how it is a solution to our satisfaction problem. See how you write $2^i - 2^j$ in binary to convince yourself. Then our problem consists of finding the shortest Golomb rule, that is shorter than $2^n$.Obviously, you are not expected to find this trick by yourself. Initialising the domain to some *reasonably* big range is enough to solve small instances of the problem.**Branch & Bound**: Note how many more backtracks you need to confirm that your better evaluation is the optimal one. Problem 7: (difficult)You are given an 8 pint bucket of water, and two empty buckets which can contain 5 and 3 pints respectively. You are required to divide the water into two by pouring water between buckets (that is, to end up with 4 pints in the 8 pint bucket, and 4 pints in the 5 pint bucket).What is the minimum number of transfers of water between buckets? **Solution for the modelling part (click to unfold)**:The difficult part here is to get what will be the variables we manipulate. We do not know in advance the number of steps. **The number of steps determines the number of variables, so we cannot make it a variable**: we make it a **constant parameter** that we grow until we find a solution.So we have the following table of variables, with the first constraints set:| | $s_3$ | $s_5$ | $s_8$ ||-------|-------|-------|-------|| $t_1$ | 0 | 0 | 8 || $t_2$ | ... | ... | ... || $t_3$ | ... | ... | ... || ... | ... | ... | ... || ... | ... | ... | ... || $t_n$ | 0 | 4 | 4 |Then the constraints that we have to program:- Between two consecutive steps, *exactly two buckets* see their water amount change;- At each step, the total volume of water is constant;- Between two consecutive steps, for all pairs of buckets: - *either* one of the buckets keeps the same amount of water; - *or*, one of the two buckets ends up full; - *or*, one of the two buckets ends up empty. **Important note**: Writing *exactly two buckets see their water amount change* is a bit tricky. You cannot have it with a simple disjunction of constraint (the Python operator `|` which stands for **or**).You would probably end up saying "the first bucket stays untouched" **or** "the second bucket stays untouched" **or** "the third bucket stays untouched" which does not capture the idea that if one bucket stays untouched, there is a movement of water between the others.The key here is to use a mechanism called *constraint reification*, that is we (automatically) associate a binary variable to a constraint, taking the value 0 if the constraint is violated in the current assignment and 1 if the constraint is true.So when we write :```python facile.constraint(sum([e[t][i] != e[t+1][i] for i in something]) == 2)```that is, $$\sum \left(e_{t,i} \neq e_{t+1, i} \right) = 2$$we mean:- that **exactly** `2` "$\neq$" constraints in the included list are verified,- **and** that **exactly** `(len(something) - 2)` "$\neq$" constraints in the included list are violated.
###Code
# %load solutions/buckets.py
###Output
_____no_output_____ |
zz__Test_Predict_VG.ipynb | ###Markdown
Notes- 'asin' = product ID- 'reviewText' = the review text- 'overall' = the star rating
###Code
vg.info()
# clean up nan values and change datatype
vg = vg.dropna(how='any')
vg.loc[:,'overall'] = vg.overall.astype('int16')
vg.info()
vg.shape
vg.overall.value_counts()
# map the sentiment
vg.loc[:,'sentiment'] = vg.overall.map({1: 1, 2: 1, 3: 2, 4: 3, 5: 3}).astype('category')
# map the sentiment
vg.loc[:,'pt_sentiment'] = vg.overall.map({1: 0, 2: 0, 3: 1,
4: 1, 5: 1}).astype('int16')
vg.info()
vg.pt_sentiment.value_counts()
# split the data for fasttext
train_text, test_text, train_labels, test_labels = train_test_split(vg.reviewText,
vg.pt_sentiment,
test_size=0.25,
random_state=42,
stratify=vg.pt_sentiment)
from flair.models import TextClassifier
classifier = TextClassifier.load('en-sentiment')
from flair.data import Sentence
scores = defaultdict(float)
values = defaultdict(str)
i = 0
len(test_text)
# a list of your sentences
# sentences = [Sentence(text) for text in test_text]
sentences1 = []
sentences2 = []
sentences3 = []
sentences4 = []
sentences5 = []
test_text1 = test_text[:25000]
test_text2 = test_text[25000:50000]
test_text3 = test_text[50000:75000]
test_text4 = test_text[75000:100000]
test_text5 = test_text[100000:]
print(len(test_text1) + len(test_text2) + len(test_text3) + len(test_text4) + len(test_text5))
sentences1 = [Sentence(text) for text in test_text1]
# create default dicts for predictions
from collections import defaultdict
scores = defaultdict(float)
values = defaultdict(str)
# predict for all sentences
classifier.predict(sentences1, mini_batch_size=32)
# check predictions
i = 0
for sentence in sentences1:
scores[i] = sentence.labels[0].score
values[i] = sentence.labels[0].value
i+=1
test_predictions = pd.DataFrame({'probability': scores, 'prediction': values})
# append the other sentencesi results when finished
###Output
_____no_output_____ |
Past/DSS/Math/180202_Optimization_3.ipynb | ###Markdown
Optimization problems with equality constraints Real-world optimization problems often come with several constraints; the simplest case is an equality constraint.$$ x^* = \arg \min_{x}f(x) \;\; (x \in \mathbf{R}^N)$$$$ g_j(x) = 0 \;\; (j = 1, \cdots, M) $$Suppose the objective function is$$ f(x_1, x_2) = x_1^2 + x_2^2 $$and consider the case with the equality constraint$$ g(x_1, x_2) = x_1 + x_2 -1 = 0 $$This problem amounts to finding the point on the line defined by $g(x_1, x_2) = 0$ at which the value of $f(x_1, x_2)$ is smallest.
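A quick numeric sanity check of this claim (a sketch, not part of the original notebook): parametrise the constraint line by $x_1$, so that $x_2 = 1 - x_1$, and scan the objective along it. The grid and step size below are arbitrary choices.

```python
import numpy as np

# restrict f(x1, x2) = x1**2 + x2**2 to the line x1 + x2 = 1 and scan it
x1_line = np.linspace(-5, 5, 1001)
f_on_line = x1_line ** 2 + (1 - x1_line) ** 2
print(x1_line[np.argmin(f_on_line)])  # minimum near x1 = 0.5, hence x2 = 0.5
```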
###Code
def f1(x1, x2):
return x1 ** 2 + x2 ** 2
x1 = np.linspace(-5, 5, 100)
x2 = np.linspace(-3, 3, 100)
X1, X2 = np.meshgrid(x1, x2)
Y = f1(X1, X2)
# constraint function g(x) = x1 + x2 - 1 = 0
x2_g = 1 - x1
%matplotlib inline
plt.contour(X1, X2, Y, colors="gray", levels=[0.5, 2, 8, 32])
plt.plot(x1, x2_g, 'g-')
plt.plot([0], [0], 'rP')
plt.plot([0.5], [0.5], 'ro', ms=10)
plt.xlim(-5, 5)
plt.ylim(-3, 3)
plt.xticks(np.linspace(-4, 4, 9))
plt.show()
###Output
_____no_output_____
###Markdown
Constrained optimization problems of this kind can be solved with Lagrange multipliers. The Lagrange multiplier method uses $h(x, \lambda)$ rather than $f(x)$ as the target function$$ h(x,\lambda) = f(x) + \sum_{j=1}^{M}\lambda_jg_j(x) $$$h$ should fulfill these conditions$$\begin{eqnarray}\dfrac{\partial h(x, \lambda)}{\partial x_1} &=& \dfrac{\partial f}{\partial x_1} + \sum_{j=1}^M \lambda_j\dfrac{\partial g_j}{\partial x_1} = 0 \\\dfrac{\partial h(x, \lambda)}{\partial x_2} &=& \dfrac{\partial f}{\partial x_2} + \sum_{j=1}^M \lambda_j\dfrac{\partial g_j}{\partial x_2} = 0 \\\vdots & & \\\dfrac{\partial h(x, \lambda)}{\partial x_N} &=& \dfrac{\partial f}{\partial x_N} + \sum_{j=1}^M \lambda_j\dfrac{\partial g_j}{\partial x_N} = 0 \\\dfrac{\partial h(x, \lambda)}{\partial \lambda_1} &=& g_1 = 0 \\\vdots & & \\\dfrac{\partial h(x, \lambda)}{\partial \lambda_M} &=& g_M = 0 \end{eqnarray}$$Solving the above system of $N + M$ equations yields the $N+M$ unknowns $x_1, x_2, \ldots, x_N, \lambda_1, \ldots , \lambda_M$; the $x$ values give the location of the minimum that satisfies the constraints. Example:$$f(x_1, x_2) = - \log{x_1} - \log{x_2} \\ x_1, x_2 > 0\\ \text{s.t.} \;\; x_1 + x_2 = 1 $$The constraint becomes $g$:$$\text{s.t} \;\;x_1 + x_2 - 1 = 0 = g(x_1, x_2) $$Applying the Lagrange multiplier, we look for the point where the derivatives are zero.$$ h = f + \lambda g = -\log x_1 - \log x_2 + \lambda(x_1 + x_2 -1) $$$$\begin{eqnarray}\dfrac{\partial h}{\partial x_1} &=& -\dfrac{1}{x_1} + \lambda = 0 \\\dfrac{\partial h}{\partial x_2} &=& -\dfrac{1}{x_2} + \lambda = 0 \\\dfrac{\partial h}{\partial \lambda} &=& x_1 + x_2 - 1 = 0 \end{eqnarray}$$$$x_1 = x_2 = \dfrac{1}{2}, \;\;\; \lambda = 2$$ SciPy : optimize package- The `fmin_slsqp` command is provided for solving constrained optimization problems. - slsqp : Sequential Least SQuares Programming - eqcons : the equality constraints, s.t. (subject to)
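As a cross-check of the hand-derived result $x_1 = x_2 = 1/2$, $\lambda = 2$, here is a small symbolic sketch; it assumes `sympy` is available and is not part of the original notebook.

```python
import sympy

x1, x2, lam = sympy.symbols("x1 x2 lambda", positive=True)
h = -sympy.log(x1) - sympy.log(x2) + lam * (x1 + x2 - 1)

# stationary point of the Lagrangian: all partial derivatives equal to zero
equations = [sympy.diff(h, v) for v in (x1, x2, lam)]
print(sympy.solve(equations, [x1, x2, lam], dict=True))  # x1 = x2 = 1/2, lambda = 2
```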
###Code
def f1logs(x):
return -np.log(x[0]) - np.log(x[1])
def eq_constraint(x):
return x[0] +x[1] -1
sp.optimize.fmin_slsqp(f1logs, np.array([1, 1]), eqcons=[eq_constraint])
###Output
Optimization terminated successfully. (Exit mode 0)
Current function value: 1.3862943611198901
Iterations: 2
Function evaluations: 8
Gradient evaluations: 2
###Markdown
--- Optimization problems with inequality constraints We optimize the same objective function $h$ as in the equality-constrained case, with a few differences. The constraints now take the form $g_j(x) \leq 0 \;\;(j = 1, \cdots, M)$, and there are three necessary conditions, known as the KKT conditions.(1) The derivative with respect to every independent variable is 0- this is the same as for equality constraints.$$ \dfrac{\partial h(x, \lambda)}{\partial x_i} = 0$$(2) **The product of every Lagrange multiplier with its inequality constraint is 0**$$\lambda_j \cdot \dfrac{\partial h(x, \lambda)}{\partial \lambda_j} = \lambda \cdot g_j = 0$$(3) Non-negative Lagrange multipliers$$ \lambda_j \geq 0 $$The last condition guarantees that the KKT conditions really describe an inequality-constrained problem. The second condition says that the derivative of the objective with respect to the multiplier does not have to be 0: when $g$ is not 0, the condition still holds as long as the Lagrange multiplier $\lambda$ is 0.The reason is that, when solving an optimization problem with inequality constraints, each constraint is in practice one of the following two cases:1. a useless constraint that has no influence on the optimization result;1. a constraint that does influence the result and is effectively an **equality** constraint.- If $\lambda^* = 0$ in $h(x^*, \lambda^*)$, then $h(x^*, \lambda^*) = f(x^*)$, so the constraint has no effect.- If $g(x^*) = 0$, the constraint is an equality constraint. SciPy The `fmin_slsqp` command can also be used when there are inequality constraints.- However, the inequalities passed to the ieqcons argument must be written so that they evaluate to 0 or a positive value (see the sketch below).
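To make that sign convention concrete, here is a minimal sketch with a hypothetical constraint (not one from this notebook): a condition such as $x_1 + x_2 \leq 1$ has to be handed to `ieqcons` as a function that is non-negative exactly on the feasible set.

```python
def ieq_example(x):
    # x1 + x2 <= 1 rewritten as 1 - x1 - x2 >= 0, the form expected by ieqcons
    return 1 - x[0] - x[1]
```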
###Code
def f2(x):
return np.sqrt((x[0] - 4) ** 2 + (x[1] - 2)** 2)
# def eq_constraint(x):
# return x[0] +x[1] -1
def ieq_constraint(x):
return np.atleast_1d(1 - np.sum(np.abs(x)))
sp.optimize.fmin_slsqp(f2, np.array([0, 0]), ieqcons=[ieq_constraint])
###Output
Optimization terminated successfully. (Exit mode 0)
Current function value: 3.6055512804550336
Iterations: 11
Function evaluations: 77
Gradient evaluations: 11
###Markdown
--- LP, QP Linear Programming A problem that minimizes the value of a linear model (linear combination) subject to equality or inequality constraints is called an LP problem.$$\begin{eqnarray}\min_x c^Tx \\Ax = b \\x \geq 0\end{eqnarray}$$The second expression is the constraint, and the last one means that every element of the vector x must be greater than or equal to 0.This form is called the standard form of an LP problem. When a solution exists, the problem is said to be **feasible**; when none exists, it is **infeasible**. Example:$$\min_x \begin{bmatrix} -4 & -3 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\\end{bmatrix}$$$$\begin{bmatrix}1 & 1 & 1 & 0 & 0 \\2 & 1 & 0 & 1 & 0 \\3 & 4 & 0 & 0 & 1 \\\end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5\end{bmatrix}=\begin{bmatrix}100 \\ 150 \\ 360\end{bmatrix}$$$$\begin{bmatrix}x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5\end{bmatrix}\geq\begin{bmatrix}0 \\ 0 \\ 0 \\ 0 \\ 0\end{bmatrix}$$ SciPy LP problems can be solved with the `scipy.optimize.linprog` command. - The keyword argument names `A_eq` and `b_eq` must be written out explicitly.$$\begin{eqnarray}\min_x c^Tx \\\text{s.t.} \;\;Ax = b \\x \geq 0\end{eqnarray}$$
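Since $x_3, x_4, x_5$ only act as slack variables in the example above, the same problem can equivalently be passed to `linprog` as an inequality-constrained problem in $x_1, x_2$ alone. The sketch below uses the right-hand side values from the formulas above; variable bounds default to $x \geq 0$ in `linprog`.

```python
import numpy as np
import scipy.optimize

A_ub = np.array([[1, 1], [2, 1], [3, 4]])
b_ub = np.array([100, 150, 360])
c = np.array([-4, -3])
print(scipy.optimize.linprog(c, A_ub=A_ub, b_ub=b_ub))
```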
###Code
A = np.array([[1, 1, 1, 0, 0],
[2, 1, 0, 1, 0],
[3, 4, 0, 0, 1]])
b = np.array([100, 150, 350])
c = np.array([-4, -3, 0, 0, 0])
sp.optimize.linprog(c, A_eq=A, b_eq=b) # this case is feasible
###Output
_____no_output_____ |
.ipynb_checkpoints/LSM_SVMclassifier-checkpoint.ipynb | ###Markdown
Reservoir Dimension set
###Code
nx = 5
ny = 5
nz = 5
N = nx*ny*nz # Reservoir size
N_read = 10 # No. of Readout neurons
###Output
_____no_output_____
###Markdown
Important constants related to LIF neuron and synaptic model
###Code
global vrest, vth, t_refrac
vrest, vth, t_refrac = 0, 20, 2
tau_m = 32
params_potential = {'C':1, 'g_L':1/tau_m, 'E_L':vrest, 'V_T':vth, 'R_p':t_refrac}
Delay = 1 #constant delay for all synapses in ms
tau_c = 64
C_theta = 5
del_C = 3
n_bits = 8
delta_c = 1
params_conc = {'C_theta':C_theta, 'del_C':del_C, 'tau_c':64, 'nbits':n_bits, 'delta_c':delta_c}
syn_string = "second-order"
sampling_freq = 12.5 # in khz
h = 1 # in ms
α_w = 0.8
time_params = {'h':h, 'Delay':Delay}
###Output
_____no_output_____
###Markdown
Set Reservoir Connections
###Code
# Storing the IDs of the neurons
LSM_ID = np.zeros((nx,ny,nz),dtype=np.int64)
l = 0
for i in range(nx):
for j in range(ny):
for k in range(nz):
LSM_ID[i,j,k] = l
l = l + 1
# Storing the synapse connections, and creating the initial weight matrix
k_prob = [0.45, 0.3, 0.6, 0.15]
r_sq = 2**2
W_arr = [3, 6, -2, -2]
W_init = 3
Weights_temp = np.zeros((N,N))
N_in = int(N*0.8)
neuron_type = [ int(i<N_in) for i in range(N)]
seed(seedval)
shuffle(neuron_type) # 1 for excitatory, 0 for inhibitory
synapes = [dict() for i in range(N)] # an array of dictonaries which store the location of neuron,
# type of neuron, and the IDs of the neurons it is connected to
for l in range(N):
loc = CompactLSM.ID_to_ind(nx,ny,nz,l)
n_type = neuron_type[l]
cons = []
for i in range(nx):
for j in range(ny):
for k in range(nz):
if l != int(LSM_ID[i,j,k]):
dist_sq = (loc[0]-i)**2 + (loc[1]-j)**2 + (loc[2]-k)**2
k_probl = 0
if n_type == 1:
if neuron_type[int(LSM_ID[i,j,k])] == 1:
k_probl = k_prob[0]
W_init = W_arr[0]
else:
k_probl = k_prob[1]
W_init = W_arr[1]
else:
if neuron_type[int(LSM_ID[i,j,k])] == 1:
k_probl = k_prob[2]
W_init = W_arr[2]
else:
k_probl = k_prob[3]
W_init = W_arr[3]
probability = k_probl* exp(-1*dist_sq/r_sq)
# print(probability)
check = binomial(1,probability)
if check == 1:
cons.append(int(LSM_ID[i,j,k]))
Weights_temp[l,int(LSM_ID[i,j,k])] = W_init
synapes[l] = {"Location":loc, "Neuron_type":n_type, "connections":cons}
global Weights
Weights = Weights_temp * α_w
print("Total synapse:", len(np.argwhere(Weights!=0)),
",E --> E :", len(np.argwhere(Weights==W_arr[0] * α_w)),
",E --> I:",len(np.argwhere(Weights==W_arr[1] * α_w)),
",I --> E/I:",len(np.argwhere(Weights==W_arr[2] * α_w)))
i = 64
print("Total Connections: for neuron {}:{}, {}".format(i,synapes[i]["Neuron_type"],synapes[i]["connections"]) )
Weights[1,2]
###Output
Total synapse: 1115 ,E --> E : 724 ,E --> I: 121 ,I --> E/I: 270
Total Connections: for neuron 64:1, [7, 33, 38, 58, 61, 67, 69, 74, 78, 83, 98, 107, 118, 119, 122]
###Markdown
Set Readout neuron initial Weights
###Code
All_labels = [str(x) for x in range(10)]
# N_read = 10 # No. of Readout neurons
Weights_temp_readOut = -8 + 16 * np.random.rand(N_read, N) # random weight initialization
synapes_read = [] # an array of dictonaries which store the label of neuron,
# and the IDs of the neurons it is connected to
for l in range(N_read):
label = All_labels[l]
synapes_read.append(label)
Weights_readOut = Weights_temp_readOut
# creating file location and label arrays for train and validate
base = 'PreProcessing/trainBSA'
os.listdir(base)
All_Labels = []
file_name_List = []
for human in os.listdir(base):
base_up = base + '/' + human
for train_sample in os.listdir(base_up):
train_Label = train_sample[0:2]
file_loc = base_up + '/' + train_sample
file_name_List.append(file_loc)
All_Labels.append(train_Label)
L = 78
Fin = 4
reservoir_ID = [i for i in range(N)]
seed(seedval)
Input_CXNs = choice(reservoir_ID, size = (L,Fin))
sign_win_matrix = (binomial(1,1/2, size = (L, Fin)) - 0.5)*2
# Input_CXNs * sign_win_matrix
###Output
_____no_output_____
###Markdown
Finding input neurons to reservoir current and then using the spike train to find the current input to the reservoir
###Code
# print("Input neurons =",L)
print("Size of Reservoir =",nx,"X",ny,"X",nz,",Total total neurons =",N)
print("Total no.of read out neurons =",N_read)
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay, accuracy_score
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split
from sklearn import svm
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import rc
matplotlib.rcParams['text.usetex'] = True
import matplotlib.pylab as pylab
params = {'legend.fontsize': 'large',
'figure.figsize': (6.4 * 2, 4.8 * 2),
'axes.labelsize': 'x-large',
'axes.titlesize':'x-large',
'xtick.labelsize':'medium',
'ytick.labelsize':'medium',
'figure.titlesize':'xx-large'}
pylab.rcParams.update(params)
###Output
_____no_output_____
###Markdown
Network Learning with different synaptic configurations
###Code
syn_string = "static"
############### Training ###################
Input_gen_func = CompactLSM.Input_current_gen(file_name_List,
syn_string,
N,
time_params,
Input_CXNs,
sign_win_matrix,
training=True,
train_Labels=All_Labels)
NUM_INPUTS = len(All_Labels)
X = np.zeros((NUM_INPUTS,N))
Y = np.zeros((NUM_INPUTS,))
for i in range(NUM_INPUTS):
In_app, L, M, Label, input_num, In_spikes = next(
Input_gen_func) # Generates next input
[Reservoir_potential, Reservoir_Spikes
] = CompactLSM.reservoir_solver(N, Delay, synapes, M, h, In_app,
params_potential, Weights, syn_string)
X[i,:] = np.sum(Reservoir_Spikes,axis=1)
Y[i] = Label
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=seedval)
#Create a svm Classifier
clf = svm.SVC(kernel='linear', verbose=True, tol=1e-10) # Linear Kernel
#Train the model using the training sets
clf.fit(X_train, Y_train)
print('Training Completed \n')
#Predict the response for test dataset
Y_pred = clf.predict(X_val)
Acc = accuracy_score(Y_val, Y_pred)
print('Validation Completed \n')
display_string = "Accuracy:{}".format(Acc * 100)
print(display_string)
cm_validate = confusion_matrix(Y_val,
Y_pred,
labels=[i for i in range(10)])
disp_validate = ConfusionMatrixDisplay(
confusion_matrix=cm_validate, display_labels=[i for i in range(10)])
disp_validate.plot();
plt.show()
syn_string = "first-order"
############### Training ###################
Input_gen_func = CompactLSM.Input_current_gen(file_name_List,
syn_string,
N,
time_params,
Input_CXNs,
sign_win_matrix,
training=True,
train_Labels=All_Labels)
NUM_INPUTS = len(All_Labels)
X = np.zeros((NUM_INPUTS,N))
Y = np.zeros((NUM_INPUTS,))
for i in range(NUM_INPUTS):
In_app, L, M, Label, input_num, In_spikes = next(
Input_gen_func) # Generates next input
[Reservoir_potential, Reservoir_Spikes
] = CompactLSM.reservoir_solver(N, Delay, synapes, M, h, In_app,
params_potential, Weights, syn_string)
X[i,:] = np.sum(Reservoir_Spikes,axis=1)
Y[i] = Label
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=seedval)
#Create a svm Classifier
clf = svm.SVC(kernel='linear', verbose=True, tol=1e-10) # Linear Kernel
#Train the model using the training sets
clf.fit(X_train, Y_train)
print('Training Completed \n')
#Predict the response for test dataset
Y_pred = clf.predict(X_val)
Acc = accuracy_score(Y_val, Y_pred)
print('Validation Completed \n')
display_string = "Accuracy:{}".format(Acc * 100)
print(display_string)
cm_validate = confusion_matrix(Y_val,
Y_pred,
labels=[i for i in range(10)])
disp_validate = ConfusionMatrixDisplay(
confusion_matrix=cm_validate, display_labels=[i for i in range(10)])
disp_validate.plot();
plt.show()
syn_string = "second-order"
############### Training ###################
Input_gen_func = CompactLSM.Input_current_gen(file_name_List,
syn_string,
N,
time_params,
Input_CXNs,
sign_win_matrix,
training=True,
train_Labels=All_Labels)
NUM_INPUTS = len(All_Labels)
X = np.zeros((NUM_INPUTS,N))
Y = np.zeros((NUM_INPUTS,))
for i in range(NUM_INPUTS):
In_app, L, M, Label, input_num, In_spikes = next(
Input_gen_func) # Generates next input
[Reservoir_potential, Reservoir_Spikes
] = CompactLSM.reservoir_solver(N, Delay, synapes, M, h, In_app,
params_potential, Weights, syn_string)
X[i,:] = np.sum(Reservoir_Spikes,axis=1)
Y[i] = Label
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=seedval)
#Create a svm Classifier
clf = svm.SVC(kernel='linear', verbose=True, tol=1e-10) # Linear Kernel
#Train the model using the training sets
clf.fit(X_train, Y_train)
print('Training Completed \n')
#Predict the response for test dataset
Y_pred = clf.predict(X_val)
Acc = accuracy_score(Y_val, Y_pred)
print('Validation Completed \n')
display_string = "Accuracy:{}".format(Acc * 100)
print(display_string)
cm_validate = confusion_matrix(Y_val,
Y_pred,
labels=[i for i in range(10)])
disp_validate = ConfusionMatrixDisplay(
confusion_matrix=cm_validate, display_labels=[i for i in range(10)])
disp_validate.plot();
plt.show()
###Output
[LibSVM]Training Completed
Validation Completed
Accuracy:92.5
###Markdown
Network Learning without reservoir, only input spikes
###Code
############### Training ###################
Input_gen_func = CompactLSM.Input_current_gen(file_name_List,
syn_string,
N,
time_params,
Input_CXNs,
sign_win_matrix,
training=True,
train_Labels=All_Labels)
NUM_INPUTS = len(All_Labels)
X = np.zeros((NUM_INPUTS,78))
Y = np.zeros((NUM_INPUTS,))
for i in range(NUM_INPUTS):
In_app, L, M, Label, input_num, In_spikes = next(
Input_gen_func) # Generates next input
X[i,:] = np.sum(In_spikes,axis=1)
Y[i] = Label
X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size=0.2, random_state=seedval)
#Create a svm Classifier
clf = svm.SVC(kernel='linear', verbose=True, tol=1e-10) # Linear Kernel
#Train the model using the training sets
clf.fit(X_train, Y_train)
print('Training Completed \n')
#Predict the response for test dataset
Y_pred = clf.predict(X_val)
Acc = accuracy_score(Y_val, Y_pred)
print('Validation Completed \n')
display_string = "Accuracy:{}".format(Acc * 100)
print(display_string)
cm_validate = confusion_matrix(Y_val,
Y_pred,
labels=[i for i in range(10)])
disp_validate = ConfusionMatrixDisplay(
confusion_matrix=cm_validate, display_labels=[i for i in range(10)])
disp_validate.plot();
plt.show()
###Output
[LibSVM]Training Completed
Validation Completed
Accuracy:91.66666666666666
|
tutorials/W2D2_LinearSystems/student/W2D2_Outro.ipynb | ###Markdown
[](https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W2D2_LinearSystems/student/W2D2_Outro.ipynb) Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hz4y1D7us", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r8Jj81r7vK8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/dx6je/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hz4y1D7us", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r8Jj81r7vK8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
ReflectionsDon't forget to complete your reflections and content checks! Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/dx6je/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hz4y1D7us", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r8Jj81r7vK8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/dx6je/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____
###Markdown
Outro **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs** Video
###Code
# @markdown
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = "https://player.bilibili.com/player.html?bvid={0}&page={1}".format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id=f"BV1hz4y1D7us", width=854, height=480, fs=1)
print("Video available at https://www.bilibili.com/video/{0}".format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id=f"r8Jj81r7vK8", width=854, height=480, fs=1, rel=0)
print("Video available at https://youtube.com/watch?v=" + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
###Output
_____no_output_____
###Markdown
Daily surveyDon't forget to complete your reflections and content check in the daily survey! Please be patient after logging in as there is a small delay before you will be redirected to the survey. Slides
###Code
# @markdown
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/dx6je/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
###Output
_____no_output_____ |
Convnet Kaggle.ipynb | ###Markdown
Using convnets with small datasets
###Code
import keras
keras.__version__
###Output
Using TensorFlow backend.
###Markdown
Downloading the data
###Code
import os, shutil
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/Users/GAO/Downloads/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/Users/GAO/Downloads/cats_and_dogs_small'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
os.mkdir(train_cats_dir)
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
os.mkdir(train_dogs_dir)
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
os.mkdir(validation_cats_dir)
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
os.mkdir(validation_dogs_dir)
# Directory with our validation cat pictures
test_cats_dir = os.path.join(test_dir, 'cats')
os.mkdir(test_cats_dir)
# Directory with our validation dog pictures
test_dogs_dir = os.path.join(test_dir, 'dogs')
os.mkdir(test_dogs_dir)
# Copy first 1000 cat images to train_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to validation_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 cat images to test_cats_dir
fnames = ['cat.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_cats_dir, fname)
shutil.copyfile(src, dst)
# Copy first 1000 dog images to train_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(train_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to validation_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1000, 1500)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(validation_dogs_dir, fname)
shutil.copyfile(src, dst)
# Copy next 500 dog images to test_dogs_dir
fnames = ['dog.{}.jpg'.format(i) for i in range(1500, 2000)]
for fname in fnames:
src = os.path.join(original_dataset_dir, fname)
dst = os.path.join(test_dogs_dir, fname)
shutil.copyfile(src, dst)
print('total training cat images:', len(os.listdir(train_cats_dir)))
print('total training dog images:', len(os.listdir(train_dogs_dir)))
print('total validation cat images:', len(os.listdir(validation_cats_dir)))
print('total validation dog images:', len(os.listdir(validation_dogs_dir)))
print('total test cat images:', len(os.listdir(test_cats_dir)))
print('total test dog images:', len(os.listdir(test_dogs_dir)))
###Output
total training cat images: 1000
total training dog images: 1000
total validation cat images: 500
total validation dog images: 500
total test cat images: 500
total test dog images: 500
###Markdown
Building our network
###Code
from keras import layers
from keras import models
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.summary()
from keras import optimizers
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
###Output
_____no_output_____
###Markdown
Data preprocessing
###Code
from keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1./255)
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='binary')
for data_batch, labels_batch in train_generator:
print('data batch shape:', data_batch.shape)
print('labels batch shape:', labels_batch.shape)
break
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50, verbose=2)
model.save('data/cats_and_dogs_small_1.h5')
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Using data augmentation
###Code
datagen = ImageDataGenerator(
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# This is module with image preprocessing utilities
from keras.preprocessing import image
fnames = [os.path.join(train_cats_dir, fname) for fname in os.listdir(train_cats_dir)]
# We pick one image to "augment"
img_path = fnames[3]
# Read the image and resize it
img = image.load_img(img_path, target_size=(150, 150))
# Convert it to a Numpy array with shape (150, 150, 3)
x = image.img_to_array(img)
# Reshape it to (1, 150, 150, 3)
x = x.reshape((1,) + x.shape)
# The .flow() command below generates batches of randomly transformed images.
# It will loop indefinitely, so we need to `break` the loop at some point!
i = 0
for batch in datagen.flow(x, batch_size=1):
plt.figure(i)
imgplot = plt.imshow(image.array_to_img(batch[0]))
i += 1
if i % 4 == 0:
break
plt.show()
model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu',
input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-4),
metrics=['acc'])
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,)
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=32,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=32,
class_mode='binary')
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50, verbose=2)
model.save('data/cats_and_dogs_small_2.h5')
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.show()
###Output
_____no_output_____ |
diabetes.ipynb | ###Markdown
###Code
from sklearn.datasets import load_diabetes
data = load_diabetes()
data.data
data.DESCR
data.target
###Output
_____no_output_____
###Markdown
**Importing the Libraries**
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import scipy
###Output
_____no_output_____
###Markdown
Use the diabetes data set from UCI and the Pima Indians Diabetes data set to perform the following:
(i) Univariate analysis: frequency, mean, median, mode, variance, standard deviation, skewness and kurtosis.
(ii) Bivariate analysis: linear and logistic regression modelling.
(iii) Multiple regression analysis.
(iv) Compare the results of the above analyses for the two data sets.
**Reading from Dataset**
###Code
data = pd.read_csv("./Data/pima-diabetes.csv")
data.head(10)
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 768 entries, 0 to 767
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pregnancies 768 non-null int64
1 Glucose 768 non-null int64
2 BloodPressure 768 non-null int64
3 SkinThickness 768 non-null int64
4 Insulin 768 non-null int64
5 BMI 768 non-null float64
6 DiabetesPedigreeFunction 768 non-null float64
7 Age 768 non-null int64
8 Outcome 768 non-null int64
dtypes: float64(2), int64(7)
memory usage: 54.1 KB
###Markdown
**Checking the dataset**
###Code
print("No of columns with empty values = ", data.isnull().any().sum())
###Output
No of columns with empty values = 0
###Markdown
**Univariate Analysis**
###Code
data.describe()
###Output
_____no_output_____
###Markdown
In the above summary: **Frequency** = the `count` row (number of values in each column); **Mean** = the `mean` row (average of the values); **Median** = the `50%` (50th percentile) row; **Standard Deviation** = the `std` row.
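These statistics can also be collected explicitly in one table. The sketch below is illustrative only (it assumes `data` is the Pima DataFrame loaded above) and is not part of the original analysis:
```python
summary = pd.DataFrame({
    'mean': data.mean(),
    'median': data.median(),
    'variance': data.var(),
    'std': data.std(),
    'skewness': data.skew(),
    'kurtosis': data.kurtosis(),
})
print(summary)
```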
###Code
# Mode
data.mode()
# Variance
data.var()
# Skewness
data.skew()
# Kurtosis
data.kurtosis()
###Output
_____no_output_____
###Markdown
Correlation
###Code
corr = data.corr()
sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns)
###Output
_____no_output_____
###Markdown
Bivariate Analysis Linear Regression
###Code
from sklearn import preprocessing, svm
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Separating the data into independent and dependent variables
# Converting each dataframe into a numpy array
# since each dataframe contains only one column
X = np.array(data['Glucose']).reshape(-1, 1)
y = np.array(data['Insulin']).reshape(-1, 1)
# Dropping any rows with Nan values
data.dropna(inplace = True)
# Splitting the data into training and testing data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.30)
regr = LinearRegression()
regr.fit(X_train, y_train)
print(regr.score(X_test, y_test))
y_pred = regr.predict(X_test)
plt.scatter(X_test, y_test, color ='b')
plt.plot(X_test, y_pred, color ='g')
plt.show()
###Output
_____no_output_____
###Markdown
Logistic Regression: input values (X) are combined linearly using weights or coefficient values to predict an output value (y). The output value being modelled is a binary value (0 or 1) rather than a numeric value.
Linear regression equation: y = β0 + β1X1 + β2X2 + … + βnXn, where y is the dependent variable to be predicted, β0 is the y-intercept (the point where the line touches the y-axis), β1…βn are the slopes (negative or positive depending on the relationship between the dependent and independent variables), and X represents the independent variables used to predict the dependent value.
Sigmoid function: z = 1 / (1 + e^(-y)). Applying the sigmoid function to the linear regression equation gives the logistic regression equation: z = 1 / (1 + e^(-(β0 + β1X1 + β2X2 + … + βnXn))).
Weight updates: β0 = β0 + learning_rate * (y - z) * z * (1 - z) and βi = βi + learning_rate * (y - z) * z * (1 - z) * Xi.
Sklearn Logistic Regression
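The update rule above can be turned into a tiny from-scratch trainer. The sketch below is only illustrative (the notebook itself uses scikit-learn next); the function and variable names here are made up for the example:
```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logistic(X, y, learning_rate=0.01, epochs=100):
    # implements the stated updates: beta = beta + lr * (y - z) * z * (1 - z) * X
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta0, beta = 0.0, np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sigmoid(beta0 + xi.dot(beta))
            step = learning_rate * (yi - z) * z * (1.0 - z)
            beta0 += step
            beta += step * xi
    return beta0, beta

def predict_logistic(X, beta0, beta):
    z = sigmoid(np.asarray(X, dtype=float).dot(beta) + beta0)
    return (z >= 0.5).astype(int)
```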
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
import warnings
warnings.filterwarnings("ignore")
diabetes_df = data.values
X = diabetes_df[:,0:8] #Predictors
y = diabetes_df[:,8] #Target
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.3)
logistic_model = LogisticRegression(fit_intercept=True,C=1e15)
logistic_model.fit(X_train,y_train)
predicted = logistic_model.predict(X_test)
print("Confusion Matrix")
matrix = confusion_matrix(y_test,predicted)
print(matrix)
lr_accuracy = accuracy_score(y_test, predicted)
print('Logistic Regression Accuracy of Scikit Model: {:.2f}%'.format(lr_accuracy*100))
###Output
_____no_output_____
###Markdown
Now our first layer is created!
###Code
model.get_config() # here we can see that we have created a layer with 8 input features and 4 neurons
#To check how many layers we have
model.get_layer #here we can see we have only one layer so far
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 4) 36
=================================================================
Total params: 36
Trainable params: 36
Non-trainable params: 0
_________________________________________________________________
###Markdown
It shows we have one layer that gives 4 outputs and has 36 parameters (8 features * 4 neurons + 4 biases = 36 parameters).
###Code
# add one more layer
# we don't need to specify the input dimension as it will be detected automatically
model.add(Dense(units=4,
activation ="relu",
kernel_initializer = "zeros",
bias_initializer = "zeros"
))
###Output
_____no_output_____
###Markdown
Now we have created the 2nd layer
###Code
model.get_layer
model.get_config()
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 4) 36
_________________________________________________________________
dense_1 (Dense) (None, 4) 20
=================================================================
Total params: 56
Trainable params: 56
Non-trainable params: 0
_________________________________________________________________
###Markdown
dense_1 is the 2nd layer; it gives 4 outputs and has 20 parameters (4 neurons from the 1st layer * 4 neurons + 4 biases = 20)
###Code
# Last layer: it usually has a single neuron
model.add(Dense(units=1,
               activation="sigmoid", # sigmoid function -> gives an output between 0 and 1
kernel_initializer="zeros",
bias_initializer = "zeros"
))
###Output
_____no_output_____
###Markdown
The last layer is created, which will produce the output: the output layer
###Code
model.get_config()
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 4) 36
_________________________________________________________________
dense_1 (Dense) (None, 4) 20
_________________________________________________________________
dense_2 (Dense) (None, 1) 5
=================================================================
Total params: 61
Trainable params: 61
Non-trainable params: 0
_________________________________________________________________
###Markdown
Here dense_2 is our 3rd layer, which gives 1 output (either 0 or 1) and has 5 parameters. In total the model has 61 trainable parameters. Imagine big data or image-processing models, where millions of parameters have to be learned; that takes a lot of computing power.
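A quick way to verify these counts by hand (an illustrative sketch; parameters per Dense layer = inputs * units + one bias per unit):
```python
layer1 = 8 * 4 + 4   # dense:   36 parameters
layer2 = 4 * 4 + 4   # dense_1: 20 parameters
layer3 = 4 * 1 + 1   # dense_2:  5 parameters
print(layer1, layer2, layer3, layer1 + layer2 + layer3)  # 36 20 5 61
```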
###Code
#After creating the model we need to compile it to specify our loss function and which optimiser we are going to use
from keras.optimizers import Adam
model.compile(optimizer = Adam(), loss="binary_crossentropy")
#binary_crossentropy is a loss function used in binary classification (targets are 0 and 1).
model.fit(X,y) #here we can see the loss after one pass over the data
#if we are not happy with the weights and biases we can go back, update them and train the model again
# this is called backpropagation
# epochs sets how many times we repeat this process over the training data
model.fit(X,y, epochs=100)
###Output
Epoch 1/100
24/24 [==============================] - 0s 610us/step - loss: 0.6886
Epoch 2/100
24/24 [==============================] - 0s 699us/step - loss: 0.6857
Epoch 3/100
24/24 [==============================] - 0s 665us/step - loss: 0.6831
Epoch 4/100
24/24 [==============================] - 0s 706us/step - loss: 0.6804
Epoch 5/100
24/24 [==============================] - 0s 708us/step - loss: 0.6781
Epoch 6/100
24/24 [==============================] - 0s 744us/step - loss: 0.6758
Epoch 7/100
24/24 [==============================] - 0s 737us/step - loss: 0.6737
Epoch 8/100
24/24 [==============================] - 0s 666us/step - loss: 0.6719
Epoch 9/100
24/24 [==============================] - 0s 678us/step - loss: 0.6698
Epoch 10/100
24/24 [==============================] - 0s 701us/step - loss: 0.6682
Epoch 11/100
24/24 [==============================] - 0s 655us/step - loss: 0.6666
Epoch 12/100
24/24 [==============================] - 0s 658us/step - loss: 0.6651
Epoch 13/100
24/24 [==============================] - 0s 676us/step - loss: 0.6636
Epoch 14/100
24/24 [==============================] - 0s 744us/step - loss: 0.6623
Epoch 15/100
24/24 [==============================] - ETA: 0s - loss: 0.668 - 0s 709us/step - loss: 0.6611
Epoch 16/100
24/24 [==============================] - 0s 706us/step - loss: 0.6599
Epoch 17/100
24/24 [==============================] - 0s 686us/step - loss: 0.6588
Epoch 18/100
24/24 [==============================] - 0s 672us/step - loss: 0.6580
Epoch 19/100
24/24 [==============================] - 0s 641us/step - loss: 0.6569
Epoch 20/100
24/24 [==============================] - 0s 683us/step - loss: 0.6561
Epoch 21/100
24/24 [==============================] - 0s 667us/step - loss: 0.6553
Epoch 22/100
24/24 [==============================] - 0s 784us/step - loss: 0.6547
Epoch 23/100
24/24 [==============================] - 0s 747us/step - loss: 0.6539
Epoch 24/100
24/24 [==============================] - 0s 768us/step - loss: 0.6534
Epoch 25/100
24/24 [==============================] - 0s 1ms/step - loss: 0.6527
Epoch 26/100
24/24 [==============================] - 0s 861us/step - loss: 0.6523
Epoch 27/100
24/24 [==============================] - 0s 664us/step - loss: 0.6517
Epoch 28/100
24/24 [==============================] - 0s 742us/step - loss: 0.6513
Epoch 29/100
24/24 [==============================] - 0s 707us/step - loss: 0.6509
Epoch 30/100
24/24 [==============================] - 0s 726us/step - loss: 0.6505
Epoch 31/100
24/24 [==============================] - 0s 714us/step - loss: 0.6502
Epoch 32/100
24/24 [==============================] - 0s 694us/step - loss: 0.6499
Epoch 33/100
24/24 [==============================] - 0s 693us/step - loss: 0.6496
Epoch 34/100
24/24 [==============================] - 0s 658us/step - loss: 0.6493
Epoch 35/100
24/24 [==============================] - 0s 667us/step - loss: 0.6491
Epoch 36/100
24/24 [==============================] - 0s 747us/step - loss: 0.6489
Epoch 37/100
24/24 [==============================] - 0s 703us/step - loss: 0.6487
Epoch 38/100
24/24 [==============================] - 0s 754us/step - loss: 0.6485
Epoch 39/100
24/24 [==============================] - 0s 665us/step - loss: 0.6483
Epoch 40/100
24/24 [==============================] - 0s 657us/step - loss: 0.6482
Epoch 41/100
24/24 [==============================] - 0s 666us/step - loss: 0.6481
Epoch 42/100
24/24 [==============================] - 0s 687us/step - loss: 0.6479
Epoch 43/100
24/24 [==============================] - 0s 687us/step - loss: 0.6478
Epoch 44/100
24/24 [==============================] - 0s 675us/step - loss: 0.6477
Epoch 45/100
24/24 [==============================] - 0s 695us/step - loss: 0.6477
Epoch 46/100
24/24 [==============================] - 0s 748us/step - loss: 0.6476
Epoch 47/100
24/24 [==============================] - 0s 698us/step - loss: 0.6475
Epoch 48/100
24/24 [==============================] - 0s 623us/step - loss: 0.6474
Epoch 49/100
24/24 [==============================] - 0s 656us/step - loss: 0.6474
Epoch 50/100
24/24 [==============================] - 0s 653us/step - loss: 0.6473
Epoch 51/100
24/24 [==============================] - 0s 757us/step - loss: 0.6472
Epoch 52/100
24/24 [==============================] - 0s 568us/step - loss: 0.6472
Epoch 53/100
24/24 [==============================] - 0s 726us/step - loss: 0.6472
Epoch 54/100
24/24 [==============================] - 0s 797us/step - loss: 0.6471
Epoch 55/100
24/24 [==============================] - 0s 720us/step - loss: 0.6471
Epoch 56/100
24/24 [==============================] - 0s 625us/step - loss: 0.6470
Epoch 57/100
24/24 [==============================] - 0s 584us/step - loss: 0.6470
Epoch 58/100
24/24 [==============================] - 0s 604us/step - loss: 0.6470
Epoch 59/100
24/24 [==============================] - 0s 592us/step - loss: 0.6470
Epoch 60/100
24/24 [==============================] - 0s 596us/step - loss: 0.6470
Epoch 61/100
24/24 [==============================] - 0s 609us/step - loss: 0.6470
Epoch 62/100
24/24 [==============================] - 0s 491us/step - loss: 0.6469
Epoch 63/100
24/24 [==============================] - 0s 665us/step - loss: 0.6469
Epoch 64/100
24/24 [==============================] - 0s 640us/step - loss: 0.6469
Epoch 65/100
24/24 [==============================] - 0s 611us/step - loss: 0.6469
Epoch 66/100
24/24 [==============================] - 0s 535us/step - loss: 0.6469
Epoch 67/100
24/24 [==============================] - 0s 539us/step - loss: 0.6469
Epoch 68/100
24/24 [==============================] - 0s 563us/step - loss: 0.6469
Epoch 69/100
24/24 [==============================] - 0s 581us/step - loss: 0.6469
Epoch 70/100
24/24 [==============================] - 0s 548us/step - loss: 0.6469
Epoch 71/100
24/24 [==============================] - 0s 579us/step - loss: 0.6469
Epoch 72/100
24/24 [==============================] - 0s 617us/step - loss: 0.6469
Epoch 73/100
24/24 [==============================] - 0s 618us/step - loss: 0.6468
Epoch 74/100
24/24 [==============================] - 0s 614us/step - loss: 0.6468
Epoch 75/100
24/24 [==============================] - 0s 631us/step - loss: 0.6469
Epoch 76/100
24/24 [==============================] - 0s 638us/step - loss: 0.6468
Epoch 77/100
24/24 [==============================] - 0s 622us/step - loss: 0.6468
Epoch 78/100
24/24 [==============================] - 0s 708us/step - loss: 0.6468
Epoch 79/100
24/24 [==============================] - 0s 628us/step - loss: 0.6468
Epoch 80/100
24/24 [==============================] - 0s 671us/step - loss: 0.6468
Epoch 81/100
24/24 [==============================] - 0s 703us/step - loss: 0.6468
Epoch 82/100
24/24 [==============================] - 0s 790us/step - loss: 0.6468
Epoch 83/100
24/24 [==============================] - 0s 831us/step - loss: 0.6468
Epoch 84/100
24/24 [==============================] - 0s 791us/step - loss: 0.6468
Epoch 85/100
24/24 [==============================] - 0s 708us/step - loss: 0.6468
Epoch 86/100
24/24 [==============================] - 0s 689us/step - loss: 0.6468
Epoch 87/100
24/24 [==============================] - 0s 693us/step - loss: 0.6468
Epoch 88/100
24/24 [==============================] - 0s 696us/step - loss: 0.6468
Epoch 89/100
24/24 [==============================] - 0s 665us/step - loss: 0.6468
Epoch 90/100
24/24 [==============================] - 0s 657us/step - loss: 0.6468
Epoch 91/100
24/24 [==============================] - 0s 605us/step - loss: 0.6468
Epoch 92/100
24/24 [==============================] - 0s 571us/step - loss: 0.6468
Epoch 93/100
24/24 [==============================] - 0s 632us/step - loss: 0.6468
Epoch 94/100
24/24 [==============================] - 0s 563us/step - loss: 0.6468
Epoch 95/100
24/24 [==============================] - 0s 540us/step - loss: 0.6468
Epoch 96/100
24/24 [==============================] - 0s 572us/step - loss: 0.6468
Epoch 97/100
24/24 [==============================] - 0s 606us/step - loss: 0.6468
Epoch 98/100
24/24 [==============================] - 0s 645us/step - loss: 0.6469
Epoch 99/100
###Markdown
Here we can see that training for 100 epochs repeatedly updates the weights, and the loss decreases over the epochs
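To see the decrease visually, the History object returned by fit can be plotted. This is a sketch (an assumption, not part of the original cell); it re-runs fit so the return value can be captured in a variable:
```python
import matplotlib.pyplot as plt

history = model.fit(X, y, epochs=100, verbose=0)  # continues training from the current weights
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('binary cross-entropy loss')
plt.title('Training loss per epoch')
plt.show()
```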
###Code
model.save("dia_model.h5")
###Output
_____no_output_____
###Markdown
A. Load data, preprocess, and calculate accuracy
###Code
X, y, le = read_data(dataset)
# Append classifier to preprocessing pipeline.
# Now we have a full prediction pipeline.
categorical_features=[]
preprocessor = get_preprocessor(X, categorical_features)
rf = RandomForestClassifier(n_jobs=-1, random_state=seed)
clf = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', rf)])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=seed)
clf.fit(X_train, y_train)
print("model score: %.3f" % clf.score(X_test, y_test))
###Output
model score: 0.812
###Markdown
B. Plot confidence scores for X_train
###Code
y_prob = rf.predict_proba(X_train)
y_conf_train = y_prob[:, 0] # confidence scores
plot_confidence_levels(y_conf_train, "Confidence scores for X_train")
###Output
_____no_output_____
###Markdown
C. Train CTGAN and plot the training loss
###Code
z_features = get_noise_features(X_train, categorical_features)
z_rows = int(0.25 * X_train.shape[0])
z = gen_random_noise(shape=(z_rows, z_features))
batch_size = 50
epochs = 50
confidence_level = 0.9
gen_lr = 2e-5
loss = 'log'
rf_ctgan = CTGANSynthesizer(batch_size=batch_size,
blackbox_model=rf,
preprocessing_pipeline=preprocessor,
bb_loss=loss
)
hist = rf_ctgan.fit(train_data=z,
epochs=epochs,
confidence_level=confidence_level,
gen_lr=gen_lr,
verbose=False
)
# rf_ctgan.save(f"{MODELS_PATH}/{dataset}_ctgan_c_{confidence_level}.pkl")
plot_losses(hist, title=f'{dataset} loss, c = {confidence_level}')
print()
###Output
_____no_output_____
###Markdown
D. Plot confidence scores for 100 generated samples
###Code
# check confidence for the generated samples
samples = 100
gen_data = rf_ctgan.sample(samples)
y_prob = rf.predict_proba(gen_data)
y_conf_gen = y_prob[:, 0] # confidence scores
plot_confidence_levels(y_conf_gen, f"Scores of generated {samples} samples (c={confidence_level})")
###Output
_____no_output_____
###Markdown
E. Find generated samples above the confidence score
###Code
# find samples s such that s.confidence > c
indices = np.argwhere(y_conf_gen>confidence_level).squeeze()
print(f"indices:\n\t{indices}\nconfidence levels:\n\t{y_conf_gen[indices]}")
gen_indices = indices if indices.shape != () else [indices] # avoid zero-dimensional arrays
# inverse the generated data
scaler = get_scaler(preprocessor)
gen_data_above_c_before = gen_data.iloc[gen_indices]
gen_data_above_c = scaler.inverse_transform(gen_data_above_c_before)
gen_data_above_c = pd.DataFrame(gen_data_above_c).set_index(gen_data_above_c_before.index)
###Output
_____no_output_____
###Markdown
F. Print most similar examples (X_similiar)
###Code
similarities = calc_similarities(gen_data_above_c, X_train)
X_similiar_indices = [el[0] for el in similarities.values()]
print(f"gen_sample_above_c -> (most_similiar_sample_x_train, cosine_score)\n\n{similarities}")
###Output
gen_sample_above_c -> (most_similiar_sample_x_train, cosine_score)
{}
###Markdown
G. Print confidence scores for X_similiar
###Code
# extract X_similiar
X_train_pd = pd.DataFrame(X_train)
X_similiar = X_train_pd.iloc[X_similiar_indices]
# print confidence scores
# y_prob_similar = rf.predict_proba(X_similiar)
# y_conf_similar = y_prob_similar[:, 0]
print(f"confidence scores for similar samples:\n{y_conf_train[X_similiar_indices]}")
###Output
confidence scores for similar samples:
[]
###Markdown
Plot as table
###Code
data = []
for gen_idx, value in similarities.items():
similar_idx = value[0]
similarity = value[1]
gen_conf = y_conf_gen[gen_idx]
similar_conf = y_conf_train[similar_idx]
data.append([gen_idx, gen_conf, similar_idx, similar_conf, similarity])
columns = ['gen_idx', 'score', 'sim_idx', 'score', 'similarity']
results = pd.DataFrame(data, columns=columns)
results
plot_similarities_dist(gen_data_above_c, X_train)
###Output
_____no_output_____
###Markdown
Saving as a Pickle file
###Code
with open('diabetes_model.pkl', 'wb') as files:
pickle.dump(regr, files)
###Output
_____no_output_____
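A minimal sketch of loading the pickled model back (not part of the original notebook; `regr` above was the Glucose-to-Insulin linear regression, and 120.0 is an arbitrary glucose value used only for illustration):
```python
import pickle

with open('diabetes_model.pkl', 'rb') as f:
    loaded_model = pickle.load(f)

print(loaded_model.predict([[120.0]]))  # predicted insulin for a glucose value of 120
```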
###Markdown
Dash plotly
###Code
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import plotly.graph_objs as go
import base64
import datetime
import io
import dash
from dash.dependencies import Input, Output, State
import dash_core_components as dcc
import dash_html_components as html
import dash_table
import pandas as pd
training_data = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_data.pkl")
training_labels = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_labels.pkl")
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
#app.layout = html.Div([
app.layout = html.Div(children=[
html.H1(children='Diabetes Prediction '),
html.Div(children='''
Dash: A web application framework for your data.
'''),
dcc.Upload(
id='upload-data',
children=html.Div([
'Drag and Drop or ',
html.A('Select Files')
]),
style={
'width': '20%',
'height': '60px',
'lineHeight': '60px',
'borderWidth': '1px',
'borderStyle': 'dashed',
'borderRadius': '5px',
'textAlign': 'center',
'margin': '10px'
},
# Allow multiple files to be uploaded
multiple=True
),
html.Div(id='output-data-upload'),
])
def parse_contents(contents, filename, date):
content_type, content_string = contents.split(',')
decoded = base64.b64decode(content_string)
try:
if 'csv' in filename:
# Assume that the user uploaded a CSV file
df = pd.read_csv(
io.StringIO(decoded.decode('utf-8')))
elif 'xls' in filename:
# Assume that the user uploaded an excel file
df = pd.read_excel(io.BytesIO(decoded))
except Exception as e:
print(e)
return html.Div([
'There was an error processing this file.'
])
return html.Div([
html.H5(filename),
html.H6(datetime.datetime.fromtimestamp(date)),
dash_table.DataTable(
data=df.to_dict('records'),
columns=[{'name': i, 'id': i} for i in df.columns]
),
html.Hr(), # horizontal line
# For debugging, display the raw contents provided by the web browser
html.Div('Raw Content'),
html.Pre(contents[0:200] + '...', style={
'whiteSpace': 'pre-wrap',
'wordBreak': 'break-all'
})
])
@app.callback(Output('output-data-upload', 'children'),
Input('upload-data', 'contents'),
State('upload-data', 'filename'),
State('upload-data', 'last_modified'))
def update_output(list_of_contents, list_of_names, list_of_dates):
if list_of_contents is not None:
children = [
parse_contents(c, n, d) for c, n, d in
zip(list_of_contents, list_of_names, list_of_dates)]
return children
if __name__ == '__main__':
    with open('diabetes_model.pkl', 'rb') as f:
        my_list = pickle.load(f)
app.run_server()
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import numpy as np
import plotly.graph_objects as go
import plotly.express as px
from sklearn.model_selection import train_test_split
from sklearn import linear_model, tree, neighbors
df = px.data.tips()
X = df.total_bill.values[:, None]
X_train, X_test, y_train, y_test = train_test_split(
X, df.tip, random_state=42)
models = {'Regression': linear_model.LinearRegression,
'Decision Tree': tree.DecisionTreeRegressor,
'k-NN': neighbors.KNeighborsRegressor}
app = dash.Dash(__name__)
app.layout = html.Div([
html.P("Select Model:"),
dcc.Dropdown(
id='model-name',
options=[{'label': x, 'value': x}
for x in models],
value='Regression',
clearable=False
),
dcc.Graph(id="graph"),
])
@app.callback(
Output("graph", "figure"),
[Input('model-name', "value")])
def train_and_display(name):
model = models[name]()
model.fit(X_train, y_train)
x_range = np.linspace(X.min(), X.max(), 100)
y_range = model.predict(x_range.reshape(-1, 1))
fig = go.Figure([
go.Scatter(x=X_train.squeeze(), y=y_train,
name='train', mode='markers'),
go.Scatter(x=X_test.squeeze(), y=y_test,
name='test', mode='markers'),
go.Scatter(x=x_range, y=y_range,
name='prediction')
])
return fig
if __name__ == '__main__':
    with open('diabetes_model.pkl', 'rb') as f:
        my_list = pickle.load(f)
app.run_server()
app = dash.Dash()
training_data = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_data.pkl")
training_labels = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_labels.pkl")
ALLOWED_TYPES = (
"text", "number", "password", "email", "search",
"tel", "url", "range", "hidden",
)
app.layout = html.Div(
[
dcc.Input(
id="input_{}".format(_),
type=_,
placeholder="input type {}".format(_),
)
for _ in ALLOWED_TYPES
]
+ [html.Div(id="out-all-types")]
)
@app.callback(
Output("out-all-types", "children"),
[Input("input_{}".format(_), "value") for _ in ALLOWED_TYPES],
)
def cb_render(*vals):
return " | ".join((str(val) for val in vals if val))
app = dash.Dash()
training_data = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_data.pkl")
training_labels = joblib.load("C:/Users/Prashant/Documents/Bizmetric/Iris RShiny App/training_labels.pkl")
app.layout = html.Div(children=[
html.H1(children='Simple Logistic Regression', style={'textAlign': 'center'}),
html.Div(children=[
html.Label('Enter glucose: '),
dcc.Input(id='glucose', placeholder='glucose', type='text'),
html.Div(id='result')
], style={'textAlign': 'center'}),
])
@app.callback(
Output(component_id='result', component_property='children'),
[Input(component_id='glucose', component_property='value')])
def update_df(glucose):
    if glucose is not None and glucose != '':
        try:
            # the model expects a 2-D array of samples
            pred = model.predict([[float(glucose)]])[0]
            return 'With {} glucose the predicted diabetes outcome is {}'.format(glucose, pred)
        except ValueError:
            return 'Unable to make a prediction for this input'
if __name__ == '__main__':
    with open('diabetes_model.pkl', 'rb') as f:
        model = pickle.load(f)  # load the model saved earlier so the callback above can use it
    app.run_server()
###Output
Dash is running on http://127.0.0.1:8050/
* Serving Flask app "__main__" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
###Markdown
data collection and analysis:
###Code
#PIMA diabetes dataset
#loading the diabetes dataset to a pandas dataframe
diabetes_dataset=pd.read_csv("/content/diabetes.csv")
#printing 5 rows of dataset
diabetes_dataset.head()
#no.of.rows and columns in the dataset
diabetes_dataset.shape
#getting the statestical measure of the data.
diabetes_dataset.describe()
diabetes_dataset["Outcome"].value_counts()
#separating the data and labels.
x=diabetes_dataset.drop(columns="Outcome",axis=1)
y=diabetes_dataset["Outcome"]
print(x)
print(y)
#datastandardization
Scaler=StandardScaler()
Scaler.fit(x)
#StandardScaler(copy=True, with_mean=True, with_std=True)
standardized_data=Scaler.transform(x)
print(standardized_data)
x=standardized_data
y=diabetes_dataset["Outcome"]
print(x)
print(y)
#train test split
x_train,x_test,y_train,y_test=train_test_split(x,y, test_size= 0.2,stratify=y,random_state=2)
print(x.shape,x_train.shape,x_test.shape)
#train the model:
classifier=svm.SVC(kernel="linear")
#training the svm classifier
classifier.fit(x_train,y_train)
###Output
_____no_output_____
###Markdown
model evaluation
###Code
#accuracy score
#accuracy score on training data
x_train_prediction=classifier.predict(x_train)
training_data_accuracy=accuracy_score(x_train_prediction,y_train)
print("Accuracy score of the training data:",training_data_accuracy)
#accuracy score on test data
x_test_prediction=classifier.predict(x_test)
test_data_accuracy=accuracy_score(x_test_prediction,y_test)
print("Accuracy score of the test data:",test_data_accuracy)
###Output
Accuracy score of the test data: 0.7727272727272727
###Markdown
making a predictive system
###Code
input_data = (5,166,72,19,175,25.8,0.587,51)
# changing the input_data to numpy array
input_data_as_numpy_array = np.asarray(input_data)
# reshape the array as we are predicting for one instance
input_data_reshaped = input_data_as_numpy_array.reshape(1,-1)
# standardize the input data
std_data=Scaler.transform(input_data_reshaped)
print(std_data)
prediction = classifier.predict(input_data_reshaped)
print(prediction)
if(prediction[0]==0):
print("The person is not diabetic")
else:
print("The person is diabetic")
###Output
The person is diabetic
###Markdown
Predicting Diabetes
###Code
from path import Path
import pandas as pd
data = Path('./Resources/diabetes.csv')
df = pd.read_csv(data)
df.head()
###Output
_____no_output_____
###Markdown
Separate the Features (X) from the Target (y)
###Code
y = df["Outcome"]
X = df.drop(columns="Outcome")
###Output
_____no_output_____
###Markdown
Split our data into training and testing
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,
y,
random_state=1,
stratify=y)
X_train.shape
###Output
_____no_output_____
###Markdown
Create a Logistic Regression Model
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(solver='lbfgs',
max_iter=200,
random_state=1)
###Output
_____no_output_____
###Markdown
Fit (train) or model using the training data
###Code
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Make predictions
###Code
y_pred = classifier.predict(X_test)
results = pd.DataFrame({"Prediction": y_pred, "Actual": y_test}).reset_index(drop=True)
results.head(20)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred))
#Accuracy = (True Positives + True Negatives)/ all obervations
from sklearn.metrics import confusion_matrix, classification_report
matrix = confusion_matrix(y_test, y_pred)
print(matrix)
# Reading the confusion matrix (rows = actual class, columns = predicted class):
# 113 non-diabetic patients correctly predicted as non-diabetic (true negatives)
# 12 non-diabetic patients incorrectly predicted as diabetic (false positives)
# 31 diabetic patients incorrectly predicted as non-diabetic (false negatives)
# 36 diabetic patients correctly predicted as diabetic (true positives)
report = classification_report(y_test, y_pred)
print(report)
###Output
precision recall f1-score support
0 0.78 0.90 0.84 125
1 0.75 0.54 0.63 67
accuracy 0.78 192
macro avg 0.77 0.72 0.73 192
weighted avg 0.77 0.78 0.77 192
###Markdown
Glucose, Age and BMI are the features most correlated with the 'Outcome'. BloodPressure and SkinThickness have only a tiny correlation with the outcome. Among the features themselves, Age and Pregnancies are the most correlated pair, followed by Insulin with Glucose (biology!), and finally SkinThickness with Insulin, which is odd.
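A quick numeric check of these observations (an illustrative sketch, assuming `df` is the DataFrame used above):
```python
# correlation of every feature with the Outcome column, strongest first
print(df.corr()['Outcome'].drop('Outcome').sort_values(ascending=False))
```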
###Code
sns.set()
cols = ['Pregnancies','Glucose','BloodPressure','Insulin','BMI','DiabetesPedigreeFunction','Age','Outcome']
sns.pairplot(df[cols], size = 2.5)
plt.show();
x = df.iloc[:,:-1]
y = df.iloc[:,-1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.33, random_state=42)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, make_scorer,f1_score
###Output
_____no_output_____
###Markdown
LogisticRegression
###Code
lr = LogisticRegression()
lr.fit(X_train ,y_train )
lr.score(X_train ,y_train)
y_pred = lr.predict([[8,183,64,0,0,23.3,0.672,32]])
y_pred[0]
###Output
_____no_output_____
###Markdown
random forest classifier
###Code
rf = RandomForestClassifier( n_estimators=5)
rf.fit(X_train ,y_train )
rf.score(X_train ,y_train)
y_pred = rf.predict([[8,183,64,0,0,23.3,0.672,32]])
y_pred[0]
###Output
_____no_output_____
###Markdown
KNeighborsClassifier
###Code
knn=KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train ,y_train )
knn.score(X_train ,y_train)
y_pred = knn.predict([[8,183,64,0,0,23.3,0.672,32]])
y_pred[0]
###Output
_____no_output_____
###Markdown
DecisionTreeClassifier
###Code
dt=DecisionTreeClassifier()
dt.fit(X_train ,y_train )
dt.score(X_train ,y_train)
y_pred = dt.predict(X_test)
from sklearn.metrics import classification_report
print (classification_report(y_test, y_pred))
from sklearn.metrics import accuracy_score, make_scorer,f1_score
# y_test holds the true labels for the held-out set
f1 = f1_score(y_test, y_pred, average='weighted')
accuracy = accuracy_score(y_test, y_pred)
###Output
_____no_output_____
###Markdown
###Code
import numpy as np
import pandas as pd
diabetes_data=pd.read_csv("/content/diabetes.csv")
diabetes_data.head()
diabetes_data.groupby('Outcome').mean()
X=diabetes_data.drop(columns='Outcome',axis=1)
Y=diabetes_data['Outcome']
from sklearn.preprocessing import StandardScaler
scale=StandardScaler()
new_x=scale.fit_transform(X)
new_x=pd.DataFrame(new_x)
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test=train_test_split(new_x,Y,random_state=0)
from sklearn.linear_model import LogisticRegression
lreg=LogisticRegression()
lreg.fit(X_train,Y_train)
from sklearn.metrics import accuracy_score
lreg.score(X_train,Y_train)
input_data=[[6 ,148, 72 ,35 ,0, 33.6, 0.627, 50] ]
Y_pred=lreg.predict(input_data)
Y_pred
###Output
_____no_output_____
###Markdown
which is the correct output.
###Code
###Output
_____no_output_____
###Markdown
Understanding Business
Diabetes is a disease that occurs when your blood glucose, also called blood sugar, is too high. Blood glucose is your main source of energy and comes from the food you eat. Insulin, a hormone made by the pancreas, helps glucose from food get into your cells to be used for energy. Sometimes your body doesn’t make enough—or any—insulin or doesn’t use insulin well. Glucose then stays in your blood and doesn’t reach your cells.
We are trying to answer the below questions from the data:
1. Which age group is vulnerable to diabetes?
2. Medical issues in males
3. Medical issues in females
Data Gathering
###Code
# Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
#get data
df = pd.read_csv("./diabetes_data.csv")
#Know your data
df.head()
###Output
_____no_output_____
###Markdown
Know the Data
###Code
#size and columns names
print("Rows: ",df.shape[0], "Columns: ",df.shape[1],"\n\n")
print("Column names",list(df.columns))
###Output
Rows: 520 Columns: 17
Column names ['age', 'gender', 'polyuria', 'polydipsia', 'sudden_weight_loss', 'weakness', 'polyphagia', 'genital_thrush', 'visual_blurring', 'itching', 'irritability', 'delayed_healing', 'partial_paresis', 'muscle_stiffness', 'alopecia', 'obesity', 'class']
###Markdown
Understanding Data
We have data on 520 people who have diabetes and people who do not have diabetes but have symptoms. We know the age and gender of the patients along with the medical issues they are facing. Below is the list of medical issues:
1. polyuria: Whether the patient experienced excessive urination or not.
2. polydipsia: Whether the patient experienced excessive thirst/excess drinking or not.
3. sudden weight loss: Whether the patient had an episode of sudden weight loss or not.
4. weakness: Whether the patient had an episode of feeling weak.
5. polyphagia: Whether the patient had an episode of excessive/extreme hunger or not.
6. genital thrush: Whether the patient had a yeast infection or not.
7. visual blurring: Whether the patient had an episode of blurred vision.
8. itching: Whether the patient had an episode of itching.
9. irritability: Whether the patient had an episode of irritability.
10. delayed healing: Whether the patient noticed delayed healing when wounded.
11. partial paresis: Whether the patient had an episode of weakening of a muscle or group of muscles or not.
12. muscle stiffness: Whether the patient had an episode of muscle stiffness.
13. alopecia: Whether the patient experienced hair loss or not.
14. obesity: Whether the patient can be considered obese or not using their body mass index. (The class column is the target label: whether the patient has diabetes.)
Data Preparation / Cleaning
age: We have the ages of people as integer values and it is difficult to analyse the data on raw ages, so we divide the ages into the age groups 0-10, 11-20, 21-30, 31-40, 41-50, 51-60, 61-70 and 70 and above.
sort: After dividing the ages into age groups we sort the data on age.
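A quick look at the class balance and the gender split before cleaning (an illustrative sketch, assuming `df` as loaded above):
```python
print(df['class'].value_counts())   # diabetic vs non-diabetic
print(df['gender'].value_counts())  # males vs females
```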
###Code
# dividing data into age groups
df["age_group"] = pd.cut(df['age'], bins=[0,10,20,30,40,50,60,70,np.inf], labels=["0-10","10-20","20-30","30-40","41-50","51-60","61-70",">70"])
df
###Output
_____no_output_____
###Markdown
Data Preparation
To understand the data better we now do the following (the code below implements these steps):
1. Remove the people with no diabetes
2. Separate the male data and female data
3. Calculate the count of people after grouping by age_group
###Code
# Seperating male and female data with diabetes and sorting with age
male_df = df[(df["gender"] == "Male") & (df["class"] == 1)].sort_values("age")
female_df = df[(df["gender"] == "Female") & (df["class"] == 1)].sort_values("age")
# Age group that is being most affected
most_affected_age = df.groupby(["gender","age_group"]).size().reset_index(name='counts')
most_affected_age
###Output
_____no_output_____
###Markdown
Categorizing data in age groups
###Code
female_df = female_df.groupby("age_group").agg("mean").round(2)
male_df = male_df.groupby("age_group").agg("mean").round(2)
female_df
male_df
###Output
_____no_output_____
###Markdown
Age male/female histogram
###Code
# create data
x = np.arange(8)
y1 = list(most_affected_age[most_affected_age["gender"]=="Male"]["counts"])
y2 = list(most_affected_age[most_affected_age["gender"]=="Female"]["counts"])
width = 0.2
plt.bar(x-0.2, y1, width, color='cyan')
plt.bar(x, y2, width, color='orange')
plt.xticks(x, ['0-10', '10-20', '20-30', '30-40', '40-50',"50-60","60-70",">70"])
plt.xlabel("Age Group")
plt.ylabel("People with diabetes")
plt.legend(["Male", "Female"])
plt.show()
###Output
_____no_output_____
###Markdown
Modeling
1. Which age group is most vulnerable?
Looking at the above graph it is clear that people in the age groups from 30 to 60 are more prone to diabetes. This study was carried out on the data of 520 people who have diabetes and people who do not have diabetes but have symptoms of diabetes. From this we can conclude that males are more prone to diabetes than females. Most females get diagnosed with diabetes in their 30s, which is not the case for males; males get diagnosed with diabetes mostly in their 40s.
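A quick numeric check of which age group has the most cases per gender (an illustrative sketch based on the `most_affected_age` table built above):
```python
# row with the largest count for each gender
print(most_affected_age.loc[most_affected_age.groupby("gender")["counts"].idxmax()])
```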
###Code
# Analyse Female data with respect to age
corr_matrix = female_df.corr()
corr_matrix["age"].sort_values(ascending=False)
# Analyse male data with respect to age
corr_matrix = male_df.corr()
corr_matrix["age"].sort_values(ascending=False)
male_df
male_df.drop(['gender', 'age_group'], axis=1, inplace=True)
female_df.drop(['gender', 'age_group'], axis=1, inplace=True)
# Normalize data
normalized_male_df=(male_df-male_df.min())/(male_df.max()-male_df.min())
normalized_female_df=(female_df-female_df.min())/(female_df.max()-female_df.min())
male_df
# Most seen issues in males vs female
normalized_male_df[["age","weakness","irritability","partial_paresis","muscle_stiffness"]].plot(figsize=(20, 10),legend=True, title="Issues in males")
normalized_female_df[["age","weakness","irritability","partial_paresis","muscle_stiffness"]].plot(figsize=(20, 10),title="Issue not related to age in female")
###Output
_____no_output_____
###Markdown
2. Which issues are common in males with respect to their age?
Going deeper into the data we found that a few medical issues are directly related to the age of males. These medical issues are:
1. Weakness
2. Irritability
3. Partial Paresis
4. Muscle Stiffness
As age increases, more people develop these medical issues. However, we saw that these medical issues are not correlated with age in females. Below is the graph supporting this statement.
###Code
#most seen issues in females vs male
normalized_female_df[["age","visual_blurring","itching","genital_thrush"]].plot(figsize=(20, 10),legend=True)
normalized_male_df[["age","visual_blurring","itching","genital_thrush"]].plot(figsize=(20, 10),legend=True)
###Output
_____no_output_____
###Markdown
Step 0: Import libraries and Dataset
###Code
# Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
# Importing dataset
dataset = pd.read_csv(r'diabetes.csv')
###Output
_____no_output_____
###Markdown
Step 1: Descriptive Statistics
###Code
# Preview data
dataset.head()
# Dataset dimensions - (rows, columns)
dataset.shape
# Features data-type
dataset.info()
# Statistical summary
dataset.describe().T
# Count of null values
dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
Step 2: Data Visualization
###Code
# Outcome countplot
sns.countplot(x = 'Outcome',data = dataset)
# Histogram of each feature
import itertools
col = dataset.columns[:8]
plt.subplots(figsize = (20, 15))
length = len(col)
for i, j in itertools.zip_longest(col, range(length)):
    plt.subplot(length // 2, 3, j + 1)
plt.subplots_adjust(wspace = 0.1,hspace = 0.5)
dataset[i].hist(bins = 20)
plt.title(i)
plt.show()
# Pairplot
sns.pairplot(data = dataset, hue = 'Outcome')
plt.show()
# Heatmap
sns.heatmap(dataset.corr(), annot = True)
plt.show()
###Output
_____no_output_____
###Markdown
Step 3: Data Preprocessing
###Code
dataset_new = dataset
# Replacing zero values with NaN
dataset_new[["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]] = dataset_new[["Glucose", "BloodPressure", "SkinThickness", "Insulin", "BMI"]].replace(0, np.NaN)
# Count of NaN
dataset_new.isnull().sum()
# Replacing NaN with mean values
dataset_new["Glucose"].fillna(dataset_new["Glucose"].mean(), inplace = True)
dataset_new["BloodPressure"].fillna(dataset_new["BloodPressure"].mean(), inplace = True)
dataset_new["SkinThickness"].fillna(dataset_new["SkinThickness"].mean(), inplace = True)
dataset_new["Insulin"].fillna(dataset_new["Insulin"].mean(), inplace = True)
dataset_new["BMI"].fillna(dataset_new["BMI"].mean(), inplace = True)
# Statistical summary
dataset_new.describe().T
# Feature scaling using MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
sc = MinMaxScaler(feature_range = (0, 1))
dataset_scaled = sc.fit_transform(dataset_new)
dataset_scaled = pd.DataFrame(dataset_scaled)
# Selecting features - [Glucose, Insulin, BMI, Age]
X = dataset_scaled.iloc[:, [1, 4, 5, 7]].values
Y = dataset_scaled.iloc[:, 8].values
# Splitting X and Y
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.20, random_state = 42, stratify = dataset_new['Outcome'] )
# Checking dimensions
print("X_train shape:", X_train.shape)
print("X_test shape:", X_test.shape)
print("Y_train shape:", Y_train.shape)
print("Y_test shape:", Y_test.shape)
###Output
X_train shape: (614, 4)
X_test shape: (154, 4)
Y_train shape: (614,)
Y_test shape: (154,)
###Markdown
Step 4: Data Modelling
###Code
# Logistic Regression Algorithm
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(random_state = 42)
logreg.fit(X_train, Y_train)
# K nearest neighbors Algorithm
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors = 4, metric = 'minkowski', p = 1)
knn.fit(X_train, Y_train)
# Support Vector Classifier Algorithm
from sklearn.svm import SVC
svc = SVC(kernel = 'linear', random_state = 42)
svc.fit(X_train, Y_train)
# Naive Bayes Algorithm
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train, Y_train)
# Decision tree Algorithm
from sklearn.tree import DecisionTreeClassifier
dectree = DecisionTreeClassifier(criterion = 'entropy', random_state = 42)
dectree.fit(X_train, Y_train)
# Random forest Algorithm
from sklearn.ensemble import RandomForestClassifier
ranfor = RandomForestClassifier(n_estimators = 4, criterion = 'entropy', random_state = 42)
ranfor.fit(X_train, Y_train)
# Making predictions on test dataset
Y_pred_logreg = logreg.predict(X_test)
Y_pred_knn = knn.predict(X_test)
Y_pred_svc = svc.predict(X_test)
Y_pred_nb = nb.predict(X_test)
Y_pred_dectree = dectree.predict(X_test)
Y_pred_ranfor = ranfor.predict(X_test)
###Output
_____no_output_____
###Markdown
Step 5: Model Evaluation
###Code
# Evaluating using accuracy_score metric
from sklearn.metrics import accuracy_score
accuracy_logreg = accuracy_score(Y_test, Y_pred_logreg)
accuracy_knn = accuracy_score(Y_test, Y_pred_knn)
accuracy_svc = accuracy_score(Y_test, Y_pred_svc)
accuracy_nb = accuracy_score(Y_test, Y_pred_nb)
accuracy_dectree = accuracy_score(Y_test, Y_pred_dectree)
accuracy_ranfor = accuracy_score(Y_test, Y_pred_ranfor)
# Accuracy on test set
print("Logistic Regression: " + str(accuracy_logreg * 100))
print("K Nearest neighbors: " + str(accuracy_knn * 100))
print("Support Vector Classifier: " + str(accuracy_svc * 100))
print("Naive Bayes: " + str(accuracy_nb * 100))
print("Decision tree: " + str(accuracy_dectree * 100))
print("Random Forest: " + str(accuracy_ranfor * 100))
###Output
Logistic Regression: 71.42857142857143
K Nearest neighbors: 72.72727272727273
Support Vector Classifier: 73.37662337662337
Naive Bayes: 71.42857142857143
Decision tree: 68.18181818181817
Random Forest: 72.72727272727273
|
Copy_of_udacity_cs344_hw2.ipynb | ###Markdown
###Code
# Homework 2 for Udacity CS344 Course, Intro to Parallel Programming
# clone the code repo,
!git clone https://github.com/depctg/udacity-cs344-colab
!pip install git+git://github.com/depctg/nvcc4jupyter.git
# load cuda plugin
%config NVCCPluginV2.static_dir = True
%config NVCCPluginV2.relative_dir = "udacity-cs344-colab/src/HW2"
%load_ext nvcc_plugin
# change to work directory, generate makefiles
!mkdir udacity-cs344-colab/build
%cd udacity-cs344-colab/build
!cmake ../src
%%cuda --name student_func.cu
// Homework 2
// Image Blurring
//
// In this homework we are blurring an image. To do this, imagine that we have
// a square array of weight values. For each pixel in the image, imagine that we
// overlay this square array of weights on top of the image such that the center
// of the weight array is aligned with the current pixel. To compute a blurred
// pixel value, we multiply each pair of numbers that line up. In other words, we
// multiply each weight with the pixel underneath it. Finally, we add up all of the
// multiplied numbers and assign that value to our output for the current pixel.
// We repeat this process for all the pixels in the image.
// To help get you started, we have included some useful notes here.
//****************************************************************************
// For a color image that has multiple channels, we suggest separating
// the different color channels so that each color is stored contiguously
// instead of being interleaved. This will simplify your code.
// That is instead of RGBARGBARGBARGBA... we suggest transforming to three
// arrays (as in the previous homework we ignore the alpha channel again):
// 1) RRRRRRRR...
// 2) GGGGGGGG...
// 3) BBBBBBBB...
//
// The original layout is known an Array of Structures (AoS) whereas the
// format we are converting to is known as a Structure of Arrays (SoA).
// As a warm-up, we will ask you to write the kernel that performs this
// separation. You should then write the "meat" of the assignment,
// which is the kernel that performs the actual blur. We provide code that
// re-combines your blurred results for each color channel.
//****************************************************************************
// You must fill in the gaussian_blur kernel to perform the blurring of the
// inputChannel, using the array of weights, and put the result in the outputChannel.
// Here is an example of computing a blur, using a weighted average, for a single
// pixel in a small image.
//
// Array of weights:
//
// 0.0 0.2 0.0
// 0.2 0.2 0.2
// 0.0 0.2 0.0
//
// Image (note that we align the array of weights to the center of the box):
//
// 1 2 5 2 0 3
// -------
// 3 |2 5 1| 6 0 0.0*2 + 0.2*5 + 0.0*1 +
// | |
// 4 |3 6 2| 1 4 -> 0.2*3 + 0.2*6 + 0.2*2 + -> 3.2
// | |
// 0 |4 0 3| 4 2 0.0*4 + 0.2*0 + 0.0*3
// -------
// 9 6 5 0 3 9
//
// (1) (2) (3)
//
// A good starting place is to map each thread to a pixel as you have before.
// Then every thread can perform steps 2 and 3 in the diagram above
// completely independently of one another.
// Note that the array of weights is square, so its height is the same as its width.
// We refer to the array of weights as a filter, and we refer to its width with the
// variable filterWidth.
//****************************************************************************
// Your homework submission will be evaluated based on correctness and speed.
// We test each pixel against a reference solution. If any pixel differs by
// more than some small threshold value, the system will tell you that your
// solution is incorrect, and it will let you try again.
// Once you have gotten that working correctly, then you can think about using
// shared memory and having the threads cooperate to achieve better performance.
//****************************************************************************
// Also note that we've supplied a helpful debugging function called checkCudaErrors.
// You should wrap your allocation and copying statements like we've done in the
// code we're supplying you. Here is an example of the unsafe way to allocate
// memory on the GPU:
//
// cudaMalloc(&d_red, sizeof(unsigned char) * numRows * numCols);
//
// Here is an example of the safe way to do the same thing:
//
// checkCudaErrors(cudaMalloc(&d_red, sizeof(unsigned char) * numRows * numCols));
//
// Writing code the safe way requires slightly more typing, but is very helpful for
// catching mistakes. If you write code the unsafe way and you make a mistake, then
// any subsequent kernels won't compute anything, and it will be hard to figure out
// why. Writing code the safe way will inform you as soon as you make a mistake.
// Finally, remember to free the memory you allocate at the end of the function.
//****************************************************************************
#include "utils.h"
#include <cmath>
#include <cassert>
#include <stdio.h>
__global__
void gaussian_blur(const unsigned char* const inputChannel,
unsigned char* const outputChannel,
int numRows, int numCols,
const float* const filter, const int filterWidth){
// TODO
// NOTE: Be sure to compute any intermediate results in floating point
// before storing the final result as unsigned char.
// NOTE: Be careful not to try to access memory that is outside the bounds of
// the image. You'll want code that performs the following check before accessing
// GPU memory:
//
// if ( absolute_image_position_x >= numCols ||
// absolute_image_position_y >= numRows )
// {
// return;
// }
// NOTE: If a thread's absolute position 2D position is within the image, but some of
// its neighbors are outside the image, then you will need to be extra careful. Instead
// of trying to read such a neighbor value from GPU memory (which won't work because
// the value is out of bounds), you should explicitly clamp the neighbor values you read
// to be within the bounds of the image. If this is not clear to you, then please refer
// to sequential reference solution for the exact clamping semantics you should follow.
int image_c = blockIdx.x * blockDim.x + threadIdx.x;
int image_r = blockIdx.y * blockDim.y + threadIdx.y;
if(image_r >= numRows || image_c >= numCols ){
return;
}
float tmp_value = 0;
int left_index = -filterWidth/2;
int right_index = filterWidth/2+1;
for(int filter_r = left_index;filter_r < right_index;++ filter_r){
for(int filter_c = left_index;filter_c < right_index;++ filter_c){
int tmp_r = min(max(image_r+filter_r,0),numRows-1);
int tmp_c = min(max(image_c+filter_c,0),numCols-1);
float image_value = static_cast<float>(inputChannel[tmp_r*numCols+tmp_c]);
float filter_value = filter[(filter_r+filterWidth/2)*filterWidth+filter_c+filterWidth/2];
tmp_value += image_value * filter_value;
}
}
outputChannel[image_r *numCols+image_c] = tmp_value;
}
//This kernel takes in an image represented as a uchar4 and splits
//it into three images consisting of only one color channel each
__global__
void separateChannels(const uchar4* const inputImageRGBA,
int numRows,
int numCols,f
unsigned char* const redChannel,
unsigned char* const greenChannel,
unsigned char* const blueChannel){
// TODO
//
// NOTE: Be careful not to try to access memory that is outside the bounds of
// the image. You'll want code that performs the following check before accessing
// GPU memory:
//
// if ( absolute_image_position_x >= numCols ||
// absolute_image_position_y >= numRows )
// {
// return;
// }
int image_c = blockIdx.x * blockDim.x + threadIdx.x;
int image_r = blockIdx.y * blockDim.y + threadIdx.y;
int tmp_index = image_r * numCols + image_c;
if (image_r >= numRows || image_c >= numCols ){
return;
}
redChannel[tmp_index] = inputImageRGBA[tmp_index].x;
greenChannel[tmp_index] = inputImageRGBA[tmp_index].y;
blueChannel[tmp_index] = inputImageRGBA[tmp_index].z;
}
//This kernel takes in three color channels and recombines them
//into one image. The alpha channel is set to 255 to represent
//that this image has no transparency.
__global__
void recombineChannels(const unsigned char* const redChannel,
const unsigned char* const greenChannel,
const unsigned char* const blueChannel,
uchar4* const outputImageRGBA,
int numRows,
int numCols){
const int2 thread_2D_pos = make_int2( blockIdx.x * blockDim.x + threadIdx.x,
blockIdx.y * blockDim.y + threadIdx.y);
const int thread_1D_pos = thread_2D_pos.y * numCols + thread_2D_pos.x;
//make sure we don't try and access memory outside the image
//by having any threads mapped there return early
if (thread_2D_pos.x >= numCols || thread_2D_pos.y >= numRows)
return;
unsigned char red = redChannel[thread_1D_pos];
unsigned char green = greenChannel[thread_1D_pos];
unsigned char blue = blueChannel[thread_1D_pos];
//Alpha should be 255 for no transparency
uchar4 outputPixel = make_uchar4(red, green, blue, 255);
outputImageRGBA[thread_1D_pos] = outputPixel;
}
unsigned char *d_red, *d_green, *d_blue;
float *d_filter;
void allocateMemoryAndCopyToGPU(const size_t numRowsImage, const size_t numColsImage,
const float* const h_filter, const size_t filterWidth){
//allocate memory for the three different channels
//original
checkCudaErrors(cudaMalloc(&d_red, sizeof(unsigned char) * numRowsImage * numColsImage));
checkCudaErrors(cudaMalloc(&d_green, sizeof(unsigned char) * numRowsImage * numColsImage));
checkCudaErrors(cudaMalloc(&d_blue, sizeof(unsigned char) * numRowsImage * numColsImage));
//TODO:
//Allocate memory for the filter on the GPU
//Use the pointer d_filter that we have already declared for you
//You need to allocate memory for the filter with cudaMalloc
//be sure to use checkCudaErrors like the above examples to
//be able to tell if anything goes wrong
//IMPORTANT: Notice that we pass a pointer to a pointer to cudaMalloc
checkCudaErrors(cudaMalloc(&d_filter,sizeof(float)*filterWidth*filterWidth));
//TODO:
//Copy the filter on the host (h_filter) to the memory you just allocated
//on the GPU. cudaMemcpy(dst, src, numBytes, cudaMemcpyHostToDevice);
//Remember to use checkCudaErrors!
checkCudaErrors(cudaMemcpy(d_filter, h_filter, sizeof(float) * filterWidth * filterWidth, cudaMemcpyHostToDevice));
}
void your_gaussian_blur(const uchar4 * const h_inputImageRGBA, uchar4 * const d_inputImageRGBA,
uchar4* const d_outputImageRGBA, const size_t numRows, const size_t numCols,
unsigned char *d_redBlurred,
unsigned char *d_greenBlurred,
unsigned char *d_blueBlurred,
const int filterWidth){
//TODO: Set reasonable block size (i.e., number of threads per block)
const int block_len = 32;
const dim3 blockSize(block_len,block_len);
//TODO:
//Compute correct grid size (i.e., number of blocks per kernel launch)
//from the image size and and block size.
const int grid_width = (numCols-1)/block_len+1;
const int grid_height = (numRows-1)/block_len +1;
const dim3 gridSize(grid_width,grid_height);
//TODO: Launch a kernel for separating the RGBA image into different color channels
separateChannels<<<gridSize,blockSize>>>(d_inputImageRGBA,
numRows,
numCols,
d_red,
d_green,
d_blue);
// Call cudaDeviceSynchronize(), then call checkCudaErrors() immediately after
// launching your kernel to make sure that you didn't make any mistakes.
cudaDeviceSynchronize();
checkCudaErrors(cudaGetLastError());
//TODO: Call your convolution kernel here 3 times, once for each color channel.
gaussian_blur<<<gridSize,blockSize>>>(d_red,d_redBlurred,numRows,numCols,d_filter,filterWidth);
gaussian_blur<<<gridSize,blockSize>>>(d_green,d_greenBlurred,numRows,numCols,d_filter,filterWidth);
gaussian_blur<<<gridSize,blockSize>>>(d_blue,d_blueBlurred,numRows,numCols,d_filter,filterWidth);
// Again, call cudaDeviceSynchronize(), then call checkCudaErrors() immediately after
// launching your kernel to make sure that you didn't make any mistakes.
cudaDeviceSynchronize(); checkCudaErrors(cudaGetLastError());
// Now we recombine your results. We take care of launching this kernel for you.
//
// NOTE: This kernel launch depends on the gridSize and blockSize variables,
// which you must set yourself.
recombineChannels<<<gridSize, blockSize>>>(d_redBlurred,
d_greenBlurred,
d_blueBlurred,
d_outputImageRGBA,
numRows,
numCols);
cudaDeviceSynchronize(); checkCudaErrors(cudaGetLastError());
}
//Free all the memory that we allocated
//TODO: make sure you free any arrays that you allocated
void cleanup() {
checkCudaErrors(cudaFree(d_red));
checkCudaErrors(cudaFree(d_green));
checkCudaErrors(cudaFree(d_blue));
}
# make the cuda project
!make HW2
print("\n====== RESULT OF HW2 =======\n")
!bin/HW2 ../src/HW1/cinque_terre.gold
# plot output images
import matplotlib.pyplot as plt
_,ax = plt.subplots(2,2, dpi=150)
ax[0][0].imshow(plt.imread("../src/HW1/cinque_terre_small.jpg"))
ax[0][0].set_title("original")
ax[0][0].grid(False)
ax[0][1].imshow(plt.imread("HW2_output.png"))
ax[0][1].set_title("output")
ax[0][1].grid(False)
ax[1][0].imshow(plt.imread("HW2_reference.png"))
ax[1][0].set_title("reference")
ax[1][0].grid(False)
ax[1][1].imshow(plt.imread("HW2_differenceImage.png"))
ax[1][1].set_title("difference")
ax[1][1].grid(False)
plt.show()
###Output
_____no_output_____ |
examples/09_Pyro_Integration/Pyro_SVDKL_GridInterp.ipynb | ###Markdown
SV-DKL with Pyro
###Code
import math
import torch
import gpytorch
import pyro
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '3droad.mat')
data = torch.Tensor(loadmat('3droad.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('bn1', torch.nn.BatchNorm1d(1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('bn2', torch.nn.BatchNorm1d(500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('bn3', torch.nn.BatchNorm1d(50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
# num_features is the number of final features extracted by the neural network, in this case 2.
num_features = 2
from gpytorch.models import PyroVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution, GridInterpolationVariationalStrategy
class PyroSVDKLGridInterpModel(PyroVariationalGP):
def __init__(self, likelihood, grid_size=32, grid_bounds=[(-1, 1), (-1, 1)], name_prefix="svdkl_grid_example"):
variational_distribution = CholeskyVariationalDistribution(num_inducing_points=(grid_size ** num_features))
variational_strategy = GridInterpolationVariationalStrategy(self,
grid_size=grid_size,
grid_bounds=grid_bounds,
variational_distribution=variational_distribution)
super(PyroSVDKLGridInterpModel, self).__init__(variational_strategy,
likelihood,
num_data=train_y.numel(),
name_prefix=name_prefix)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(
lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(0.001, 1., sigma=0.1, transform=torch.exp)
))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
class DKLModel(gpytorch.Module):
def __init__(self, likelihood, feature_extractor, num_features, grid_bounds=(-1., 1.)):
super(DKLModel, self).__init__()
self.feature_extractor = feature_extractor
self.gp_layer = PyroSVDKLGridInterpModel(likelihood)
self.grid_bounds = grid_bounds
self.num_features = num_features
def features(self, x):
features = self.feature_extractor(x)
features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
return features
def forward(self, x):
res = self.gp_layer(self.features(x))
return res
def guide(self, x, y):
self.gp_layer.guide(self.features(x), y)
def model(self, x, y):
pyro.module(self.gp_layer.name_prefix + ".feature_extractor", self.feature_extractor)
self.gp_layer.model(self.features(x), y)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = DKLModel(likelihood, feature_extractor, num_features=num_features).cuda()
from pyro import optim
from pyro import infer
optimizer = optim.Adam({"lr": 0.1})
elbo = infer.Trace_ELBO(num_particles=256, vectorize_particles=True)
svi = infer.SVI(model.model, model.guide, optimizer, elbo)
num_epochs = 3
# Not enough for this model to converge, but enough for a fast example
for i in range(num_epochs):
# Within each iteration, we will go over each minibatch of data
for minibatch_i, (x_batch, y_batch) in enumerate(train_loader):
loss = svi.step(x_batch, y_batch)
print('Epoch {} Loss {}'.format(i + 1, loss))
model.eval()
likelihood.eval()
with torch.no_grad():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 10.290990829467773
###Markdown
SV-DKL with Pyro
###Code
import math
import torch
import gpytorch
import pyro
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '3droad.mat')
data = torch.Tensor(loadmat('3droad.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('bn1', torch.nn.BatchNorm1d(1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('bn2', torch.nn.BatchNorm1d(500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('bn3', torch.nn.BatchNorm1d(50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
# num_features is the number of final features extracted by the neural network, in this case 2.
num_features = 2
from gpytorch.models import PyroGP
from gpytorch.variational import CholeskyVariationalDistribution, GridInterpolationVariationalStrategy
class PyroSVDKLGridInterpModel(PyroGP):
def __init__(self, likelihood, grid_size=32, grid_bounds=[(-1, 1), (-1, 1)], name_prefix="svdkl_grid_example"):
variational_distribution = CholeskyVariationalDistribution(num_inducing_points=(grid_size ** num_features))
variational_strategy = GridInterpolationVariationalStrategy(self,
grid_size=grid_size,
grid_bounds=grid_bounds,
variational_distribution=variational_distribution)
super(PyroSVDKLGridInterpModel, self).__init__(variational_strategy,
likelihood,
num_data=train_y.numel(),
name_prefix=name_prefix)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(
lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(0.001, 1., sigma=0.1, transform=torch.exp)
))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
class DKLModel(gpytorch.Module):
def __init__(self, likelihood, feature_extractor, num_features, grid_bounds=(-1., 1.)):
super(DKLModel, self).__init__()
self.feature_extractor = feature_extractor
self.gp_layer = PyroSVDKLGridInterpModel(likelihood)
self.grid_bounds = grid_bounds
self.num_features = num_features
def features(self, x):
features = self.feature_extractor(x)
features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
return features
def forward(self, x):
res = self.gp_layer(self.features(x))
return res
def guide(self, x, y):
self.gp_layer.guide(self.features(x), y)
def model(self, x, y):
pyro.module(self.gp_layer.name_prefix + ".feature_extractor", self.feature_extractor)
self.gp_layer.model(self.features(x), y)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = DKLModel(likelihood, feature_extractor, num_features=num_features).cuda()
from pyro import optim
from pyro import infer
optimizer = optim.Adam({"lr": 0.1})
elbo = infer.Trace_ELBO(num_particles=256, vectorize_particles=True)
svi = infer.SVI(model.model, model.guide, optimizer, elbo)
num_epochs = 3
# Not enough for this model to converge, but enough for a fast example
for i in range(num_epochs):
# Within each iteration, we will go over each minibatch of data
for minibatch_i, (x_batch, y_batch) in enumerate(train_loader):
loss = svi.step(x_batch, y_batch)
print('Epoch {} Loss {}'.format(i + 1, loss))
model.eval()
likelihood.eval()
with torch.no_grad():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 10.290990829467773
###Markdown
SV-DKL with Pyro
###Code
import math
import torch
import gpytorch
import pyro
from matplotlib import pyplot as plt
# Make plots inline
%matplotlib inline
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '3droad.mat')
data = torch.Tensor(loadmat('3droad.mat')['data'])
X = data[:, :-1]
X = X - X.min(0)[0]
X = 2 * (X / X.max(0)[0]) - 1
y = data[:, -1]
# Use the first 80% of the data for training, and the last 20% for testing.
train_n = int(floor(0.8*len(X)))
train_x = X[:train_n, :].contiguous().cuda()
train_y = y[:train_n].contiguous().cuda()
test_x = X[train_n:, :].contiguous().cuda()
test_y = y[train_n:].contiguous().cuda()
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(train_x, train_y)
train_loader = DataLoader(train_dataset, batch_size=1024, shuffle=True)
data_dim = train_x.size(-1)
class LargeFeatureExtractor(torch.nn.Sequential):
def __init__(self):
super(LargeFeatureExtractor, self).__init__()
self.add_module('linear1', torch.nn.Linear(data_dim, 1000))
self.add_module('bn1', torch.nn.BatchNorm1d(1000))
self.add_module('relu1', torch.nn.ReLU())
self.add_module('linear2', torch.nn.Linear(1000, 500))
self.add_module('bn2', torch.nn.BatchNorm1d(500))
self.add_module('relu2', torch.nn.ReLU())
self.add_module('linear3', torch.nn.Linear(500, 50))
self.add_module('bn3', torch.nn.BatchNorm1d(50))
self.add_module('relu3', torch.nn.ReLU())
self.add_module('linear4', torch.nn.Linear(50, 2))
feature_extractor = LargeFeatureExtractor().cuda()
# num_features is the number of final features extracted by the neural network, in this case 2.
num_features = 2
from gpytorch.models import PyroVariationalGP
from gpytorch.variational import CholeskyVariationalDistribution, GridInterpolationVariationalStrategy
class PyroSVDKLGridInterpModel(PyroVariationalGP):
def __init__(self, likelihood, grid_size=32, grid_bounds=[(-1, 1), (-1, 1)], name_prefix="svdkl_grid_example"):
variational_distribution = CholeskyVariationalDistribution(num_inducing_points=(grid_size ** num_features))
variational_strategy = GridInterpolationVariationalStrategy(self,
grid_size=grid_size,
grid_bounds=grid_bounds,
variational_distribution=variational_distribution)
super(PyroSVDKLGridInterpModel, self).__init__(variational_strategy,
likelihood,
num_data=train_y.numel(),
name_prefix=name_prefix)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel(
lengthscale_prior=gpytorch.priors.SmoothedBoxPrior(0.001, 1., sigma=0.1, log_transform=True)
))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
class DKLModel(gpytorch.Module):
def __init__(self, likelihood, feature_extractor, num_features, grid_bounds=(-1., 1.)):
super(DKLModel, self).__init__()
self.feature_extractor = feature_extractor
self.gp_layer = PyroSVDKLGridInterpModel(likelihood)
self.grid_bounds = grid_bounds
self.num_features = num_features
def features(self, x):
features = self.feature_extractor(x)
features = gpytorch.utils.grid.scale_to_bounds(features, self.grid_bounds[0], self.grid_bounds[1])
return features
def forward(self, x):
res = self.gp_layer(self.features(x))
return res
def guide(self, x, y):
self.gp_layer.guide(self.features(x), y)
def model(self, x, y):
pyro.module(self.gp_layer.name_prefix + ".feature_extractor", self.feature_extractor)
self.gp_layer.model(self.features(x), y)
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = DKLModel(likelihood, feature_extractor, num_features=num_features).cuda()
from pyro import optim
from pyro import infer
optimizer = optim.Adam({"lr": 0.1})
elbo = infer.Trace_ELBO(num_particles=256, vectorize_particles=True)
svi = infer.SVI(model.model, model.guide, optimizer, elbo)
num_epochs = 3
# Not enough for this model to converge, but enough for a fast example
for i in range(num_epochs):
# Within each iteration, we will go over each minibatch of data
for minibatch_i, (x_batch, y_batch) in enumerate(train_loader):
loss = svi.step(x_batch, y_batch)
print('Epoch {} Loss {}'.format(i + 1, loss))
model.eval()
likelihood.eval()
with torch.no_grad():
preds = model(test_x)
print('Test MAE: {}'.format(torch.mean(torch.abs(preds.mean - test_y))))
###Output
Test MAE: 10.290990829467773
|
ml/feature_scaling.ipynb | ###Markdown
[Feature Scaling — Effect Of Different Scikit-Learn Scalers: Deep Dive](https://towardsdatascience.com/feature-scaling-effect-of-different-scikit-learn-scalers-deep-dive-8dec775d4946) Feature scaling is a vital element of data preprocessing for machine learning. Implementing the right scaler is equally important for precise foresight with machine learning algorithms. In supervised machine learning, we calculate the value of the output variable by supplying input variable values to an algorithm. Machine learning algorithm relates the input and output variable with a mathematical function.```Output variable value = (2.4* Input Variable 1 )+ (6*Input Variable 2) + 3.5```There are a few specific assumptions behind each of the machine learning algorithms. To build an accurate model, we need to ensure that the input data meets those assumptions. In case, the data fed to machine learning algorithms do not satisfy the assumptions then prediction accuracy of the model is compromised.Most of the supervised algorithms in sklearn require standard normally distributed input data centred around zero and have variance in the same order. If the value range from 1 to 10 for an input variable and 4000 to 700,000 for the other variable then the second input variable values will dominate and the algorithm will not be able to learn from other features correctly as expected.In this article, I will illustrate the effect of scaling the input variables with different scalers in scikit-learn and three different regression algorithms.---- 0. Preparation In the below code, we import the packages we will be using for the analysis. We will create the test data with the help of make_regression
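Before that, a small illustrative sketch (not part of the original analysis) makes the dominance problem concrete: one toy feature spans 1 to 10, the other 4,000 to 700,000, and `StandardScaler` brings both to a comparable scale.
```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy data: feature 1 ranges from 1 to 10, feature 2 from 4,000 to 700,000.
toy = np.array([[1.0, 4000.0],
                [5.0, 350000.0],
                [10.0, 700000.0]])

print(toy.std(axis=0))                       # spreads differ by several orders of magnitude
scaled = StandardScaler().fit_transform(toy)
print(scaled.std(axis=0))                    # both columns now have unit variance
```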
###Code
# Import necessary libraries
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import *
from sklearn.linear_model import *
###Output
_____no_output_____
###Markdown
We will use the sample size of 100 records with three independent (input) variables. Further, we will inject three outliers using the method “np.random.normal”
###Code
X, y, coef = make_regression(n_samples=100, n_features=3,noise=2,tail_strength=0.5,coef=True, random_state=0)
X[:3] = 1 + 0.9 * np.random.normal(size=(3,3))
y[:3] = 1 + 2 * np.random.normal(size=3)
###Output
_____no_output_____
###Markdown
We will print the real coefficients of the sample datasets as a reference and compare with predicted coefficients.
###Code
print('The real coefficients are:')
print(coef)
###Output
The real coefficients are:
[39.84342586 6.2712952 62.88984391]
###Markdown
We will train the algorithm with 80 records and reserve the remaining 20 samples unseen by the algorithm earlier for testing the accuracy of the model.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
###Output
_____no_output_____
###Markdown
We will study the scaling effect with the scikit-learn StandardScaler, MinMaxScaler, power transformers, RobustScaler and, MaxAbsScaler.
###Code
regressors=[StandardScaler(),MinMaxScaler(),
PowerTransformer(method='yeo-johnson'),
RobustScaler(quantile_range=(25,75)),MaxAbsScaler()]
###Output
_____no_output_____
###Markdown
All the regression model we will be using is mentioned in a list object.
###Code
models=[Ridge(alpha=1.0),HuberRegressor(),LinearRegression()]
###Output
_____no_output_____
###Markdown
In the code below, we scale the training and test sample input variable by calling each scaler in succession from the regressor list defined earlier. We will draw a scatter plot of the original first input variable and scaled the first input variable to get an insight on various scaling. We see each of these plots little later in this article.Further, we fit each of the models with scaled input variables from different scalers and predict the values of dependent variables for test sample dataset.
###Code
#plt.title?
for regressor in regressors:
X_train_scaled=regressor.fit_transform(X_train)
X_test_scaled=regressor.transform(X_test)
Scaled =plt.scatter(X_train_scaled[:,0],y_train, marker='^', alpha=0.8)
Original=plt.scatter(X_train[:,0],y_train)
plt.legend((Scaled, Original),('Scaled', 'Original'),loc='best',fontsize=13)
plt.title(label=regressor, fontdict={'fontsize': 18})
plt.xlabel("Feature 1")
plt.ylabel("Train Target")
plt.show()
for model in models:
reg_lin=model.fit(X_train_scaled, y_train)
y_pred=reg_lin.predict(X_test_scaled)
print("The calculated coeffiects with ", model , "and", regressor, reg_lin.coef_)
###Output
_____no_output_____ |
data_scraping.ipynb | ###Markdown
Credentials
###Code
!pip install tweepy
import tweepy
import pandas as pd
####input your credentials here
consumer_key = ''
consumer_secret = ''
access_token = ''
access_token_secret = ''
###Output
_____no_output_____
###Markdown
Scraping
###Code
auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
# will notify the user on rate limit and wait by itself; no need for sleep.
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
search_words = ""
date_since = "2007-01-01"
#date_until = "2020-12-31"
geoc="-1.28333,36.81667,300km"
# use the .Cursor method to get an object containing tweets containing the hashtag
#use .Cursor() to search twitter for tweets containing the search term.
#You can restrict the number of tweets returned by specifying a number in the .items() method.
tweets = tweepy.Cursor(api.search,
q=search_words,
lang="en",
since=date_since,
#until=date_until,
geocode=geoc).items()
[tweet.text for tweet in tweets]
#Sometimes you may want to remove retweets as they contain duplicate content
#that might skew your analysis if you are only looking at word frequency
#below you ignore all retweets by adding -filter:retweets to your query
new_search = search_words + " -filter:retweets" + " -filter:links"
new_search
tweets = tweepy.Cursor(api.search,
q=new_search,
lang="en",
since=date_since, #until=date_until,
geocode=geoc,
tweet_mode= 'extended').items()
users_locs = [[tweet.user.screen_name,
tweet.full_text,
tweet.user.location,
tweet.user.description,
tweet.user.friends_count,
tweet.user.followers_count,
tweet.user.statuses_count,
tweet.created_at,
tweet.retweet_count,
tweet.favorite_count,
tweet.entities['hashtags']] for tweet in tweets]
users_locs
hate1 = pd.DataFrame(data=users_locs,
columns=['user', 'tweet',"location", 'description','friends_count','followers_count',
'statuses_count', 'tweet_date', 'retweet_count','likes', 'hashtags'])
hate1
# let us export our final dataframe to csv
hate1.to_csv('sampledata1.csv')
###Output
_____no_output_____
###Markdown
Merging the Data
###Code
import pandas as pd
a = pd.read_csv('/content/sampledata1.csv')
b = pd.read_csv('/content/sampledata2.csv')
c = pd.read_csv('/content/sampledata3.csv')
d = pd.read_csv('/content/sampledata4.csv')
e = pd.read_csv('/content/sampledata5.csv')
frames = [a, b, c, d, e]
sample_dataset2 = pd.concat(frames)
sample_dataset2.shape
import pandas as pd
f = pd.read_csv('/content/Sampledata6.csv')
g = pd.read_csv('/content/sampledata7.csv')
h = pd.read_csv('/content/sampledata8.csv')
i = pd.read_csv('/content/sampledata9.csv')
j = pd.read_csv('/content/sampledata10.csv')
frames = [f, g, h, i, j]
sample_dataset3 = pd.concat(frames)
sample_dataset3.shape
# let us export our final dataframe to csv
sample_dataset3.to_csv('hatespeech_sample_data2.csv')
sample_dataset5.shape
###Output
_____no_output_____
###Markdown
###Code
###Output
_____no_output_____
###Markdown
**structure of URL **
Page 1 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews
Page 2 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews?page=2
Page 10 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews?page=10
Question : When to stop, if we don't know the last page number n ?? (See the sketch below: stop when a page request fails or returns no reviews.)
Store the raw data in a CSV for cleaning, or upload it to SQL first and then clean it with Python requests ????
What if it's a dynamic platform ???
Add a delay between requests, otherwise we risk overloading (damaging) the website
Extract only the rating star into a separate column ?
We need to scrape as much feedback as possible across different subjects ( statistics/Python/SQL/...)
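A possible stopping rule is sketched below, assuming a past-the-end page either fails or contains no `reviewText` blocks; the `User-Agent` header and the one-second delay are illustrative choices, not values taken from this notebook.
```python
import time
import requests
from bs4 import BeautifulSoup

BASE = 'https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews'
headers = {'User-Agent': 'Mozilla/5.0'}  # illustrative header

all_reviews = []
page_number = 1
while True:
    resp = requests.get(f'{BASE}?page={page_number}', headers=headers)
    if resp.status_code != 200:
        break                                   # request error -> stop
    soup = BeautifulSoup(resp.content, 'html.parser')
    blocks = soup.find_all('div', class_='reviewText')
    if not blocks:
        break                                   # no reviews on this page -> past the last page
    all_reviews.extend(b.get_text(strip=True) for b in blocks)
    page_number += 1
    time.sleep(1)                               # polite delay between requests

print(len(all_reviews), 'reviews collected')
```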
###Code
#importation of beautifulSoup
from bs4 import BeautifulSoup
import requests
URL = 'https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews'
page = requests.get(URL)
# Html parsing
soup = BeautifulSoup(page.content, 'html.parser')
# print the resquest page with function Prettify
print(soup.prettify())
#print title of the page
title=soup.title #print title
print(title.text) #get only text
#print the 1st reviewText example / class is a reserved keyword in Python, so BeautifulSoup uses class_
reviewText=soup.find('div',class_='reviewText')
print(reviewText)
#clean somes codes to take only text. p stands for paragraph.
match=reviewText.p.text
print(match)
#find all the reviewText blocks and strip the other markup; next: how to separate the text and save it
for reviewText in soup.find_all('div',class_='reviewText'):
match=reviewText.p ##add .text will have only
print(match)
###Output
_____no_output_____
###Markdown
Data acquisition Getting a list with videosList of videos using the YouTube Data API [YouTube Data API](https://tools.digitalmethods.net/netvizz/youtube/mod_videos_list.php)Querying for the terms: `Global warming`, `Climate change`, `Paris agreement`, `Climate realism`. Getting all comments (including replies) to all videos in the listGet all comments to a video using the [CommentThreads method of YouTube Developer API](https://developers.google.com/youtube/v3/docs/commentThreads/list)The API documentation of CommentThreads states that it might not contain all replies: >A commentThread resource contains information about a YouTube comment thread, which comprises a top-level comment and replies, if any exist, to that comment. A commentThread resource can represent comments about either a video or a channel.>Both the top-level comment and the replies are actually comment resources nested inside the commentThread resource. The commentThread resource does not necessarily contain all replies to a comment, and you need to use the comments.list method if you want to retrieve all replies for a particular comment. Also note that some comments do not have replies. Therefore we use the [comments.list method](https://developers.google.com/youtube/v3/docs/comments/list) to get all replies to a comment.
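As a minimal illustration of the two endpoints (the same URLs are used by the class further below): one call fetches the top-level comment threads of a video, a second call fetches all replies to one of them. `API_KEY` and `VIDEO_ID` are placeholders.
```python
import json
import requests

API_KEY = 'YOUR_API_KEY'      # placeholder
VIDEO_ID = 'SOME_VIDEO_ID'    # placeholder

# Top-level comments of the video (commentThreads.list)
threads = json.loads(requests.get(
    'https://www.googleapis.com/youtube/v3/commentThreads'
    f'?part=snippet&maxResults=100&videoId={VIDEO_ID}&key={API_KEY}').text)

# All replies to the first top-level comment (comments.list with parentId)
parent_id = threads['items'][0]['snippet']['topLevelComment']['id']
replies = json.loads(requests.get(
    'https://youtube.googleapis.com/youtube/v3/comments'
    f'?part=snippet&parentId={parent_id}&key={API_KEY}').text)

print(len(replies['items']), 'replies to the first top-level comment')
```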
###Code
API_KEY = 'AIzaSyAGegTsA3vp5N544npMDkbfDwZuqCOjeh0'
data_path = 'videolist_search500_2021_02_07-00_46_57_climate_crisis.tab'
def load_videos(data_path, min_comments_count = 3):
videos = pd.read_csv(data_path, sep='\t',header=(0))
#remove entries where commentCount is None
videos = videos.dropna(how='all', subset=['commentCount'])
#remove videos where comments count is lesser then minimum
videos.drop(videos[videos['commentCount'] < min_comments_count].index, inplace = True)
videos = videos.sort_values(['commentCount'], ascending=[False])
return videos
###Output
_____no_output_____
###Markdown
Class to load all comments of a video
###Code
class Video_comments:
def __init__(self, api_key):
self.api_key = api_key
#self.video_id = video_id
self.max_results = 100
self.comments_df = None
self.video_published_at = None
self.search_term = None
'''Load all replies of top-level comments and append a dataframe with all top-level comments and replies.
(Appending to the df and loading replies should be divided into separate methods.)'''
def _add_to_dataframe(self, response):
for i, main_comment in enumerate(response['items']):
comment = main_comment['snippet']['topLevelComment']['snippet']
new_row = pd.Series(data={
'id':main_comment['snippet']['topLevelComment']['id'],
'threadId':main_comment['snippet']['topLevelComment']['id'],
'published_at':comment['publishedAt'] ,
'author_name': comment['authorDisplayName'],
'text': comment['textOriginal'],
'likeCount':comment['likeCount'],
'replyCount':main_comment['snippet']['totalReplyCount'],
'authorChannelId':comment['authorChannelId']['value'],
'is_reply': 0,
'video_id': comment['videoId'],
'video_published_at':self.video_published_at,
'search_term':self.search_term})
self.comments_df = self.comments_df.append(new_row, ignore_index=True)
#check if the top level comment has replies. If yey then get these too and add to df
request_replies = requests.get(f"https://youtube.googleapis.com/youtube/v3/comments?part=snippet&parentId={main_comment['snippet']['topLevelComment']['id']}&key={self.api_key}")
response_replies = json.loads(request_replies.text)
#if response_replies['items'] > 0 then the main comment has replies
if(len(response_replies['items']) > 0):
for i, main_reply in enumerate(response_replies['items']):
reply = main_reply['snippet']
new_row = pd.Series(data={
'id':reply['parentId'],
'threadId':main_comment['snippet']['topLevelComment']['id'],
'published_at':reply['publishedAt'] ,
'author_name': reply['authorDisplayName'],
'text': reply['textOriginal'],
'likeCount':reply['likeCount'],
'replyCount': 0,
'authorChannelId':reply['authorChannelId']['value'],
'is_reply': 1,
'video_id': comment['videoId'],
'video_published_at':self.video_published_at,
'search_term':self.search_term})
self.comments_df = self.comments_df.append(new_row, ignore_index=True)
'''Load (and append comments dataframe) recursively comments from next page until there are no next page. '''
def _get_next_page(self, response):
request1 = requests.get(f"https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&maxResults={self.max_results}&pageToken={str(response['nextPageToken'])}&videoId={self.video_id}&key={self.api_key}")
response1 = json.loads(request1.text)
self._add_to_dataframe(response1)
if ('nextPageToken' in response1.keys()):
self._get_next_page(response1)
'''Start loading comments. Paginated.'''
def get_comments(self, video_id, video_published_at, search_term):
self.search_term = search_term
self.video_published_at = video_published_at
self.comments_df = pd.DataFrame({
'id':[],
'replyCount': [],
'likeCount': [],
'published_at': [],
'author_name': [],
'text': [],
'authorChannelId':[],
'is_reply': [],
'threadId':[],
'video_id':[],
'video_published_at': [],
'search_term':[]},
columns = [ 'id',
'replyCount',
'likeCount',
'published_at',
'author_name',
'text',
'authorChannelId',
'is_reply',
'threadId',
'video_id',
'video_published_at',
'search_term'])
self.video_id = video_id
request = requests.get(f"https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&maxResults={self.max_results}&videoId={self.video_id}&key={self.api_key}")
response = json.loads(request.text)
#print(len(self.comments_df))
#print('ADDING FIRST PAGE')
self._add_to_dataframe(response)
if 'nextPageToken' in response.keys():
self._get_next_page(response)
self.video_published_at = None
self.search_term = None
return self.comments_df
###Output
_____no_output_____
###Markdown
vid_comments = Video_comments('AIzaSyAMTJJtNemBqO6TKRj-khTO9zT2uCQsJvc')
comments_df = vid_comments.get_comments('wAwIR1CEqP0', 'blaa', 'blubb')
comments_df.shape
###Code
"""List with all API keys"""
api_keys = np.array([
'AIzaSyAGegTsA3vp5N544npMDkbfDwZuqCOjeh0',
])
#'AIzaSyDJdq6pbdqIdkQ_atIc29hAj7tye7Zv0as',
# 'AIzaSyBgQr5rzBrDK9Y19ZhvgmeSGuONI0bsJLg'
# 'AIzaSyAGegTsA3vp5N544npMDkbfDwZuqCOjeh0',
#'AIzaSyBObUNQjuCFbwrbrc1-KPbueNb3N1Uawmg',
# 'AIzaSyAjVtZXdprTpvnaTKVIErQyDaBVRuV75Rk',
#videos = pd.read_csv("summery_vid_lists/2021-03-13-12-42-01_master_video_list_below_5000.csv", sep="\t")
#videos[videos.Comments_Downloaded==False].iloc[:35].commentCount.sum()
#videos[videos.Comments_Downloaded==False].iloc[35:120].to_csv("summery_vid_lists/2021-03-13-12-42-01_master_video_list_below_5000_Flo.csv", sep="\t")
def create_comments_csv(videolist_name, api_keys, max_download=10000):
"""
This method creates a csv file of comments by iterating through the videos in the specified videolist.
A Google API key needs to be provided.
The final csv is stored at data_raw/{number videos}_videos_{number comments}_comments_{your videlist}.csv
"""
videos = load_videos(videolist_name)
key = 0
vid_comments = Video_comments(api_keys[key])
totalVideoCount = videos.shape[0]
counter = 1
#max_download = 10000
all_comments_df = pd.DataFrame()
for i, video in videos[1:len(videos)].iterrows():
# check whether Comments_Downloaded is True; if it is, skip to the next video.
if(video.Comments_Downloaded == False):
if((len(all_comments_df) + video.commentCount) < max_download):
try:
print('video: ',counter,' of ',totalVideoCount,' # of comments: ',video.commentCount)
comments_df = vid_comments.get_comments(video.videoId, video.publishedAt, video.search_Term)
all_comments_df = pd.concat([all_comments_df, comments_df], axis=0)
# set Comments_Downloaded to True.
#video.Comments_Downloaded = True # This does not change the original dataframe
videos.loc[i, "Comments_Downloaded"] = True
print(all_comments_df.shape,' ',comments_df.shape)
counter+=1
except Exception as e:
print('Error while downloading video: ',video.videoId,' ',repr(e))
print('Currently using key ', key+1, 'of ', len(api_keys))
elif(key < (len(api_keys)-1)):
'''If a new video's comments would exceed the limit with the API key in use,
then take the next key from the list and extend max_download by 10000.'''
key += 1
print(len(all_comments_df),' + ',video.commentCount,' > 10K therefore new API key')
vid_comments = Video_comments(api_keys[key])
max_download += 10000
#store updated list of videos
videos.to_csv(videolist_name, sep='\t', index = True)
#store the downloaded comments
try:
all_comments_df.to_csv('data_raw/comments/' + str(counter) + '_videos_' + str(len(all_comments_df)) + '_comments_' + videolist_name[-8:-4] + '.csv', index = True)
return all_comments_df
except:
print("Dataframe was not saved, but will be returned.")
return all_comments_df
download = True
if(download):
df = create_comments_csv('summery_vid_lists/2021-03-13-12-42-01_master_video_list_below_5000_Flo.csv', api_keys, 1000)
def read_folder(csv_folder):
''' Input is a folder with csv files; return list of data frames.'''
csv_folder = pathlib.Path(csv_folder).absolute()
csv_files = [f for f in csv_folder.iterdir() if f.name.endswith('csv')]
# the assign() method adds a helper column
dfs = [
pd.read_csv(csv_file)for idx, csv_file in enumerate(csv_files, 1)
]
return dfs
def read_folder(csv_folder):
''' Input is a folder with csv files; return list of data frames.'''
csv_folder = Path(csv_folder).absolute()
csv_files = [f for f in csv_folder.iterdir() if f.name.endswith('csv')]
# the assign() method adds a helper column
dfs = [
pd.read_csv(csv_file)for idx, csv_file in enumerate(csv_files, 1)
]
return dfs
def concat_csv_files(folder_name='data_raw/comments'):
dfs = read_folder(folder_name)
all_comments_df = pd.DataFrame()
for df in dfs:
df = df.drop(['Unnamed: 0'], axis=1)
all_comments_df = pd.concat([all_comments_df, df]).drop_duplicates().reset_index(drop=True)
return all_comments_df
all_df = concat_csv_files()
all_df.to_csv('data_raw/comments/concatenated_date_140321_videos_' + str(len(np.unique(all_df.video_id))) + '_comments_' + str(all_df.shape[0]) + '.csv', index = True)
videos = pd.read_csv('summery_vid_lists/2021-03-18-12-40-11_master_video_list_above_5000.csv', sep='\t')
videos[videos.Comments_Downloaded == False].commentCount
###Output
_____no_output_____
###Markdown
Data Scraping The first step is getting data. Fotunately, [NOAA](https://www.noaa.gov/) had provided many climate dataset.In this totorial, we will use the dataset from [Global Forecast System (GFS)](https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/global-forcast-system-gfs) And scrape the `GRIB data(.grb)` and `GRIB2 data(.grb2)` from `Product Types` > `GFS Analysis` > `GFS-ANL, Historical Model` which is this [link](https://www.ncei.noaa.gov/data/global-forecast-system/access/historical/analysis/)The `period of record` of this dataset is `01Jan2007–15May2020` and the data was collected `4 times per day` `(00, 06, 12 and 18 UTC)`
###Code
from bs4 import BeautifulSoup
import pandas as pd
import requests
import urllib
import os
from tqdm import tqdm
url_template = 'https://www.ncei.noaa.gov/data/global-forecast-system/access/historical/analysis/'
dates = pd.date_range("2007-01-01","2020-05-15")
def create_dir_if_not_exist(directory):
if not os.path.exists(directory):
os.makedirs(directory)
root_dir = 'data/'
create_dir_if_not_exist(root_dir)
for date in tqdm(dates):
try:
year_month = date.strftime("%Y%m")
year_month_day = date.strftime("%Y%m%d")
dir = root_dir + year_month + '/' + year_month_day + '/'
create_dir_if_not_exist(dir)
url = url_template + year_month + '/' + year_month_day + '/'
res = requests.get(url)
soup = BeautifulSoup(res.text, 'html.parser')
file_links = []
file_links = soup.findAll('a', href=lambda link: '000.grb2' in link and 'gfsanl_4' in link)
if len(file_links) == 0:
threes = soup.findAll('a', href=lambda link: '000.grb' in link and 'gfsanl_3' in link)
if len(threes) == 0:
continue
else:
file_links = threes
for link in file_links:
file_url = url + link['href']
urllib.request.urlretrieve(file_url, dir + link['href'])
except Exception as e:
print(e)
continue
!zip -r data.zip data
###Output
_____no_output_____
###Markdown
Data CollectionTo build the dataset of products and ingredients, I scraped the most popular skincare products from Walgreen's website and Watson's website. These are the most popular drugstores in America and across Asia, respectively. I then looked up each product's ingredients on CosDNA.com, my favorite website for analyzing cosmetics and their ingredients. This was all done with VBA in excel, which, if I were to do this again, I would definitely choose python! But, you live and learn.
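For reference, a rough Python sketch of the CosDNA ingredient lookup is shown here; the product URL is a placeholder, and the `iStuffList` class name is the one the VBA macro below relies on.
```python
import requests
from bs4 import BeautifulSoup

product_url = 'http://www.cosdna.com/eng/cosmetic_xxxx.html'  # placeholder product page

soup = BeautifulSoup(requests.get(product_url).content, 'html.parser')
ingredients = [el.get_text(strip=True) for el in soup.find_all(class_='iStuffList')]
print(', '.join(ingredients))
```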
###Code
Private Sub IE_Autiomation()
Dim i As Long
Dim ie As Object
Dim doc As Object
Dim htmlname As Object
Dim ing As Range
Dim html As Object ' HTML document
Dim myLinks As Object ' Links collection
Dim myLink As Object 'Single Link
Dim result As String
Dim myURL As String 'Web Links on worksheet
Dim LastRow As Integer ' VBA execution should stop here
Dim row As Integer
Dim j As Integer
Dim el As Object
Dim objElement As Object
Dim objCollection As Object
' GO COLLECT PRODUCT NAMES FROM DRUGSTORE WEBSITES SKINCARE SECTION
' Create InternetExplorer Object
Set ie = CreateObject("InternetExplorer.Application")
Set html = CreateObject("htmlfile")
ie.Visible = True
For Each URL In Sheets(1).Range("G2:G254")
row = URL.row
' Send the form data To URL As POST binary request
If ie.Busy = False Then
ie.navigate URL.Value
End If
' Wait while IE loading...
Do While ie.Busy
Application.Wait DateAdd("s", 1, Now)
Loop
result = ie.document.body.innerHTML
'now place all the data extracted from the web page into the new html document
html.body.innerHTML = result
Set myLinks = html.getElementsByTagName("a")
' LOOK UP PRODUCT INGREDIENTS ON COSDNA.COM
' Loop through the collected links and get a specific link defined by the conditions
For Each myLink In myLinks
If Left(myLink, 15) = "about:cosmetic_" Then
Sheets(1).Range("H" & row).Value = "http://www.cosdna.com/eng/" & Right(myLink, Len(myLink) - 6)
Exit For
End If
' Go to the next link
Next myLink
Set myLinks = Nothing
Next URL
' Visit each link
Set ie = CreateObject("InternetExplorer.Application")
Set html = CreateObject("htmlfile")
ie.Visible = True
For Each URL In Sheets(1).Range("G2", Cells("G", LastRow)
row = URL.row
' Send the form data To URL As POST binary request
If ie.Busy = False Then
ie.navigate URL.Value
End If
' Wait while IE loading...
Do While ie.Busy
Application.Wait DateAdd("s", 1, Now)
Loop
' Get the ingredients from the website
Set objCollection = ie.document.getElementsByClassName("iStuffList")
' Put comma delimited ingredients into excel
Set ing = Cells(URL.row, 9)
For Each el In ie.document.getElementsByClassName("iStuffList")
ing.Value = ing.Value & ", " & el.innerText
Next el
err_clear:
If Err <> 0 Then
Err.Clear
Resume Next
End If
Next URL
' Clean up
Set ie = Nothing
Set objElement = Nothing
Set objCollection = Nothing
Application.StatusBar = ""
End Sub
###Output
_____no_output_____
###Markdown
**structure of URL **
Page 1 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews?page=1
Page 2 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews?page=2
Page 10 : https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews?page=10
Question : When to stop in case we do not know how many review pages the course has ? Answer : use a request error (or an empty page) as the stopping criterion
Question : how to count the rating stars for each review ?
Remark : we need to scrape as much feedback as possible across different subjects ( statistics/Python/SQL/...)
Remark : need to add headers and a delay between requests in the request code to avoid being blocked as a bot.
Remark : use pandas to export the CSV with 3 columns (name of reviewer / comment / rating ), as sketched below
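A minimal sketch of that export step (the list names and sample rows are placeholders, not data from this notebook):
```python
import pandas as pd

reviewers = ['Reviewer A', 'Reviewer B']      # placeholder data
comments = ['Great course', 'Too fast']
ratings = [5, 3]

reviews_df = pd.DataFrame({'reviewer': reviewers, 'comment': comments, 'rating': ratings})
reviews_df.to_csv('coursera_reviews.csv', index=False)
```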
###Code
#importation of beautifulSoup
from bs4 import BeautifulSoup
import requests
import re
import pdb
URL = 'https://www.coursera.org/learn/inferential-statistical-analysis-python/reviews'
page = requests.get(URL)
# Html parsing
soup = BeautifulSoup(page.content, 'html.parser')
# print the resquest page with function Prettify
#print(soup.prettify())
block1st=soup.find('div',class_='_ng613ig review-text') #1st block of the review section (including text and rating stars)
review=block1st.find('div',class_='reviewText').find('p') #extract 1st review text
print(review.prettify()) #print the block related to the review above
rating_block1st=block1st.find('div' ,class_="_1mzojlvw", role="img",)
emptyStar=re.findall('"fill:rgba.\w+', str(rating_block1st)) #count the unfilled stars (this selector was tricky to find)
rating=len(rating_block1st)-len(emptyStar)
print(rating)
# rating stars for the 1st block out of the 25 review blocks
rating_blocks=soup.body.findAll('div',class_='_jyhj5r review review-page-review m-b-2')
parttern=re.compile(r'fill:rgba')
match=parttern.findall(str(rating_blocks[0]))
rating=5-len(match)
print('Rating of 1st comment is :', rating)
#review text
alls_review=[]
#make a loop
blocks=soup.find_all('div',class_='reviewText')
# review=Block.find('p')
# alls_review.append(review)
for reviews in blocks:
review=reviews.find('p')
alls_review.append(review)
print(alls_review)
len(alls_review)
#looping
# rgba is not filled '#f7bb56' = filled
rating_blocks=soup.body.findAll('div',class_='_jyhj5r review review-page-review m-b-2')
parttern=re.compile(r'fill:rgba')
ratings=[]
for i in list(range(25)):
matches=parttern.findall(str(rating_blocks[i]))
a = 5-len(matches)
ratings.append(a)
print(ratings)
#1st block of the review section (including text and rating stars)
block1st=soup.find('div',class_='_ng613ig review-text')
rating_block1st=block1st.find('div' ,class_="_1mzojlvw", role="img",)
#fcking ratingstar took me a day for solution
emptyStar=re.findall('"fill:rgba.\w+', str(rating_block1st))
rating=len(rating_block1st)-len(emptyStar)
print('Rating of first comment: \n', rating)
alls = []
for d in soup.findAll('div', attrs={'class':'top-review'}):
top_review = d.find('p', attrs={'class':'top-review_comment'},recursive=True)
if top_review is not None:
alls.append(top_review)
print(top_review)
###Output
<p class="top-review_comment">This is a very great course. Statistics by itself is a very powerful tool for solving real world problems. Combine it with the knowledge of Python, there no limit to what you can achieve.</p>
<p class="top-review_comment">Very good course content and mentors & teachers. The course content was very structured. I learnt a lot from the course and gained skills which will definitely gonna help me in future.</p>
|
book-d2l-en/chapter_crashcourse/naive-bayes.ipynb | ###Markdown
Naive Bayes ClassificationConditional independence is useful when dealing with data, since it simplifies a lot of equations. A popular (and very simple) algorithm is the Naive Bayes Classifier.Its key assumption is that the attributes are all independent of each other, given the labels. In other words, we have:$$p(\mathbf{x} | y) = \prod_i p(x_i | y)$$Using Bayes Theorem this leads to the classifier $p(y | \mathbf{x}) = \frac{\prod_i p(x_i | y) p(y)}{p(\mathbf{x})}$. Unfortunately, this is still intractable, since we don't know $p(x)$. Fortunately, we don't need it, since we know that $\sum_y p(y | \mathbf{x}) = 1$, hence we can always recover the normalization from$$p(y | \mathbf{x}) \propto \prod_i p(x_i | y) p(y).$$To illustrate this a bit, consider classifying emails into spam and ham. It's fair to say that the occurrence of the words `Nigeria`, `prince`, `money`, `rich` are all likely indicators that the e-mail might be spam, whereas `theorem`, `network`, `Bayes` or `statistics` are pretty good indicators that there's substance in the message. Thus, we could model the probability of occurrence for each of these words, given the respective class and then use it to score the likelihood of a text. In fact, for a long time this *is* what many so-called [Bayesian spam filters](https://en.wikipedia.org/wiki/Naive_Bayes_spam_filtering) used. Optical Character RecognitionSince images are much easier to deal with, we will illustrate the workings of a Naive Bayes classifier for distinguishing digits on the MNIST dataset. The problem is that we don't actually know $p(y)$ and $p(x_i | y)$. So we need to *estimate* it given some training data first. This is what is called *training* the model. Estimating $p(y)$ is not too hard. Since we are only dealing with 10 classes, this is pretty easy - simply count the number of occurrences $n_y$ for each of the digits and divide it by the total amount of data $n$. For instance, if digit 8 occurs $n_8 = 5,800$ times and we have a total of $n = 60,000$ images, the probability estimate is $p(y=8) = 0.0967$.Now on to slightly more difficult things - $p(x_i | y)$. Since we picked black and white images, $p(x_i | y)$ denotes the probability that pixel $i$ is switched on for class $y$. Just like before we can go and count the number of times $n_{iy}$ such that an event occurs and divide it by the total number of occurrences of y, i.e. $n_y$. But there's something slightly troubling: certain pixels may never be black (e.g. for very well cropped images the corner pixels might always be white). A convenient way for statisticians to deal with this problem is to add pseudo counts to all occurrences. Hence, rather than $n_{iy}$ we use $n_{iy}+1$ and instead of $n_y$ we use $n_{y} + 1$. This is also called [Laplace Smoothing](https://en.wikipedia.org/wiki/Additive_smoothing).
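As a small worked example of the smoothed estimates, using the counts quoted above:
```python
n, n_y = 60000, 5800        # total images and images of digit 8, as in the text
n_iy = 0                    # a pixel that never fired for this class

p_y = n_y / n                               # ~0.0967, the class prior quoted above
p_pixel_given_y = (n_iy + 1) / (n_y + 1)    # Laplace smoothing keeps this strictly positive
print(p_y, p_pixel_given_y)
```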
###Code
%matplotlib inline
from matplotlib import pyplot as plt
from IPython import display
display.set_matplotlib_formats('svg')
import mxnet as mx
from mxnet import nd
import numpy as np
# We go over one observation at a time (speed doesn't matter here)
def transform(data, label):
return (nd.floor(data/128)).astype(np.float32), label.astype(np.float32)
mnist_train = mx.gluon.data.vision.MNIST(train=True, transform=transform)
mnist_test = mx.gluon.data.vision.MNIST(train=False, transform=transform)
# Initialize the counters
xcount = nd.ones((784,10))
ycount = nd.ones((10))
for data, label in mnist_train:
y = int(label)
ycount[y] += 1
xcount[:,y] += data.reshape((784))
# using broadcast again for division
py = ycount / ycount.sum()
px = (xcount / ycount.reshape(1,10))
###Output
_____no_output_____
###Markdown
Now that we computed per-pixel counts of occurrence for all pixels, it's time to see how our model behaves. Time to plot it. This is where it is so much more convenient to work with images. Visualizing 28x28x10 probabilities (for each pixel for each class) would typically be an exercise in futility. However, by plotting them as images we get a quick overview. The astute reader probably noticed by now that these are some mean looking digits ...
###Code
import matplotlib.pyplot as plt
fig, figarr = plt.subplots(1, 10, figsize=(10, 10))
for i in range(10):
figarr[i].imshow(xcount[:, i].reshape((28, 28)).asnumpy(), cmap='hot')
figarr[i].axes.get_xaxis().set_visible(False)
figarr[i].axes.get_yaxis().set_visible(False)
plt.show()
print('Class probabilities', py)
###Output
_____no_output_____
###Markdown
Now we can compute the likelihoods of an image, given the model. This is statistician speak for $p(x | y)$, i.e. how likely it is to see a particular image under certain conditions (such as the label). Our Naive Bayes model which assumed that all pixels are independent tells us that$$p(\mathbf{x} | y) = \prod_{i} p(x_i | y)$$Using Bayes' rule, we can thus compute $p(y | \mathbf{x})$ via$$p(y | \mathbf{x}) = \frac{p(\mathbf{x} | y) p(y)}{\sum_{y'} p(\mathbf{x} | y')}$$Let's try this ...
###Code
# Get the first test item
data, label = mnist_test[0]
data = data.reshape((784,1))
# Compute the per pixel conditional probabilities
xprob = (px * data + (1-px) * (1-data))
# Take the product
xprob = xprob.prod(0) * py
print('Unnormalized Probabilities', xprob)
# Normalize
xprob = xprob / xprob.sum()
print('Normalized Probabilities', xprob)
###Output
Unnormalized Probabilities
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
<NDArray 10 @cpu(0)>
Normalized Probabilities
[nan nan nan nan nan nan nan nan nan nan]
<NDArray 10 @cpu(0)>
###Markdown
This went horribly wrong! To find out why, let's look at the per pixel probabilities. They're typically numbers between $0.001$ and $1$. We are multiplying $784$ of them. At this point it is worth mentioning that we are calculating these numbers on a computer, hence with a fixed range for the exponent. What happens is that we experience *numerical underflow*, i.e. multiplying all the small numbers leads to something even smaller until it is rounded down to zero. At that point we get division by zero with `nan` as a result.To fix this we use the fact that $\log a b = \log a + \log b$, i.e. we switch to summing logarithms. This will get us unnormalized probabilities in log-space. To normalize terms we use the fact that$$\frac{\exp(a)}{\exp(a) + \exp(b)} = \frac{\exp(a + c)}{\exp(a + c) + \exp(b + c)}$$In particular, we can pick $c = -\max(a,b)$, which ensures that at least one of the terms in the denominator is $1$.
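The underflow and the log-space fix can be reproduced with a few lines of plain NumPy (a standalone sketch, separate from the MXNet code that follows):
```python
import numpy as np

p_a = np.full(784, 0.05)                    # 784 per-pixel likelihoods for class a
p_b = np.full(784, 0.04)                    # ... and for class b
print(np.prod(p_a), np.prod(p_b))           # both underflow to 0.0 in float64

log_a, log_b = np.log(p_a).sum(), np.log(p_b).sum()
c = -max(log_a, log_b)                      # the shift c = -max(a, b) from above
post_a = np.exp(log_a + c) / (np.exp(log_a + c) + np.exp(log_b + c))
print(post_a)                               # ~1.0: class a dominates, no NaNs
```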
###Code
logpx = nd.log(px)
logpxneg = nd.log(1-px)
logpy = nd.log(py)
def bayespost(data):
# We need to incorporate the prior probability p(y) since p(y|x) is
# proportional to p(x|y) p(y)
logpost = logpy.copy()
logpost += (logpx * data + logpxneg * (1-data)).sum(0)
# Normalize to prevent overflow or underflow by subtracting the largest
# value
logpost -= nd.max(logpost)
# Compute the softmax using logpx
post = nd.exp(logpost).asnumpy()
post /= np.sum(post)
return post
fig, figarr = plt.subplots(2, 10, figsize=(10, 3))
# Show 10 images
ctr = 0
for data, label in mnist_test:
x = data.reshape((784,1))
y = int(label)
post = bayespost(x)
# Bar chart and image of digit
figarr[1, ctr].bar(range(10), post)
figarr[1, ctr].axes.get_yaxis().set_visible(False)
figarr[0, ctr].imshow(x.reshape((28, 28)).asnumpy(), cmap='hot')
figarr[0, ctr].axes.get_xaxis().set_visible(False)
figarr[0, ctr].axes.get_yaxis().set_visible(False)
ctr += 1
if ctr == 10:
break
plt.show()
###Output
_____no_output_____
###Markdown
As we can see, this classifier works pretty well in many cases. However, the second last digit shows that it can be both incompetent and overly confident of its incorrect estimates. That is, even if it is horribly wrong, it generates probabilities close to 1 or 0. Not a classifier we should use very much nowadays any longer. To see how well it performs overall, let's compute the overall accuracy of the classifier.
###Code
# Initialize counter
ctr = 0
err = 0
for data, label in mnist_test:
ctr += 1
x = data.reshape((784,1))
y = int(label)
post = bayespost(x)
if (post[y] < post.max()):
err += 1
print('Naive Bayes has an error rate of', err/ctr)
###Output
Naive Bayes has an error rate of 0.1574
|
Kaggle_Code_Notebooks/faceforensics++_baseline.ipynb | ###Markdown
NOTE: To run this, download this package from Kaggle...https://www.kaggle.com/robikscube/deepfakemodelspackagesAnd place into a folder titled 'input' within your working directory. Running FaceForensics++ in a Kaggle NotebookIn this notebook I test out the state of the art FaceForensics++ package on the provided dataset. Later in this notebook I modify the code slightly to predict for our dataset.This notebook imports from a dataset for:- dlib package wheel (needs to be installed for code to work)- other required packages not in the default kernel- pretrained models by FaceForensics++- The paper that was published can be found here: https://arxiv.org/pdf/1901.08971.pdf- The github repo is here: https://github.com/ondyari/FaceForensicsReference:```@inproceedings{roessler2019faceforensicspp, author = {Andreas R\"ossler and Davide Cozzolino and Luisa Verdoliva and Christian Riess and Justus Thies and Matthias Nie{\ss}ner}, title = {Face{F}orensics++: Learning to Detect Manipulated Facial Images}, booktitle= {International Conference on Computer Vision (ICCV)}, year = {2019}}```
###Code
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
import cv2
import seaborn as sns
from sklearn.metrics import log_loss
XCEPTION_MODEL = '../input/deepfakemodelspackages/xception-b5690688.pth'
%%time
# Install packages
!pip install ../input/deepfakemodelspackages/Pillow-6.2.1-cp36-cp36m-manylinux1_x86_64.whl -f ./ --no-index
!pip install ../input/deepfakemodelspackages/munch-2.5.0-py2.py3-none-any.whl -f ./ --no-index
!pip install ../input/deepfakemodelspackages/numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl -f ./ --no-index
!pip install ../input/deepfakemodelspackages/pretrainedmodels-0.7.4/pretrainedmodels-0.7.4/ -f ./ --no-index
!pip install ../input/deepfakemodelspackages/six-1.13.0-py2.py3-none-any.whl -f ./ --no-index
!pip install ../input/deepfakemodelspackages/torchvision-0.4.2-cp36-cp36m-manylinux1_x86_64.whl -f ./ --no-index
!pip install ../input/deepfakemodelspackages/tqdm-4.40.2-py2.py3-none-any.whl -f ./ --no-index
###Output
_____no_output_____
###Markdown
Install dlibBuilding takes roughly ~6 minutes
###Code
%%time
!pip install ../input/deepfakemodelspackages/dlib-19.19.0/dlib-19.19.0/ -f ./ --no-index
###Output
_____no_output_____
###Markdown
Copy in FaceForensics Code and Modify to run in kernel- Change the `models.py` file to reference our `xception-b5690688.pth` file.- Change the code to be provided the pytorch model and not the model path. This works better for our purposes.Change the line here:``` state_dict = torch.load( '/home/ondyari/.torch/models/xception-b5690688.pth') XCEPTION_MODEL)```
###Code
## xception.py
"""
Ported to pytorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)
@author: tstandley
Adapted by cadene
Creates an Xception Model as defined in:
Francois Chollet
Xception: Deep Learning with Depthwise Separable Convolutions
https://arxiv.org/pdf/1610.02357.pdf
This weights ported from the Keras implementation. Achieves the following performance on the validation set:
Loss:0.9173 Prec@1:78.892 Prec@5:94.292
REMEMBER to set your image size to 3x299x299 for both test and validation
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
"""
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from torch.nn import init
pretrained_settings = {
'xception': {
'imagenet': {
'url': 'http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth',
'input_space': 'RGB',
'input_size': [3, 299, 299],
'input_range': [0, 1],
'mean': [0.5, 0.5, 0.5],
'std': [0.5, 0.5, 0.5],
'num_classes': 1000,
'scale': 0.8975 # The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
}
}
}
class SeparableConv2d(nn.Module):
def __init__(self,in_channels,out_channels,kernel_size=1,stride=1,padding=0,dilation=1,bias=False):
super(SeparableConv2d,self).__init__()
self.conv1 = nn.Conv2d(in_channels,in_channels,kernel_size,stride,padding,dilation,groups=in_channels,bias=bias)
self.pointwise = nn.Conv2d(in_channels,out_channels,1,1,0,1,1,bias=bias)
def forward(self,x):
x = self.conv1(x)
x = self.pointwise(x)
return x
class Block(nn.Module):
def __init__(self,in_filters,out_filters,reps,strides=1,start_with_relu=True,grow_first=True):
super(Block, self).__init__()
if out_filters != in_filters or strides!=1:
self.skip = nn.Conv2d(in_filters,out_filters,1,stride=strides, bias=False)
self.skipbn = nn.BatchNorm2d(out_filters)
else:
self.skip=None
self.relu = nn.ReLU(inplace=True)
rep=[]
filters=in_filters
if grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(in_filters,out_filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(out_filters))
filters = out_filters
for i in range(reps-1):
rep.append(self.relu)
rep.append(SeparableConv2d(filters,filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(filters))
if not grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(in_filters,out_filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(out_filters))
if not start_with_relu:
rep = rep[1:]
else:
rep[0] = nn.ReLU(inplace=False)
if strides != 1:
rep.append(nn.MaxPool2d(3,strides,1))
self.rep = nn.Sequential(*rep)
def forward(self,inp):
x = self.rep(inp)
if self.skip is not None:
skip = self.skip(inp)
skip = self.skipbn(skip)
else:
skip = inp
x+=skip
return x
class Xception(nn.Module):
"""
Xception optimized for the ImageNet dataset, as specified in
https://arxiv.org/pdf/1610.02357.pdf
"""
def __init__(self, num_classes=1000):
""" Constructor
Args:
num_classes: number of classes
"""
super(Xception, self).__init__()
self.num_classes = num_classes
self.conv1 = nn.Conv2d(3, 32, 3,2, 0, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32,64,3,bias=False)
self.bn2 = nn.BatchNorm2d(64)
#do relu here
self.block1=Block(64,128,2,2,start_with_relu=False,grow_first=True)
self.block2=Block(128,256,2,2,start_with_relu=True,grow_first=True)
self.block3=Block(256,728,2,2,start_with_relu=True,grow_first=True)
self.block4=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block5=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block6=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block7=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block8=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block9=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block10=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block11=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block12=Block(728,1024,2,2,start_with_relu=True,grow_first=False)
self.conv3 = SeparableConv2d(1024,1536,3,1,1)
self.bn3 = nn.BatchNorm2d(1536)
#do relu here
self.conv4 = SeparableConv2d(1536,2048,3,1,1)
self.bn4 = nn.BatchNorm2d(2048)
self.fc = nn.Linear(2048, num_classes)
# #------- init weights --------
# for m in self.modules():
# if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
# elif isinstance(m, nn.BatchNorm2d):
# m.weight.data.fill_(1)
# m.bias.data.zero_()
# #-----------------------------
def features(self, input):
x = self.conv1(input)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.block1(x)
x = self.block2(x)
x = self.block3(x)
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.relu(x)
x = self.conv4(x)
x = self.bn4(x)
return x
def logits(self, features):
x = self.relu(features)
x = F.adaptive_avg_pool2d(x, (1, 1))
x = x.view(x.size(0), -1)
x = self.last_linear(x)
return x
def forward(self, input):
x = self.features(input)
x = self.logits(x)
return x
def xception(num_classes=1000, pretrained='imagenet'):
model = Xception(num_classes=num_classes)
if pretrained:
settings = pretrained_settings['xception'][pretrained]
assert num_classes == settings['num_classes'], \
"num_classes should be {}, but is {}".format(settings['num_classes'], num_classes)
model = Xception(num_classes=num_classes)
model.load_state_dict(model_zoo.load_url(settings['url']))
model.input_space = settings['input_space']
model.input_size = settings['input_size']
model.input_range = settings['input_range']
model.mean = settings['mean']
model.std = settings['std']
# TODO: ugly
model.last_linear = model.fc
del model.fc
return model
## models.py
"""
Author: Andreas Rössler
"""
import os
import argparse
import torch
# import pretrainedmodels
import torch.nn as nn
import torch.nn.functional as F
# from network.xception import xception
import math
import torchvision
def return_pytorch04_xception(pretrained=True):
# Raises warning "src not broadcastable to dst" but thats fine
model = xception(pretrained=False)
if pretrained:
# Load model in torch 0.4+
model.fc = model.last_linear
del model.last_linear
state_dict = torch.load(
#'/home/ondyari/.torch/models/xception-b5690688.pth')
XCEPTION_MODEL)
for name, weights in state_dict.items():
if 'pointwise' in name:
state_dict[name] = weights.unsqueeze(-1).unsqueeze(-1)
model.load_state_dict(state_dict)
model.last_linear = model.fc
del model.fc
return model
class TransferModel(nn.Module):
"""
Simple transfer learning model that takes an imagenet pretrained model with
a fc layer as base model and retrains a new fc layer for num_out_classes
"""
def __init__(self, modelchoice, num_out_classes=2, dropout=0.0):
super(TransferModel, self).__init__()
self.modelchoice = modelchoice
if modelchoice == 'xception':
self.model = return_pytorch04_xception()
# Replace fc
num_ftrs = self.model.last_linear.in_features
if not dropout:
self.model.last_linear = nn.Linear(num_ftrs, num_out_classes)
else:
print('Using dropout', dropout)
self.model.last_linear = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(num_ftrs, num_out_classes)
)
elif modelchoice == 'resnet50' or modelchoice == 'resnet18':
if modelchoice == 'resnet50':
self.model = torchvision.models.resnet50(pretrained=True)
if modelchoice == 'resnet18':
self.model = torchvision.models.resnet18(pretrained=True)
# Replace fc
num_ftrs = self.model.fc.in_features
if not dropout:
self.model.fc = nn.Linear(num_ftrs, num_out_classes)
else:
self.model.fc = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(num_ftrs, num_out_classes)
)
else:
raise Exception('Choose valid model, e.g. resnet50')
def set_trainable_up_to(self, boolean, layername="Conv2d_4a_3x3"):
"""
Freezes all layers below a specific layer and sets the following layers
to true if boolean else only the fully connected final layer
:param boolean:
:param layername: depends on network, for inception e.g. Conv2d_4a_3x3
:return:
"""
# Stage-1: freeze all the layers
if layername is None:
for i, param in self.model.named_parameters():
param.requires_grad = True
return
else:
for i, param in self.model.named_parameters():
param.requires_grad = False
if boolean:
# Make all layers following the layername layer trainable
ct = []
found = False
for name, child in self.model.named_children():
if layername in ct:
found = True
for params in child.parameters():
params.requires_grad = True
ct.append(name)
if not found:
raise Exception('Layer {} not found, cannot finetune!'.format(
layername))
else:
if self.modelchoice == 'xception':
# Make fc trainable
for param in self.model.last_linear.parameters():
param.requires_grad = True
else:
# Make fc trainable
for param in self.model.fc.parameters():
param.requires_grad = True
def forward(self, x):
x = self.model(x)
return x
def model_selection(modelname, num_out_classes,
dropout=None):
"""
:param modelname:
:return: model, image size, pretraining<yes/no>, input_list
"""
if modelname == 'xception':
return TransferModel(modelchoice='xception',
num_out_classes=num_out_classes), 299, \
True, ['image'], None
elif modelname == 'resnet18':
return TransferModel(modelchoice='resnet18', dropout=dropout,
num_out_classes=num_out_classes), \
224, True, ['image'], None
else:
raise NotImplementedError(modelname)
# if __name__ == '__main__':
# model, image_size, *_ = model_selection('resnet18', num_out_classes=2)
# print(model)
# model = model.cuda()
# from torchsummary import summary
# input_s = (3, image_size, image_size)
# print(summary(model, input_s))
## transform.py
"""
Author: Andreas Rössler
"""
from torchvision import transforms
xception_default_data_transforms = {
'train': transforms.Compose([
transforms.Resize((299, 299)),
transforms.ToTensor(),
transforms.Normalize([0.5]*3, [0.5]*3)
]),
'val': transforms.Compose([
transforms.Resize((299, 299)),
transforms.ToTensor(),
transforms.Normalize([0.5] * 3, [0.5] * 3)
]),
'test': transforms.Compose([
transforms.Resize((299, 299)),
transforms.ToTensor(),
transforms.Normalize([0.5] * 3, [0.5] * 3)
]),
}
## detect_from_video.py
"""
Evaluates a folder of video files or a single file with a xception binary
classification network.
Usage:
python detect_from_video.py
-i <folder with video files or path to video file>
-m <path to model file>
-o <path to output folder, will write one or multiple output videos there>
Author: Andreas Rössler
"""
import os
import argparse
from os.path import join
import cv2
import dlib
import torch
import torch.nn as nn
from PIL import Image as pil_image
from tqdm.notebook import tqdm
# from network.models import model_selection
# from dataset.transform import xception_default_data_transforms
def get_boundingbox(face, width, height, scale=1.3, minsize=None):
"""
Expects a dlib face to generate a quadratic bounding box.
:param face: dlib face class
:param width: frame width
:param height: frame height
:param scale: bounding box size multiplier to get a bigger face region
:param minsize: set minimum bounding box size
:return: x, y, bounding_box_size in opencv form
"""
x1 = face.left()
y1 = face.top()
x2 = face.right()
y2 = face.bottom()
size_bb = int(max(x2 - x1, y2 - y1) * scale)
if minsize:
if size_bb < minsize:
size_bb = minsize
center_x, center_y = (x1 + x2) // 2, (y1 + y2) // 2
# Check for out of bounds, x-y top left corner
x1 = max(int(center_x - size_bb // 2), 0)
y1 = max(int(center_y - size_bb // 2), 0)
# Check for too big bb size for given x, y
size_bb = min(width - x1, size_bb)
size_bb = min(height - y1, size_bb)
return x1, y1, size_bb
def preprocess_image(image, cuda=True):
"""
Preprocesses the image such that it can be fed into our network.
During this process we invoke PIL to cast it into a PIL image.
:param image: numpy image in opencv form (i.e., BGR and of shape [height, width, 3])
:return: pytorch tensor of shape [1, 3, image_size, image_size], not
necessarily casted to cuda
"""
# Revert from BGR
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Preprocess using the preprocessing function used during training and
# casting it to PIL image
preprocess = xception_default_data_transforms['test']
preprocessed_image = preprocess(pil_image.fromarray(image))
# Add first dimension as the network expects a batch
preprocessed_image = preprocessed_image.unsqueeze(0)
if cuda:
preprocessed_image = preprocessed_image.cuda()
return preprocessed_image
def predict_with_model(image, model, post_function=nn.Softmax(dim=1),
cuda=True):
"""
Predicts the label of an input image. Preprocesses the input image and
casts it to cuda if required
:param image: numpy image
:param model: torch model with linear layer at the end
:param post_function: e.g., softmax
:param cuda: enables cuda, must be the same parameter as the model
:return: prediction (1 = fake, 0 = real)
"""
# Preprocess
preprocessed_image = preprocess_image(image, cuda)
# Model prediction
output = model(preprocessed_image)
output = post_function(output)
# Cast to desired
_, prediction = torch.max(output, 1) # argmax
prediction = float(prediction.cpu().numpy())
return int(prediction), output
def test_full_image_network(video_path, model, output_path,
start_frame=0, end_frame=None, cuda=True):
"""
Reads a video and evaluates a subset of frames with a detection network
that takes in a full frame. Outputs are only given if a face is present
and the face is highlighted using dlib.
:param video_path: path to video file
:param model: preloaded torch model (should expect the full sized image)
:param output_path: path where the output video is stored
:param start_frame: first frame to evaluate
:param end_frame: last frame to evaluate
:param cuda: enable cuda
:return:
# Modified to take in the loaded model instead of the model path
"""
#print('Starting: {}'.format(video_path))
# Read and write
reader = cv2.VideoCapture(video_path)
video_fn = video_path.split('/')[-1].split('.')[0]+'.avi'
os.makedirs(output_path, exist_ok=True)
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
fps = reader.get(cv2.CAP_PROP_FPS)
num_frames = int(reader.get(cv2.CAP_PROP_FRAME_COUNT))
writer = None
# Face detector
face_detector = dlib.get_frontal_face_detector()
# Load model
# model, *_ = model_selection(modelname='xception', num_out_classes=2)
# if model_path is not None:
# model = torch.load(model_path)
# print('Model found in {}'.format(model_path))
# else:
# print('No model found, initializing random model.')
# if cuda:
# model = model.cuda()
# Text variables
font_face = cv2.FONT_HERSHEY_SIMPLEX
thickness = 2
font_scale = 1
# Frame numbers and length of output video
frame_num = 0
assert start_frame < num_frames - 1
end_frame = end_frame if end_frame else num_frames
pbar = tqdm(total=end_frame-start_frame)
while reader.isOpened():
_, image = reader.read()
if image is None:
break
frame_num += 1
if frame_num < start_frame:
continue
pbar.update(1)
# Image size
# print('getting image size')
height, width = image.shape[:2]
# Init output writer
# print('init output writer')
if writer is None:
writer = cv2.VideoWriter(join(output_path, video_fn), fourcc, fps,
(height, width)[::-1])
# 2. Detect with dlib
# print('detect with dlib')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_detector(gray, 1)
if len(faces):
# For now only take biggest face
face = faces[0]
# --- Prediction ---------------------------------------------------
# Face crop with dlib and bounding box scale enlargement
x, y, size = get_boundingbox(face, width, height)
cropped_face = image[y:y+size, x:x+size]
# Actual prediction using our model
prediction, output = predict_with_model(cropped_face, model,
cuda=cuda)
# ------------------------------------------------------------------
# Text and bb
x = face.left()
y = face.top()
w = face.right() - x
h = face.bottom() - y
label = 'fake' if prediction == 1 else 'real'
color = (0, 255, 0) if prediction == 0 else (0, 0, 255)
output_list = ['{0:.2f}'.format(float(x)) for x in
output.detach().cpu().numpy()[0]]
cv2.putText(image, str(output_list)+'=>'+label, (x, y+h+30),
font_face, font_scale,
color, thickness, 2)
# draw box over face
cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
if frame_num >= end_frame:
break
# Show
# print('show result')
# cv2.imshow('test', image)
# cv2.waitKey(33) # About 30 fps
writer.write(image)
pbar.close()
if writer is not None:
writer.release()
#print('Finished! Output saved under {}'.format(output_path))
else:
pass
#print('Input video file was empty')
return
###Output
_____no_output_____
###Markdown
Use the `classification` package per the github instructions. Reference: https://github.com/ondyari/FaceForensics/tree/master/classification. Now that we have the code available to us in the kernel, we will use the instructions as provided on the github page. Write Files to disk for importing model - Since we are importing pretrained models, the code expects a `network` package with some files from the github repo - We will write these files to disk so the model import will run correctly.
###Code
!mkdir network
%%writefile network/__init__.py
# init
%%writefile network/xception.py
"""
Ported to pytorch thanks to [tstandley](https://github.com/tstandley/Xception-PyTorch)
@author: tstandley
Adapted by cadene
Creates an Xception Model as defined in:
Francois Chollet
Xception: Deep Learning with Depthwise Separable Convolutions
https://arxiv.org/pdf/1610.02357.pdf
This weights ported from the Keras implementation. Achieves the following performance on the validation set:
Loss:0.9173 Prec@1:78.892 Prec@5:94.292
REMEMBER to set your image size to 3x299x299 for both test and validation
normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5],
std=[0.5, 0.5, 0.5])
The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
"""
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.model_zoo as model_zoo
from torch.nn import init
pretrained_settings = {
'xception': {
'imagenet': {
'url': 'http://data.lip6.fr/cadene/pretrainedmodels/xception-b5690688.pth',
'input_space': 'RGB',
'input_size': [3, 299, 299],
'input_range': [0, 1],
'mean': [0.5, 0.5, 0.5],
'std': [0.5, 0.5, 0.5],
'num_classes': 1000,
'scale': 0.8975 # The resize parameter of the validation transform should be 333, and make sure to center crop at 299x299
}
}
}
class SeparableConv2d(nn.Module):
def __init__(self,in_channels,out_channels,kernel_size=1,stride=1,padding=0,dilation=1,bias=False):
super(SeparableConv2d,self).__init__()
self.conv1 = nn.Conv2d(in_channels,in_channels,kernel_size,stride,padding,dilation,groups=in_channels,bias=bias)
self.pointwise = nn.Conv2d(in_channels,out_channels,1,1,0,1,1,bias=bias)
def forward(self,x):
x = self.conv1(x)
x = self.pointwise(x)
return x
class Block(nn.Module):
def __init__(self,in_filters,out_filters,reps,strides=1,start_with_relu=True,grow_first=True):
super(Block, self).__init__()
if out_filters != in_filters or strides!=1:
self.skip = nn.Conv2d(in_filters,out_filters,1,stride=strides, bias=False)
self.skipbn = nn.BatchNorm2d(out_filters)
else:
self.skip=None
self.relu = nn.ReLU(inplace=True)
rep=[]
filters=in_filters
if grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(in_filters,out_filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(out_filters))
filters = out_filters
for i in range(reps-1):
rep.append(self.relu)
rep.append(SeparableConv2d(filters,filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(filters))
if not grow_first:
rep.append(self.relu)
rep.append(SeparableConv2d(in_filters,out_filters,3,stride=1,padding=1,bias=False))
rep.append(nn.BatchNorm2d(out_filters))
if not start_with_relu:
rep = rep[1:]
else:
rep[0] = nn.ReLU(inplace=False)
if strides != 1:
rep.append(nn.MaxPool2d(3,strides,1))
self.rep = nn.Sequential(*rep)
def forward(self,inp):
x = self.rep(inp)
if self.skip is not None:
skip = self.skip(inp)
skip = self.skipbn(skip)
else:
skip = inp
x+=skip
return x
class Xception(nn.Module):
"""
Xception optimized for the ImageNet dataset, as specified in
https://arxiv.org/pdf/1610.02357.pdf
"""
def __init__(self, num_classes=1000):
""" Constructor
Args:
num_classes: number of classes
"""
super(Xception, self).__init__()
self.num_classes = num_classes
self.conv1 = nn.Conv2d(3, 32, 3,2, 0, bias=False)
self.bn1 = nn.BatchNorm2d(32)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32,64,3,bias=False)
self.bn2 = nn.BatchNorm2d(64)
#do relu here
self.block1=Block(64,128,2,2,start_with_relu=False,grow_first=True)
self.block2=Block(128,256,2,2,start_with_relu=True,grow_first=True)
self.block3=Block(256,728,2,2,start_with_relu=True,grow_first=True)
self.block4=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block5=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block6=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block7=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block8=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block9=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block10=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block11=Block(728,728,3,1,start_with_relu=True,grow_first=True)
self.block12=Block(728,1024,2,2,start_with_relu=True,grow_first=False)
self.conv3 = SeparableConv2d(1024,1536,3,1,1)
self.bn3 = nn.BatchNorm2d(1536)
#do relu here
self.conv4 = SeparableConv2d(1536,2048,3,1,1)
self.bn4 = nn.BatchNorm2d(2048)
self.fc = nn.Linear(2048, num_classes)
# #------- init weights --------
# for m in self.modules():
# if isinstance(m, nn.Conv2d):
# n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
# m.weight.data.normal_(0, math.sqrt(2. / n))
# elif isinstance(m, nn.BatchNorm2d):
# m.weight.data.fill_(1)
# m.bias.data.zero_()
# #-----------------------------
def features(self, input):
x = self.conv1(input)
x = self.bn1(x)
x = self.relu(x)
x = self.conv2(x)
x = self.bn2(x)
x = self.relu(x)
x = self.block1(x)
x = self.block2(x)
x = self.block3(x)
x = self.block4(x)
x = self.block5(x)
x = self.block6(x)
x = self.block7(x)
x = self.block8(x)
x = self.block9(x)
x = self.block10(x)
x = self.block11(x)
x = self.block12(x)
x = self.conv3(x)
x = self.bn3(x)
x = self.relu(x)
x = self.conv4(x)
x = self.bn4(x)
return x
def logits(self, features):
x = self.relu(features)
x = F.adaptive_avg_pool2d(x, (1, 1))
x = x.view(x.size(0), -1)
x = self.last_linear(x)
return x
def forward(self, input):
x = self.features(input)
x = self.logits(x)
return x
def xception(num_classes=1000, pretrained='imagenet'):
model = Xception(num_classes=num_classes)
if pretrained:
settings = pretrained_settings['xception'][pretrained]
assert num_classes == settings['num_classes'], \
"num_classes should be {}, but is {}".format(settings['num_classes'], num_classes)
model = Xception(num_classes=num_classes)
model.load_state_dict(model_zoo.load_url(settings['url']))
model.input_space = settings['input_space']
model.input_size = settings['input_size']
model.input_range = settings['input_range']
model.mean = settings['mean']
model.std = settings['std']
# TODO: ugly
model.last_linear = model.fc
del model.fc
return model
%%writefile network/models.py
"""
Author: Andreas Rössler
"""
import os
import argparse
import torch
#import pretrainedmodels
import torch.nn as nn
import torch.nn.functional as F
from network.xception import xception
import math
import torchvision
def return_pytorch04_xception(pretrained=True):
# Raises warning "src not broadcastable to dst" but thats fine
model = xception(pretrained=False)
if pretrained:
# Load model in torch 0.4+
model.fc = model.last_linear
del model.last_linear
state_dict = torch.load(
'/home/ondyari/.torch/models/xception-b5690688.pth')
for name, weights in state_dict.items():
if 'pointwise' in name:
state_dict[name] = weights.unsqueeze(-1).unsqueeze(-1)
model.load_state_dict(state_dict)
model.last_linear = model.fc
del model.fc
return model
class TransferModel(nn.Module):
"""
Simple transfer learning model that takes an imagenet pretrained model with
a fc layer as base model and retrains a new fc layer for num_out_classes
"""
def __init__(self, modelchoice, num_out_classes=2, dropout=0.0):
super(TransferModel, self).__init__()
self.modelchoice = modelchoice
if modelchoice == 'xception':
self.model = return_pytorch04_xception()
# Replace fc
num_ftrs = self.model.last_linear.in_features
if not dropout:
self.model.last_linear = nn.Linear(num_ftrs, num_out_classes)
else:
print('Using dropout', dropout)
self.model.last_linear = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(num_ftrs, num_out_classes)
)
elif modelchoice == 'resnet50' or modelchoice == 'resnet18':
if modelchoice == 'resnet50':
self.model = torchvision.models.resnet50(pretrained=True)
if modelchoice == 'resnet18':
self.model = torchvision.models.resnet18(pretrained=True)
# Replace fc
num_ftrs = self.model.fc.in_features
if not dropout:
self.model.fc = nn.Linear(num_ftrs, num_out_classes)
else:
self.model.fc = nn.Sequential(
nn.Dropout(p=dropout),
nn.Linear(num_ftrs, num_out_classes)
)
else:
raise Exception('Choose valid model, e.g. resnet50')
def set_trainable_up_to(self, boolean, layername="Conv2d_4a_3x3"):
"""
Freezes all layers below a specific layer and sets the following layers
to true if boolean else only the fully connected final layer
:param boolean:
:param layername: depends on network, for inception e.g. Conv2d_4a_3x3
:return:
"""
# Stage-1: freeze all the layers
if layername is None:
for i, param in self.model.named_parameters():
param.requires_grad = True
return
else:
for i, param in self.model.named_parameters():
param.requires_grad = False
if boolean:
# Make all layers following the layername layer trainable
ct = []
found = False
for name, child in self.model.named_children():
if layername in ct:
found = True
for params in child.parameters():
params.requires_grad = True
ct.append(name)
if not found:
raise Exception('Layer not found, cant finetune!'.format(
layername))
else:
if self.modelchoice == 'xception':
# Make fc trainable
for param in self.model.last_linear.parameters():
param.requires_grad = True
else:
# Make fc trainable
for param in self.model.fc.parameters():
param.requires_grad = True
def forward(self, x):
x = self.model(x)
return x
def model_selection(modelname, num_out_classes,
dropout=None):
"""
:param modelname:
:return: model, image size, pretraining<yes/no>, input_list
"""
if modelname == 'xception':
return TransferModel(modelchoice='xception',
num_out_classes=num_out_classes), 299, \
True, ['image'], None
elif modelname == 'resnet18':
return TransferModel(modelchoice='resnet18', dropout=dropout,
num_out_classes=num_out_classes), \
224, True, ['image'], None
else:
raise NotImplementedError(modelname)
if __name__ == '__main__':
model, image_size, *_ = model_selection('resnet18', num_out_classes=2)
print(model)
model = model.cuda()
from torchsummary import summary
input_s = (3, image_size, image_size)
print(summary(model, input_s))
###Output
_____no_output_____
###Markdown
Pretrained Models from FaceForensics++. Pretrained models were downloaded from the referenced website here: http://kaldir.vc.in.tum.de/FaceForensics/models/faceforensics++_models.zip We can load the model files using torch like this on cpu: ```model = torch.load(model_path, map_location=torch.device('cpu'))``` or with gpu/cuda: ```model = torch.load(model_path); model = model.cuda()``` The authors provide different types of models. From Appendix 4 of the [paper](https://arxiv.org/pdf/1901.08971.pdf) we see the published accuracy. We have: - Full image models - Face_detected models - RAW, compressed 23, and compressed 40. Create a prediction function. - This function is provided a video file and runs `test_full_image_network` - Takes in the loaded model. - Saves the output avi file - Displays some of the frames of the output video
###Code
metadata = pd.read_json('../input/deepfake-detection-challenge/train_sample_videos/metadata.json').T
def predict_model(video_fn, model,
start_frame=0, end_frame=30,
plot_every_x_frames = 5):
"""
Given a video and model, starting frame and end frame.
Predict on all frames.
"""
fn = video_fn.split('.')[0]
label = metadata.loc[video_fn]['label']
original = metadata.loc[video_fn]['original']
video_path = f'../input/deepfake-detection-challenge/train_sample_videos/{video_fn}'
output_path = './'
test_full_image_network(video_path, model, output_path, start_frame=start_frame, end_frame=end_frame, cuda=False)  # forward the frame-range arguments instead of hard-coding them
# Read output
vidcap = cv2.VideoCapture(f'{fn}.avi')
success,image = vidcap.read()
count = 0
fig, axes = plt.subplots(3, 2, figsize=(20, 15))
axes = axes.flatten()
i = 0
while success:
# Show every xth frame
if count % plot_every_x_frames == 0:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
axes[i].imshow(image)
axes[i].set_title(f'{fn} - frame {count} - true label: {label}')
axes[i].xaxis.set_visible(False)
axes[i].yaxis.set_visible(False)
i += 1
success,image = vidcap.read()
count += 1
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Testing Full Image Models full_raw.p (full resolution)
###Code
model_path = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/full/xception/full_raw.p'
model = torch.load(model_path, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model)
predict_model('crezycjqyk.mp4', model)
predict_model('ebchwmwayp.mp4', model)
###Output
_____no_output_____
###Markdown
full_c40.p model
###Code
model_path_full_c40 = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/full/xception/full_c40.p'
model_full_c40 = torch.load(model_path_full_c40, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model_full_c40)
predict_model('crezycjqyk.mp4', model_full_c40)
predict_model('ebchwmwayp.mp4', model_full_c40)
###Output
_____no_output_____
###Markdown
full_c23.p model
###Code
model_path_full23 = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/full/xception/full_c23.p'
model_full_c23 = torch.load(model_path_full23, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model_full_c23)
predict_model('crezycjqyk.mp4', model_full_c23)
predict_model('ebchwmwayp.mp4', model_full_c23)
###Output
_____no_output_____
###Markdown
Testing face_detection/xception models. These models are specific to detecting just the face. face_detection/xception all_raw.p model
###Code
model_path_face_allraw = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/face_detection/xception/all_raw.p'
model_face_allraw = torch.load(model_path_face_allraw, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model_face_allraw)
predict_model('crezycjqyk.mp4', model_face_allraw)
predict_model('ebchwmwayp.mp4', model_face_allraw)
###Output
_____no_output_____
###Markdown
face_detection/xception all_c40.p model
###Code
model_path_face_all_c40 = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/face_detection/xception/all_c40.p'
model_face_all_c40 = torch.load(model_path_face_all_c40, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model_face_all_c40)
predict_model('crezycjqyk.mp4', model_face_all_c40)
predict_model('ebchwmwayp.mp4', model_face_all_c40)
###Output
_____no_output_____
###Markdown
face_detection/xception all_c23.p model
###Code
model, *_ = model_selection(modelname='xception', num_out_classes=2)
model_path_face_allc23 = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/face_detection/xception/all_c23.p'
model_face_all_c23 = torch.load(model_path_face_allc23, map_location=torch.device('cpu'))
predict_model('bbhtdfuqxq.mp4', model_face_all_c23)
predict_model('crezycjqyk.mp4', model_face_all_c23)
predict_model('ebchwmwayp.mp4', model_face_all_c23)
###Output
_____no_output_____
###Markdown
Validate Predictions on train set - Predict for a subset of frames per video - Average the prediction of these frames to make a single prediction - Test for some FAKE and REAL videos - Return the maximum, minimum and average "fake" prediction of all the frames sampled - If unable to detect a face, predict 0.5
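As a quick illustration of the aggregation this section uses (the probabilities here are toy numbers, not taken from any video), the per-frame "fake" probabilities are reduced with mean/min/max, with 0.5 as the fallback when no face is found:
```
import numpy as np

# hypothetical per-frame "fake" probabilities for one video
frame_probs = [0.91, 0.87, 0.95, 0.40]

if len(frame_probs) == 0:
    # no face detected in any sampled frame -> neutral prediction
    mean_p = min_p = max_p = 0.5
else:
    mean_p = float(np.mean(frame_probs))  # single averaged prediction for the video
    min_p = float(np.min(frame_probs))
    max_p = float(np.max(frame_probs))

print(mean_p, min_p, max_p)  # 0.7825 0.4 0.95
```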
###Code
def video_file_frame_pred(video_path, model,
start_frame=0, end_frame=300,
cuda=True, n_frames=5):
"""
Predict and give result as numpy array
"""
pred_frames = [int(round(x)) for x in np.linspace(start_frame, end_frame, n_frames)]
predictions = []
outputs = []
# print('Starting: {}'.format(video_path))
# Read and write
reader = cv2.VideoCapture(video_path)
video_fn = video_path.split('/')[-1].split('.')[0]+'.avi'
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
fps = reader.get(cv2.CAP_PROP_FPS)
num_frames = int(reader.get(cv2.CAP_PROP_FRAME_COUNT))
writer = None
# Face detector
face_detector = dlib.get_frontal_face_detector()
# Text variables
font_face = cv2.FONT_HERSHEY_SIMPLEX
thickness = 2
font_scale = 1
# Frame numbers and length of output video
frame_num = 0
assert start_frame < num_frames - 1
end_frame = end_frame if end_frame else num_frames
while reader.isOpened():
_, image = reader.read()
if image is None:
break
frame_num += 1
if frame_num in pred_frames:
height, width = image.shape[:2]
# 2. Detect with dlib
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_detector(gray, 1)
if len(faces):
# For now only take biggest face
face = faces[0]
# --- Prediction ---------------------------------------------------
# Face crop with dlib and bounding box scale enlargement
x, y, size = get_boundingbox(face, width, height)
cropped_face = image[y:y+size, x:x+size]
# Actual prediction using our model
prediction, output = predict_with_model(cropped_face, model,
cuda=cuda)
predictions.append(prediction)
outputs.append(output)
# ------------------------------------------------------------------
if frame_num >= end_frame:
break
# Figure out how to do this with torch
preds_np = [x.detach().cpu().numpy()[0][1] for x in outputs]
if len(preds_np) == 0:
return predictions, outputs, 0.5, 0.5, 0.5
try:
mean_pred = np.mean(preds_np)
except:
# couldnt find faces
mean_pred = 0.5
min_pred = np.min(preds_np)
max_pred = np.max(preds_np)
return predictions, outputs, mean_pred, min_pred, max_pred
###Output
_____no_output_____
###Markdown
Because the face_detection/xception/all_c23.p model seemed to be returning the best results in our testing, we will use it for running the predictions below.
###Code
torch.nn.Module.dump_patches = True
model_path_23 = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/face_detection/xception/all_c23.p'
model_23 = torch.load(model_path_23, map_location=torch.device('cpu'))
model_path_raw = '../input/deepfakemodelspackages/faceforensics_models/faceforensics++_models_subset/face_detection/xception/all_raw.p'
model_raw = torch.load(model_path_raw, map_location=torch.device('cpu'))
# Read metadata
metadata = pd.read_json('../input/deepfake-detection-challenge/train_sample_videos/metadata.json').T
# Predict Fake
for video_fn in tqdm(metadata.query('label == "FAKE"').sample(77).index):
video_path = f'../input/deepfake-detection-challenge/train_sample_videos/{video_fn}'
predictions, outputs, mean_pred, min_pred, max_pred = video_file_frame_pred(video_path, model_23, n_frames=4, cuda=False)
metadata.loc[video_fn, 'avg_pred_c23'] = mean_pred
metadata.loc[video_fn, 'min_pred_c23'] = min_pred
metadata.loc[video_fn, 'max_pred_c23'] = max_pred
predictions, outputs, mean_pred, min_pred, max_pred = video_file_frame_pred(video_path, model_raw, n_frames=4, cuda=False)
metadata.loc[video_fn, 'avg_pred_raw'] = mean_pred
metadata.loc[video_fn, 'min_pred_raw'] = min_pred
metadata.loc[video_fn, 'max_pred_raw'] = max_pred
# Predict Real
for video_fn in tqdm(metadata.query('label == "REAL"').sample(77).index):
video_path = f'../input/deepfake-detection-challenge/train_sample_videos/{video_fn}'
predictions, outputs, mean_pred, min_pred, max_pred = video_file_frame_pred(video_path, model_23, n_frames=4, cuda=False)
metadata.loc[video_fn, 'avg_pred_c23'] = mean_pred
metadata.loc[video_fn, 'min_pred_c23'] = min_pred
metadata.loc[video_fn, 'max_pred_c23'] = max_pred
predictions, outputs, mean_pred, min_pred, max_pred = video_file_frame_pred(video_path, model_raw, n_frames=4, cuda=False)
metadata.loc[video_fn, 'avg_pred_raw'] = mean_pred
metadata.loc[video_fn, 'min_pred_raw'] = min_pred
metadata.loc[video_fn, 'max_pred_raw'] = max_pred
###Output
_____no_output_____
###Markdown
Compute estimated score for 154 training samples
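For context on why clipped variants are scored below: log loss penalizes confident wrong answers very heavily, so bounding predictions away from 0 and 1 caps the worst-case penalty per sample. A small sketch with toy numbers (not from this dataset):
```
import numpy as np

def single_log_loss(y_true, p):
    # log loss for a single sample, with the usual numerical clipping
    p = np.clip(p, 1e-15, 1 - 1e-15)
    return -(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(single_log_loss(1, 0.01))                     # ~4.61: confident and wrong
print(single_log_loss(1, np.clip(0.01, 0.4, 1.0)))  # ~0.92: clipping to [0.4, 1] caps the damage
```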
###Code
preds_df = metadata.dropna(subset=['avg_pred_raw']).copy()
preds_df['label_binary'] = 0
preds_df.loc[preds_df['label'] == "FAKE", 'label_binary'] = 1
preds_df[['min_pred_c23','max_pred_c23',
'min_pred_raw','max_pred_raw']] = preds_df[['min_pred_c23','max_pred_c23',
'min_pred_raw','max_pred_raw']].fillna(0.5)
preds_df['naive_pred'] = 0.5
score_avg23 = log_loss(preds_df['label_binary'], preds_df['avg_pred_c23'])
score_min23 = log_loss(preds_df['label_binary'], preds_df['min_pred_c23'])
score_max23 = log_loss(preds_df['label_binary'], preds_df['max_pred_c23'])
score_avgraw = log_loss(preds_df['label_binary'], preds_df['avg_pred_raw'])
score_minraw = log_loss(preds_df['label_binary'], preds_df['min_pred_raw'])
score_maxraw = log_loss(preds_df['label_binary'], preds_df['max_pred_raw'])
score_naive = log_loss(preds_df['label_binary'], preds_df['naive_pred'])
preds_df['max_pred_clipped'] = preds_df['max_pred_c23'].clip(0.4, 1)
score_max_clipped = log_loss(preds_df['label_binary'], preds_df['max_pred_clipped'])
preds_df['max_pred_clipped_raw'] = preds_df['max_pred_raw'].clip(0.4, 1)
score_max_clipped_raw = log_loss(preds_df['label_binary'], preds_df['max_pred_clipped_raw'])
print('Score using average prediction of all frames all_c23.p: {:0.4f}'.format(score_avg23))
print('Score using minimum prediction of all frames all_c23.p: {:0.4f}'.format(score_min23))
print('Score using maximum prediction of all frames all_c23.p: {:0.4f}'.format(score_max23))
print('Score using 0.5 prediction of all frames: {:0.4f}'.format(score_naive))
print('Score using maximum clipped prediction of all frames: {:0.4f}'.format(score_max_clipped))
print('Score using average prediction of all frames all_raw.p: {:0.4f}'.format(score_avgraw))
print('Score using minimum prediction of all frames all_raw.p: {:0.4f}'.format(score_minraw))
print('Score using maximum prediction of all frames all_raw.p: {:0.4f}'.format(score_maxraw))
print('Score using maximum clipped prediction of all frames all_raw.p: {:0.4f}'.format(score_max_clipped_raw))
###Output
_____no_output_____
###Markdown
Plot the Average vs Max Prediction Probability - Fake vs Real. The model appears to perform fairly poorly but shows that it is picking up on some signal. c23 model results
###Code
fig, ax = plt.subplots(1,1, figsize=(10, 10))
sns.scatterplot(x='avg_pred_c23', y='max_pred_c23', data=metadata.dropna(subset=['avg_pred_c23']), hue='label')
plt.show()
###Output
_____no_output_____
###Markdown
raw model results
###Code
fig, ax = plt.subplots(1,1, figsize=(10, 10))
sns.scatterplot(x='avg_pred_raw', y='max_pred_raw', data=metadata.dropna(subset=['avg_pred_raw']), hue='label')
plt.show()
for i, d in metadata.groupby('label'):
d['avg_pred_c23'].plot(kind='hist', figsize=(15, 5), bins=20, alpha=0.8, title='Average Prediction distribution c23')
plt.legend(['FAKE','REAL'])
plt.show()
for i, d in metadata.groupby('label'):
d['max_pred_c23'].plot(kind='hist', figsize=(15, 5), bins=20, title='Max Prediction distribution c23', alpha=0.8)
plt.legend(['FAKE','REAL'])
plt.show()
for i, d in metadata.groupby('label'):
d['avg_pred_raw'].plot(kind='hist',
figsize=(15, 5),
bins=20,
alpha=0.8,
title='Average Prediction distribution raw')
plt.legend(['FAKE','REAL'])
plt.show()
for i, d in metadata.groupby('label'):
d['max_pred_raw'].plot(kind='hist',
figsize=(15, 5),
bins=20,
title='Max Prediction distribution raw',
alpha=0.8)
plt.legend(['FAKE','REAL'])
plt.show()
metadata['max_pred_c23'] = metadata['max_pred_c23'].round(6)
metadata.dropna(subset=['max_pred_c23']).sort_values('label')
metadata['label_binary'] = 0
metadata.loc[metadata['label'] == "FAKE", 'label_binary'] = 1
###Output
_____no_output_____
###Markdown
Predict on test set.- Predict for a subset of frames per video to cut down on run time- Save the min/max and average prediction of all frames- Save results to sample submission CSV.
###Code
import pandas as pd
ss = pd.read_csv('../input/deepfake-detection-challenge/sample_submission.csv')
for video_fn in tqdm(ss['filename'].unique()):
video_path = f'../input/deepfake-detection-challenge/test_videos/{video_fn}'
predictions, outputs, mean_pred, min_pred, max_pred = video_file_frame_pred(video_path, model, n_frames=4, cuda=False)
ss.loc[ss['filename'] == video_fn, 'avg_pred'] = mean_pred
ss.loc[ss['filename'] == video_fn, 'min_pred'] = min_pred
ss.loc[ss['filename'] == video_fn, 'max_pred'] = max_pred
# Use the Maximum frame predicted as "Fake" to be the final prediction
ss['label'] = ss['max_pred'].fillna(0.5).clip(0.4, 0.8)
ss['label'].plot(kind='hist', figsize=(15, 5), bins=50)
plt.show()
ss[['filename','label']].to_csv('submission.csv', index=False)
ss.to_csv('submission_min_max.csv', index=False)
ss.head(20)
###Output
_____no_output_____ |
1 - Aulas/aula10/video.ipynb | ###Markdown
Step 1: build the image database
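One note on the feature built in this step: raw Hu moments span many orders of magnitude, so the `-sign(h) * log10(|h|)` expression in `extrair_caracteristica` below compresses them to a comparable scale while keeping the sign. A minimal sketch with made-up values:
```
import numpy as np

hu = np.array([2.1e-3, -4.7e-7, 9.9e-10])     # made-up Hu moment values
scaled = -np.sign(hu) * np.log10(np.abs(hu))  # same transform as in extrair_caracteristica
print(scaled)                                 # roughly [ 2.68, -6.33,  9.00 ]
```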
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import os
files_path = [os.path.abspath(x) for x in os.listdir('./') if x.endswith('.png')]
def extrair_caracteristica(img):
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, imgBinaria = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY) # selects only the pixels within the range [250, 255]
momentos = cv2.moments(imgBinaria)
momentosDeHu = cv2.HuMoments(momentos)
feature = (-np.sign(momentosDeHu) * np.log10(np.abs(momentosDeHu)))
return feature
base_teste = []
#extraindo as características das imagens na base de dados
for i in files_path:
diretorio, arquivo = os.path.split(i)
print(arquivo)
imagem = cv2.imread(arquivo)
carac = extrair_caracteristica(imagem)
classe = arquivo.split('-')
base_teste.append((carac, classe[0]))
print('base de dados construida')
###Output
1-.png
1-1.png
1-2.png
2-.png
2-1.png
2-2.png
3-.png
3-1.png
3-2.png
base de dados construida
###Markdown
Step 2: create a function called 'classifica' that receives the query vector (vetor_consulta) and returns the class it belongs to using K-NN
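As a side note on the implementation below, the explicit loop in `distancia` computes a plain Euclidean distance; a vectorized numpy equivalent (shown here only as a possible alternative, not what the notebook uses) would be:
```
import numpy as np

def distancia_np(a, b):
    # vectorized Euclidean distance, equivalent to the loop-based distancia below
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    return np.linalg.norm(a - b)
```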
###Code
from statistics import mode
def distancia(a, b):
M = len(a)
soma = 0
for i in range(M):
soma = soma + ((a[i]-b[i])**2)
return np.sqrt(soma)
def classifica(vetor_consulta):
d = []
for feat in base_teste:
vetor = feat[0]
dist = distancia(vetor, vetor_consulta)
d.append((dist, feat[1]))
e = sorted(d)
k1 = e[0][1]
k2 = e[1][1]
k3 = e[2][1]
a = mode([k1,k2,k3,9])
return a
###Output
_____no_output_____
###Markdown
Step 3: extract the object from the video
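One caveat about the capture loop below: it unpacks three values from `cv2.findContours`, which matches OpenCV 3.x, while OpenCV 4.x returns only `(contours, hierarchy)`. If that line raises a ValueError, a version-agnostic unpacking could look roughly like this (sketch, reusing the `imgBinaria` mask from the loop):
```
import cv2

res = cv2.findContours(imgBinaria, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
# OpenCV 3.x returns (image, contours, hierarchy); 4.x returns (contours, hierarchy)
contornos = res[1] if len(res) == 3 else res[0]
```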
###Code
def extrair_caracteristica_img_consulta(imgBinaria):
momentos = cv2.moments(imgBinaria)
momentosDeHu = cv2.HuMoments(momentos)
feature = (-np.sign(momentosDeHu) * np.log10(np.abs(momentosDeHu)))
return feature
import cv2
import matplotlib.pyplot as plt
cam = cv2.VideoCapture(0)
texto = "nao identificado"
while (True):
ret, frame = cam.read()
imgOriginal = frame.copy()
imgBinaria = cv2.inRange(frame, (40, 40, 120), (140, 140, 255))
#imgBinaria = cv2.erode(imgBinaria, elementoEstruturante, iterations = 2 )
cv2.imshow('mask', imgBinaria)
(lx, contornos, tree) = cv2.findContours(imgBinaria, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for cp in contornos:
if(len(cp) > 500):
x, y, w, h = cv2.boundingRect(cp)
# Drawing the rectangle on the image
cv2.rectangle(frame, (x,y), (x+w, y+h), (0, 255, 0), 5)
cv2.putText(frame, texto, (100,100), cv2.FONT_HERSHEY_PLAIN, fontScale=2, color=(0, 255, 255), thickness = 2)
vetor_consulta = extrair_caracteristica_img_consulta(imgBinaria[y:y+h, x:x+w])
a = classifica(vetor_consulta)
texto = "Classe: "+ a
# Drawing the contour
#cv2.drawContours(frame, cp, -1, (0, 0, 255), 2)
cv2.imshow('teste', frame)
k = cv2.waitKey(1) & 0xFF
if k == 27: # if the user pressed the ESC key
break
if k == 13: # if the user pressed the ENTER key
plt.imshow(imgOriginal[y:y+h, x:x+w, ::-1])
plt.show()
#plt.imshow(imgOriginal[y:y+h, x:x+w, ::-1])
cv2.destroyAllWindows()
cam.release()
###Output
_____no_output_____ |
notebook/1_Data_Understanding_Data_Description.ipynb | ###Markdown
Data Understanding
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd '/content/drive/My Drive/Colab Notebooks/PlatziMaster/Proyecto Agencia de Viajes/datasets'
!ls
###Output
/content/drive/My Drive/Colab Notebooks/PlatziMaster/Proyecto Agencia de Viajes/datasets
DataAcomodacion.csv funciones_auxiliares.ipynb train_data.txt
###Markdown
Dependencies
###Code
import pandas as pd
import numpy as np
import csv
pd.__version__
###Output
_____no_output_____
###Markdown
Data Loading
###Code
%run funciones_auxiliares.ipynb
f = open("train_data.txt", "r")
data_acomodation = load_data(f)
data_acomodation.head()
###Output
_____no_output_____
###Markdown
Data Description. Data Quantity: In most modeling techniques, data size involves a trade-off. Large data sets can produce more accurate models, but they can also increase processing time. Consider using a subset of the data. When taking notes for the final report, make sure to include size statistics for all data sets, and remember to account for both the number of records and the number of fields (attributes) when describing the data.
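For instance, if processing time becomes a concern, a reproducible working subset could be drawn as below (illustrative sketch only; subsampling is not part of the original analysis):
```
# draw a reproducible 20% working sample of the accommodation data
sample_df = data_acomodation.sample(frac=0.2, random_state=42)
print(len(sample_df), "of", len(data_acomodation), "rows")
```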
###Code
# dataset size
data_acomodation.shape
###Output
_____no_output_____
###Markdown
We can see that we have 9868 rows and 7 columns. Value Types: The data can include a variety of formats, such as numeric, categorical (string), or Boolean (true/false). Paying attention to the value type can help you avoid problems later during the modeling phase.
###Code
data_acomodation.info()
data_acomodation['id_viaje'] = data_acomodation['id_viaje'].astype(int)
data_acomodation['duracion_estadia'] = data_acomodation['duracion_estadia'].astype(int)
data_acomodation['edad'] = data_acomodation['edad'].astype(int, errors='ignore')
data_acomodation['genero'] = data_acomodation['genero'].astype('category')
data_acomodation['ninos'] = data_acomodation['ninos'].astype(int, errors = 'ignore')
###Output
_____no_output_____
###Markdown
Initially all the variables are of type `object`, so we perform the most appropriate conversion for the analysis. Encoding Schemes: Database values are often representations of characteristics such as gender or product type. For example, one data set may use H and M to represent male and female, while another may use the numeric values 1 and 2. Note any conflicting schemes in the data report.
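A hedged sketch of what harmonizing such a scheme could look like for the `genero` column; the numeric-to-letter mapping here is an assumption for illustration, not something observed in this dataset:
```
# hypothetical harmonization: map assumed numeric codes onto string labels
genero_map = {'1': 'H', '2': 'M'}
genero_armonizado = data_acomodation['genero'].astype(str).replace(genero_map)
print(genero_armonizado.unique())
```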
###Code
data_acomodation.nunique()
###Output
_____no_output_____ |
JHMDB/jhmdb_1D.ipynb | ###Markdown
Initialize the setting
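A quick sanity check on the configuration below: `feat_d = 91` equals the number of joint pairs for 14 joints (14 * 13 / 2 = 91), which presumably is the size of the pairwise-distance feature computed by `get_CG`; the pairwise-distance interpretation is an assumption, the arithmetic is simply:
```
joint_n = 14
print(joint_n * (joint_n - 1) // 2)  # 91 == C.feat_d, one value per joint pair
```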
###Code
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="1"
random.seed(1234)
class Config():
def __init__(self):
self.frame_l = 32 # the length of frames
self.joint_n = 14 # the number of joints
self.joint_d = 2 # the dimension of joints
self.clc_num = 3 # the number of class
self.feat_d = 91
self.filters = 64
self.data_dir = './JHMDB_processed/'
C = Config()
lr = 1e-4
class_weight = {0: 1, 1: 1.5, 2:1}
def data_generator(T,C,le):
X_0 = []
X_1 = []
Y = []
for i in tqdm(range(len(T['pose']))):
p = np.copy(T['pose'][i])
x_scaled = norm_scale(p[:, : , 0])
y_scaled = norm_scale(p[:, : , 1])
p = np.stack((x_scaled, y_scaled), axis=-1)
p = zoom(p,target_l=C.frame_l,joints_num=C.joint_n,joints_dim=C.joint_d)
label = np.zeros(C.clc_num)
label[le.transform(T['label'])[i]-1] = 1
M = get_CG(p,C)
X_0.append(M)
X_1.append(p)
Y.append(label)
X_0 = np.stack(X_0)
X_1 = np.stack(X_1)
Y = np.stack(Y)
return X_0,X_1,Y
###Output
_____no_output_____
###Markdown
Building the model
###Code
def poses_diff(x):
H, W = x.get_shape()[1],x.get_shape()[2]
x = tf.subtract(x[:,1:,...],x[:,:-1,...])
x = tf.image.resize_nearest_neighbor(x,size=[H.value,W.value],align_corners=False) # should not use corner alignment here
return x
def pose_motion(P,frame_l):
H, W = P.get_shape()[-2],P.get_shape()[-1]
P_diff_slow = Lambda(lambda x: poses_diff(x))(P)
P_diff_slow = Reshape((frame_l, H*W))(P_diff_slow)
P_fast = Lambda(lambda x: x[:,::2,...])(P)
P_diff_fast = Lambda(lambda x: poses_diff(x))(P_fast)
P_diff_fast = Reshape((int(frame_l/2),H*W))(P_diff_fast)
return P_diff_slow,P_diff_fast
def c1D(x,filters,kernel):
x = Conv1D(filters, kernel_size=kernel,padding='same',use_bias=False)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def block(x,filters):
x = c1D(x,filters,3)
x = c1D(x,filters,3)
return x
def d1D(x,filters):
x = Dense(filters,use_bias=False)(x)
x = BatchNormalization()(x)
x = LeakyReLU(alpha=0.2)(x)
return x
def build_FM(frame_l=32,joint_n=22,joint_d=2,feat_d=231,filters=16):
M = Input(shape=(frame_l,feat_d))
P = Input(shape=(frame_l,joint_n,joint_d))
diff_slow,diff_fast = pose_motion(P,frame_l)
x = c1D(M,filters*2,1)
x = SpatialDropout1D(0.1)(x)
x = c1D(x,filters,3)
x = SpatialDropout1D(0.1)(x)
x = c1D(x,filters,1)
x = MaxPooling1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x_d_slow = c1D(diff_slow,filters*2,1)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_slow = c1D(x_d_slow,filters,3)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_slow = c1D(x_d_slow,filters,1)
x_d_slow = MaxPool1D(2)(x_d_slow)
x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
x_d_fast = c1D(diff_fast,filters*2,1)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x_d_fast = c1D(x_d_fast,filters,3)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x_d_fast = c1D(x_d_fast,filters,1)
x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
x = concatenate([x,x_d_slow,x_d_fast])
x = block(x,filters*2)
x = MaxPool1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x = block(x,filters*4)
x = MaxPool1D(2)(x)
x = SpatialDropout1D(0.1)(x)
x = block(x,filters*8)
x = SpatialDropout1D(0.1)(x)
return Model(inputs=[M,P],outputs=x)
def build_DD_Net(C):
M = Input(name='M', shape=(C.frame_l,C.feat_d))
P = Input(name='P', shape=(C.frame_l,C.joint_n,C.joint_d))
FM = build_FM(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.filters)
x = FM([M,P])
x = GlobalMaxPool1D()(x)
x = d1D(x,128)
x = Dropout(0.1)(x)
x = d1D(x,128)
x = Dropout(0.1)(x)
x = Dense(C.clc_num, activation='softmax')(x)
######################Self-supervised part
model = Model(inputs=[M,P],outputs=x)
return model
# def build_FM(frame_l=32,joint_n=22,joint_d=2,feat_d=231,filters=16):
# M = Input(shape=(frame_l,feat_d))
# P = Input(shape=(frame_l,joint_n,joint_d))
# diff_slow,diff_fast = pose_motion(P,frame_l)
# x = c1D(M,filters*2,1)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# x = c1D(x,filters,3)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# x = c1D(x,filters,1)
# x = MaxPooling1D(2)(x)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# x_d_slow = c1D(diff_slow,filters*2,1)
# # x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
# x_d_slow = BatchNormalization()(x_d_slow)
# x_d_slow = c1D(x_d_slow,filters,3)
# # x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
# x_d_slow = BatchNormalization()(x_d_slow)
# x_d_slow = c1D(x_d_slow,filters,1)
# x_d_slow = MaxPool1D(2)(x_d_slow)
# # x_d_slow = SpatialDropout1D(0.1)(x_d_slow)
# x_d_slow = BatchNormalization()(x_d_slow)
# x_d_fast = c1D(diff_fast,filters*2,1)
# # x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
# x_d_fast = BatchNormalization()(x_d_fast)
# x_d_fast = c1D(x_d_fast,filters,3)
# # x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
# x_d_fast = BatchNormalization()(x_d_fast)
# x_d_fast = c1D(x_d_fast,filters,1)
# # x_d_fast = SpatialDropout1D(0.1)(x_d_fast)
# x_d_fast = BatchNormalization()(x_d_fast)
# x = concatenate([x,x_d_slow,x_d_fast])
# x = block(x,filters*2)
# x = MaxPool1D(2)(x)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# x = block(x,filters*4)
# x = MaxPool1D(2)(x)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# x = block(x,filters*8)
# # x = SpatialDropout1D(0.1)(x)
# x = BatchNormalization()(x)
# return Model(inputs=[M,P],outputs=x)
# def build_DD_Net(C):
# M = Input(name='M', shape=(C.frame_l,C.feat_d))
# P = Input(name='P', shape=(C.frame_l,C.joint_n,C.joint_d))
# FM = build_FM(C.frame_l,C.joint_n,C.joint_d,C.feat_d,C.filters)
# x = FM([M,P])
# x = GlobalMaxPool1D()(x)
# x = d1D(x,128)
# # x = Dropout(0.1)(x)
# x = BatchNormalization()(x)
# x = d1D(x,128)
# # x = Dropout(0.1)(x)
# x = BatchNormalization()(x)
# x = Dense(C.clc_num, activation='softmax')(x)
# ######################Self-supervised part
# model = Model(inputs=[M,P],outputs=x)
# return model
DD_Net = build_DD_Net(C)
DD_Net.summary()
###Output
WARNING:tensorflow:From /home/am/.conda/envs/DD-Net/lib/python3.7/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.__init__ (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
M (InputLayer) [(None, 32, 91)] 0
__________________________________________________________________________________________________
P (InputLayer) [(None, 32, 14, 2)] 0
__________________________________________________________________________________________________
model (Model) (None, 4, 512) 1712512 M[0][0]
P[0][0]
__________________________________________________________________________________________________
global_max_pooling1d (GlobalMax (None, 512) 0 model[1][0]
__________________________________________________________________________________________________
dense (Dense) (None, 128) 65536 global_max_pooling1d[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 128) 512 dense[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 128) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 128) 0 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 128) 16384 dropout[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 128) 512 dense_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 128) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 128) 0 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 3) 387 dropout_1[0][0]
==================================================================================================
Total params: 1,795,843
Trainable params: 1,790,211
Non-trainable params: 5,632
__________________________________________________________________________________________________
###Markdown
Train and test on GT_split 1
###Code
Train_1 = pickle.load(open(C.data_dir+"GT_train_1.pkl", "rb"))
Test_1 = pickle.load(open(C.data_dir+"GT_test_1.pkl", "rb"))
le = preprocessing.LabelEncoder()
le.fit(Train_1['label'])
print(le.classes_)
X_0,X_1,Y = data_generator(Train_1,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test_1,C,le)
DD_Net.compile(loss="categorical_crossentropy",optimizer=Adam(lr),metrics=['accuracy'])
lrScheduler = ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
modelCpkt = ModelCheckpoint('./weights_split_1/epoch_{epoch:02d}-val_acc_{val_acc:.2f}', monitor='val_acc',
mode='max', save_best_only=True, save_weights_only=True,
load_weights_on_restart=True)
history = DD_Net.fit([X_0,X_1], Y,
batch_size=len(Y),
epochs=700,
verbose=True,
shuffle=True,
callbacks=[lrScheduler, modelCpkt],
validation_data=([X_test_0,X_test_1],Y_test),
class_weight=class_weight
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Train and test on GT_split 2
###Code
Train_2 = pickle.load(open(C.data_dir+"GT_train_2.pkl", "rb"))
Test_2 = pickle.load(open(C.data_dir+"GT_test_2.pkl", "rb"))
le = preprocessing.LabelEncoder()
le.fit(Train_2['label'])
print(le.classes_)
X_0,X_1,Y = data_generator(Train_2,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test_2,C,le)
# Re-initialize weights, since training and testing data switch
DD_Net = build_DD_Net(C)
DD_Net.compile(loss="categorical_crossentropy",optimizer=Adam(lr),metrics=['accuracy'])
lrScheduler = ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
modelCpkt = ModelCheckpoint('./weights_split_2/epoch_{epoch:02d}-val_acc_{val_acc:.2f}', monitor='val_acc',
mode='max', save_best_only=True, save_weights_only=True,
load_weights_on_restart=True)
history = DD_Net.fit([X_0,X_1], Y,
batch_size=len(Y),
epochs=700,
verbose=True,
shuffle=True,
callbacks=[lrScheduler, modelCpkt],
validation_data=([X_test_0,X_test_1],Y_test),
class_weight=class_weight
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Train and test on GT_split 3
###Code
Train_3 = pickle.load(open(C.data_dir+"GT_train_3.pkl", "rb"))
Test_3 = pickle.load(open(C.data_dir+"GT_test_3.pkl", "rb"))
le = preprocessing.LabelEncoder()
le.fit(Train_3['label'])
print(le.classes_)
X_0,X_1,Y = data_generator(Train_3,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test_3,C,le)
# Re-initialize weights, since training and testing data switch
DD_Net = build_DD_Net(C)
DD_Net.compile(loss="categorical_crossentropy",optimizer=Adam(lr),metrics=['accuracy'])
lrScheduler = ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
modelCpkt = ModelCheckpoint('./weights_split_3/epoch_{epoch:02d}-val_acc_{val_acc:.2f}', monitor='val_acc',
mode='max', save_best_only=True, save_weights_only=True,
load_weights_on_restart=True)
history = DD_Net.fit([X_0,X_1], Y,
batch_size=len(Y),
epochs=700,
verbose=True,
shuffle=True,
callbacks=[lrScheduler, modelCpkt],
validation_data=([X_test_0,X_test_1],Y_test),
class_weight=class_weight
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# DD_Net.save_weights('dd_net_weights/weights')
# Train_1 = pickle.load(open(C.data_dir+"GT_train_1.pkl", "rb"))
# Test_1 = pickle.load(open(C.data_dir+"GT_test_1.pkl", "rb"))
# Train_2 = pickle.load(open(C.data_dir+"GT_train_2.pkl", "rb"))
# Test_2 = pickle.load(open(C.data_dir+"GT_test_2.pkl", "rb"))
# Train_3 = pickle.load(open(C.data_dir+"GT_train_3.pkl", "rb"))
# Test_3 = pickle.load(open(C.data_dir+"GT_test_3.pkl", "rb"))
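# Merge all three GT train/test splits into one combined dataset for a final run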
Train_1['pose'].extend(Train_2['pose'])
Train_1['label'].extend(Train_2['label'])
Test_1['pose'].extend(Test_2['pose'])
Test_1['label'].extend(Test_2['label'])
Train_1['pose'].extend(Train_3['pose'])
Train_1['label'].extend(Train_3['label'])
Test_1['pose'].extend(Test_3['pose'])
Test_1['label'].extend(Test_3['label'])
# print(len(Train_1['pose']))
# print(len(Train_1['label']))
# print(len(Test_1['pose']))
# print(len(Test_1['label']))
le = preprocessing.LabelEncoder()
le.fit(Train_1['label'])
print(le.classes_)
X_0,X_1,Y = data_generator(Train_1,C,le)
X_test_0,X_test_1,Y_test = data_generator(Test_1,C,le)
DD_Net = build_DD_Net(C)
DD_Net.compile(loss="categorical_crossentropy",optimizer=Adam(lr),metrics=['accuracy'])
lrScheduler = ReduceLROnPlateau(monitor='loss', factor=0.5, patience=5, cooldown=5, min_lr=5e-6)
modelCpkt = ModelCheckpoint('./weights/epoch_{epoch:02d}-val_acc_{val_acc:.2f}', monitor='val_acc',
mode='max', save_best_only=True, save_weights_only=True,
load_weights_on_restart=True)
history = DD_Net.fit([X_0,X_1], Y,
batch_size=len(Y),
epochs=700,
verbose=True,
shuffle=True,
callbacks=[lrScheduler, modelCpkt],
validation_data=([X_test_0,X_test_1],Y_test),
class_weight=class_weight
)
# Plot training & validation accuracy values
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# DD_Net.save('dd-net.h5')
# le.inverse_transform(list(range(14)))
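# Optional evaluation sketch (commented out): report held-out accuracy and a
# confusion matrix from the final weights. Assumes X_test_0, X_test_1 and Y_test
# from the combined split above are still in scope and Y_test is one-hot encoded.
# from sklearn.metrics import accuracy_score, confusion_matrix
# y_pred = DD_Net.predict([X_test_0, X_test_1]).argmax(axis=-1)
# y_true = Y_test.argmax(axis=-1)
# print('Held-out accuracy:', accuracy_score(y_true, y_pred))
# print(confusion_matrix(y_true, y_pred))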
###Output
_____no_output_____ |
src/load_preprocessing/numpy.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Load NumPy data This tutorial provides an example of loading data from NumPy arrays into a `tf.data.Dataset`. This example loads the MNIST dataset from a `.npz` file. However, the source of the NumPy arrays is not important. Setup
###Code
import numpy as np
import tensorflow as tf
###Output
2021-07-29 10:21:36.951394: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-07-29 10:21:36.951417: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
###Markdown
Load from `.npz` file
###Code
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
###Output
_____no_output_____
###Markdown
Load NumPy arrays with `tf.data.Dataset` Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into `tf.data.Dataset.from_tensor_slices` to create a `tf.data.Dataset`.
###Code
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
###Output
2021-07-29 10:21:38.361679: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-07-29 10:21:38.361702: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-07-29 10:21:38.361719: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (masternode): /proc/driver/nvidia/version does not exist
2021-07-29 10:21:38.361967: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
###Markdown
Use the datasets Shuffle and batch the datasets
###Code
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Build and train a model
###Code
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 0.6944 - sparse_categorical_accuracy: 0.9584
|
ConsultAPIandLinearRegression.ipynb | ###Markdown
Test 1: Pulling information from the Yahoo Finance API (via RapidAPI) and building a simple linear-regression ML model to estimate stock prices at the next time step. Disclaimer: linear regression is not necessarily the right algorithm for forecasting future stock prices. It is used here only as an initial example; a more appropriate implementation for this problem will be presented later. --- Download libraries Import libraries
###Code
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from datetime import datetime
import requests
from matplotlib import rcParams
###Output
_____no_output_____
###Markdown
Initialize the global data
###Code
RAPID_KEY = '3381c28cc2mshabe64e59906964dp16b7c7jsn200269cdcffd'
RAPID_HOST = 'apidojo-yahoo-finance-v1.p.rapidapi.com'
symbol_string = ''
inputdata = {}
###Output
_____no_output_____
###Markdown
Fetch data from the API
###Code
def fetchStockData(symbol):
url = "https://apidojo-yahoo-finance-v1.p.rapidapi.com/market/get-charts"
querystring = {
"comparisons":"%5EGDAXI%2C%5EFCHI",
"region":"US",
"lang":"en",
"symbol":symbol,
"interval":"1d",
"range":"3mo"
}
headers = {
'x-rapidapi-host': "apidojo-yahoo-finance-v1.p.rapidapi.com",
'x-rapidapi-key': "3381c28cc2mshabe64e59906964dp16b7c7jsn200269cdcffd"
}
response = requests.request("GET", url, headers=headers, params=querystring)
if response.status_code == 200:
return response.json()
else:
return None
# testdata = fetchStockData('AAPL')
# print(testdata)
print(type(inputdata))
###Output
<class 'dict'>
###Markdown
Parse the data
###Code
#Parse and get timestamp
def parseTimestamp(inputdata):
timestamplist = []
timestamplist.extend(inputdata['chart']['result'][0]['timestamp'])
timestamplist.extend(inputdata['chart']['result'][0]['timestamp'])
    # Extended twice: once for the open series and once for the close series
calendartime = []
for ts in timestamplist:
dt = datetime.fromtimestamp(ts)
calendartime.append(dt.strftime("%m/%d/%Y"))
return calendartime
#FOR TEST
#print(parseTimestamp(testdata))
#Extract opening and closing.
def parseValues(inputdata):
valueList = []
valueList.extend(inputdata['chart']['result'][0]['indicators']['quote'][0]['open'])
valueList.extend(inputdata['chart']['result'][0]['indicators']['quote'][0]['close'])
return valueList
#FOR TEST
#print(parseValues(inputdata))
#Get the open and close events
def attachEvent(inputdata):
eventlist = []
for i in range(0, len(inputdata['chart']['result'][0]['timestamp'])):
eventlist.append('open')
for i in range(0, len(inputdata['chart']['result'][0]['timestamp'])):
eventlist.append('close')
return eventlist
#FOR TEST
#print(attachEvent(inputdata))
###Output
_____no_output_____
###Markdown
Load the data into a pandas DataFrame
###Code
# Validate the user input, fetch the data and plot the requested symbol.
def main():
symbol_string = ''
try:
while (len(symbol_string) <= 2):
symbol_string = input('Introduzca el simbolo de la accion: ')
retdata = fetchStockData(symbol=symbol_string)
        if retdata is not None:  # proceed only if the API call succeeded
inputdata["Timestamp"] = parseTimestamp(retdata)
inputdata["Values"] = parseValues(retdata)
inputdata["Events"] = attachEvent(retdata)
df = pd.DataFrame(inputdata)
#print(df)
sns.set(style='darkgrid')
rcParams['figure.figsize'] = 13,5
rcParams['figure.subplot.bottom'] = 0.2
ax = sns.lineplot(
x='Timestamp',
y='Values',
hue='Events',
dashes=False,
markers=True,
data=df,
sort=False
)
ax.set_title('Symbol: ' + symbol_string)
plt.xticks(
rotation=45,
horizontalalignment='right',
fontweight='light',
fontsize='xx-small'
)
plt.show()
    except Exception as e:
        print('Error ')
        print(e)
###Output
_____no_output_____
###Markdown
Run the application: >By running the following code cell you can see the stock prices of the company of your choice. For example, for Apple you can enter AAPL
###Code
main()
###Output
Introduzca el simbolo de la accion: TSLA
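###Markdown
A minimal sketch of the linear-regression step mentioned in the title, assuming `main()` has already populated `inputdata` (the function and variable names introduced below are illustrative, not part of the original app): regress the closing price on a time index and extrapolate one step ahead. As the disclaimer notes, this is only a toy example, not a sound forecasting method.
###Code
# Hypothetical sketch: fit a simple linear regression on the collected closing
# prices and extrapolate the next value.
import numpy as np
from sklearn.linear_model import LinearRegression
def predict_next_close(values):
    """Fit price ~ time index and extrapolate one step ahead."""
    y = np.asarray(values, dtype=float)
    t = np.arange(len(y)).reshape(-1, 1)   # time index as the single feature
    model = LinearRegression().fit(t, y)
    return float(model.predict([[len(y)]])[0])
# Example usage (after running main() above):
# closes = [v for v, e in zip(inputdata['Values'], inputdata['Events']) if e == 'close']
# print('Next-step close estimate:', predict_next_close(closes))
###Output
_____no_output_____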
|
L0/03 If Statements.ipynb | ###Markdown
If Statements By allowing you to respond selectively to different situations and conditions, if statements open up whole new possibilities for your programs. In this section, you will learn how to test for certain conditions, and then respond in appropriate ways to those conditions. Contents- [What is an *if* statement?](what) - [Example](example)- [Logical tests](logical_tests) - [Equality](equality) - [Inequality](inequality) - [Other inequalities](other_inequalities) - [Checking if an item is in a list](in_list) - [Exercises](exercises_logical_tests)- [The if-elif...else chain](if-elif-else) - [Simple if statements](simple_if) - [if-else statements](if-else) - [if-elif...else chains](if-elif-else_chains) - [Exercises](exercises_if-elif-else)- [More than one passing test](more_than_one)- [True and False values](true_false)- [Overall Challenges](overall_challenges) What is an *if* statement?===An *if* statement tests for a condition, and then responds to that condition. If the condition is true, then whatever action is listed next gets carried out. You can test for multiple conditions at the same time, and respond appropriately to each condition. Example---Here is an example that shows a number of the desserts I like. It lists those desserts, but lets you know which one is my favorite.
###Code
# A list of desserts I like.
desserts = ['ice cream', 'chocolate', 'apple crisp', 'cookies']
favorite_dessert = 'apple crisp'
# Print the desserts out, but let everyone know my favorite dessert.
for dessert in desserts:
if dessert == favorite_dessert:
# This dessert is my favorite, let's let everyone know!
print("%s is my favorite dessert!" % dessert.title())
else:
# I like these desserts, but they are not my favorite.
print("I like %s." % dessert)
###Output
I like ice cream.
I like chocolate.
Apple Crisp is my favorite dessert!
I like cookies.
###Markdown
What happens in this program?- The program starts out with a list of desserts, and one dessert is identified as a favorite.- The for loop runs through all the desserts.- Inside the for loop, each item in the list is tested. - If the current value of *dessert* is equal to the value of *favorite_dessert*, a message is printed that this is my favorite. - If the current value of *dessert* is not equal to the value of *favorite_dessert*, a message is printed that I just like the dessert. You can test as many conditions as you want in an if statement, as you will see in a little bit. [top](top) Logical Tests===Every if statement evaluates to *True* or *False*. *True* and *False* are Python keywords, which have special meanings attached to them. You can test for the following conditions in your if statements:- [equality](equality) (==)- [inequality](inequality) (!=)- [other inequalities](other_inequalities) - greater than (>) - greater than or equal to (>=) - less than (<) - less than or equal to (<=)- [You can test if an item is **in** a list.](in_list) WhitespaceRemember [learning about](04_lists_tuples.htmlpep8) PEP 8? There is a [section of PEP 8](http://www.python.org/dev/peps/pep-0008/other-recommendations) that tells us it's a good idea to put a single space on either side of all of these comparison operators. If you're not sure what this means, just follow the style of the examples you see below. Equality--- Two items are *equal* if they have the same value. You can test for equality between numbers, strings, and a number of other objects which you will learn about later. Some of these results may be surprising, so take a careful look at the examples below. In Python, as in many programming languages, two equals signs tests for equality. **Watch out!** Be careful of accidentally using one equals sign, which can really throw things off because that one equals sign actually sets your item to the value you are testing for! Examples
###Code
5 == 5
3 == 5
5 == 5.0
'eric' == 'eric'
'Eric' == 'eric'
'Eric'.lower() == 'eric'.lower()
'5' == 5
'5' == str(5)
###Output
_____no_output_____
###Markdown
[top](top) Inequality--- Two items are *unequal* if they do not have the same value. In Python, we test for inequality using an exclamation point and one equals sign (`!=`). Sometimes you want to test for equality and, if that fails, assume inequality. Sometimes it makes more sense to test for inequality directly.
###Code
3 != 5
5 != 5
'Eric' != 'eric'
###Output
_____no_output_____
###Markdown
[top](top) Other Inequalities--- greater than
###Code
5 > 3
###Output
_____no_output_____
###Markdown
greater than or equal to
###Code
5 >= 3
3 >= 3
###Output
_____no_output_____
###Markdown
less than
###Code
3 < 5
###Output
_____no_output_____
###Markdown
less than or equal to
###Code
3 <= 5
3 <= 3
###Output
_____no_output_____
###Markdown
[top](top) Checking if an item is **in** a list---You can check if an item is in a list using the **in** keyword.
###Code
vowels = ['a', 'e', 'i', 'o', 'u']
'a' in vowels
vowels = ['a', 'e', 'i', 'o', 'u']
'b' in vowels
###Output
_____no_output_____
###Markdown
Exercises--- True and False- Write a program that consists of at least ten lines, each of which has a logical statement on it. The output of your program should be 5 **True**s and 5 **False**s.- Note: You will probably need to write `print(5 > 3)`, not just `5 > 3`.
###Code
# Ex 5.1 : True and False
# put your code here
###Output
_____no_output_____
###Markdown
[top](top) The if-elif...else chain===You can test whatever series of conditions you want to, and you can test your conditions in any combination you want. Simple if statements---The simplest test has a single **if** statement, and a single statement to execute if the condition is **True**.
###Code
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
###Output
Wow, we have a lot of dogs here!
###Markdown
In this situation, nothing happens if the test does not pass.
###Code
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
###Output
_____no_output_____
###Markdown
Notice that there are no errors. The condition `len(dogs) > 3` evaluates to False, and the program moves on to any lines after the **if** block. if-else statements---Many times you will want to respond in two possible ways to a test. If the test evaluates to **True**, you will want to do one thing. If the test evaluates to **False**, you will want to do something else. The **if-else** structure lets you do that easily. Here's what it looks like:
###Code
dogs = ['willie', 'hootz', 'peso', 'juno']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Wow, we have a lot of dogs here!
###Markdown
Our results have not changed in this case, because when the test evaluates to **True** only the statements under the **if** statement are executed. The statements under **else** are only executed if the test fails:
###Code
dogs = ['willie', 'hootz']
if len(dogs) > 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Okay, this is a reasonable number of dogs.
###Markdown
The test evaluated to **False**, so only the statement under `else` is run. if-elif...else chains---Many times, you will want to test a series of conditions, rather than just an either-or situation. You can do this with a series of if-elif-else statements. There is no limit to how many conditions you can test. You always need one if statement to start the chain, and you can never have more than one else statement. But you can have as many elif statements as you want.
###Code
dogs = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Holy mackerel, we might as well start a dog hostel!
###Markdown
It is important to note that in situations like this, only the first test is evaluated. In an if-elif-else chain, once a test passes the rest of the conditions are ignored.
###Code
dogs = ['willie', 'hootz', 'peso', 'monty']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Wow, we have a lot of dogs here!
###Markdown
The first test failed, so Python evaluated the second test. That test passed, so the statement corresponding to `len(dogs) >= 3` is executed.
###Code
dogs = ['willie', 'hootz']
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Okay, this is a reasonable number of dogs.
###Markdown
In this situation, the first two tests fail, so the statement in the else clause is executed. Note that this statement would be executed even if there are no dogs at all:
###Code
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
else:
print("Okay, this is a reasonable number of dogs.")
###Output
Okay, this is a reasonable number of dogs.
###Markdown
Note that you don't have to take any action at all when you start a series of if statements. You could simply do nothing in the situation that there are no dogs by replacing the `else` clause with another `elif` clause:
###Code
###highlight=[8]
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
###Output
_____no_output_____
###Markdown
In this case, we only print a message if there is at least one dog present. Of course, you could add a new `else` clause to respond to the situation in which there are no dogs at all:
###Code
###highlight=[10,11]
dogs = []
if len(dogs) >= 5:
print("Holy mackerel, we might as well start a dog hostel!")
elif len(dogs) >= 3:
print("Wow, we have a lot of dogs here!")
elif len(dogs) >= 1:
print("Okay, this is a reasonable number of dogs.")
else:
print("I wish we had a dog here.")
###Output
I wish we had a dog here.
###Markdown
As you can see, the if-elif-else chain lets you respond in very specific ways to any given situation. Exercises--- Three is a Crowd- Make a list of names that includes at least four people.- Write an if test that prints a message about the room being crowded, if there are more than three people in your list.- Modify your list so that there are only two people in it. Use one of the methods for removing people from the list, don't just redefine the list.- Run your if test again. There should be no output this time, because there are fewer than three people in the list.- **Bonus:** Store your if test in a function called something like `crowd_test`. Three is a Crowd - Part 2- Save your program from *Three is a Crowd* under a new name.- Add an `else` statement to your if tests. If the `else` statement is run, have it print a message that the room is not very crowded. Six is a Mob- Save your program from *Three is a Crowd - Part 2* under a new name.- Add some names to your list, so that there are at least six people in the list.- Modify your tests so that - If there are more than 5 people, a message is printed about there being a mob in the room. - If there are 3-5 people, a message is printed about the room being crowded. - If there are 1 or 2 people, a message is printed about the room not being crowded. - If there are no people in the room, a message is printed about the room being empty.
###Code
# Ex 5.2 : Three is a Crowd
# put your code here
# Ex 5.3 : Three is a Crowd - Part 2
# put your code here
# Ex 5.4 : Six is a Mob
# put your code here
###Output
_____no_output_____
###Markdown
[top](top) More than one passing test===In all of the examples we have seen so far, only one test can pass. As soon as the first test passes, the rest of the tests are ignored. This is really good, because it allows our code to run more efficiently. Many times only one condition can be true, so testing every condition after one passes would be meaningless.There are situations in which you want to run a series of tests, where every single test runs. These are situations where any or all of the tests could pass, and you want to respond to each passing test. Consider the following example, where we want to greet each dog that is present:
###Code
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
if 'hootz' in dogs:
print("Hello, Hootz!")
if 'peso' in dogs:
print("Hello, Peso!")
if 'monty' in dogs:
print("Hello, Monty!")
###Output
Hello, Willie!
Hello, Hootz!
###Markdown
If we had done this using an if-elif-else chain, only the first dog that is present would be greeted:
###Code
###highlight=[6,7,8,9,10,11]
dogs = ['willie', 'hootz']
if 'willie' in dogs:
print("Hello, Willie!")
elif 'hootz' in dogs:
print("Hello, Hootz!")
elif 'peso' in dogs:
print("Hello, Peso!")
elif 'monty' in dogs:
print("Hello, Monty!")
###Output
Hello, Willie!
###Markdown
Of course, this could be written much more cleanly using lists and for loops. See if you can follow this code.
###Code
dogs_we_know = ['willie', 'hootz', 'peso', 'monty', 'juno', 'turkey']
dogs_present = ['willie', 'hootz']
# Go through all the dogs that are present, and greet the dogs we know.
for dog in dogs_present:
if dog in dogs_we_know:
print("Hello, %s!" % dog.title())
###Output
Hello, Willie!
Hello, Hootz!
###Markdown
This is the kind of code you should be aiming to write. It is fine to come up with code that is less efficient at first. When you notice yourself writing the same kind of code repeatedly in one program, look to see if you can use a loop or a function to make your code more efficient. [top](top) True and False values===Every value can be evaluated as True or False. The general rule is that any non-zero or non-empty value will evaluate to True. If you are ever unsure, you can open a Python terminal and write two lines to find out if the value you are considering is True or False. Take a look at the following examples, keep them in mind, and test any value you are curious about. I am using a slightly longer test just to make sure something gets printed each time.
###Code
if 0:
print("This evaluates to True.")
else:
print("This evaluates to False.")
if 1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Arbitrary non-zero numbers evaluate to True.
if 1253756:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Negative numbers are not zero, so they evaluate to True.
if -1:
print("This evaluates to True.")
else:
print("This evaluates to False.")
# An empty string evaluates to False.
if '':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if ' ':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# Any other string, including a space, evaluates to True.
if 'hello':
print("This evaluates to True.")
else:
print("This evaluates to False.")
# None is a special object in Python. It evaluates to False.
if None:
print("This evaluates to True.")
else:
print("This evaluates to False.")
###Output
This evaluates to False.
###Markdown
[top](top) Overall Challenges=== Alien Points- Make a list of ten aliens, each of which is one color: 'red', 'green', or 'blue'. - You can shorten this to 'r', 'g', and 'b' if you want, but if you choose this option you have to include a comment explaining what r, g, and b stand for.- Red aliens are worth 5 points, green aliens are worth 10 points, and blue aliens are worth 20 points.- Use a for loop to determine the number of points a player would earn for destroying all of the aliens in your list.- [hint](hint_alien_points)
###Code
# Overall Challenge: Alien Points
# Put your code here
###Output
_____no_output_____ |
notebooks/solutions/13-Ex-Create-module-Solution.ipynb | ###Markdown
Solution: create a module and reuse code from it (1 h) Extend the exercise from today by applying what you've just learned about packages and code reusability. Outline 1. Put the function into a separate .py file 2. Create yet another function that takes the name of the region as an input and returns SST values for the corresponding region 3. Use `import` to access these functions from another file or a notebook 4. Create the wind speed data 5. Create a dictionary of data, whose keys are names of regions and values are lists of heat fluxes data 6. Save the dictionary to a text file (bonus: to a json file), both keys and values > ***Since this is the solution, we are skipping the step of creating a separate script***
###Code
def calc_heat_flux(u_atm, t_sea, rho=1.2, c_p=1004.5, c_h=1.2e-3, u_sea=1, t_atm=17):
q = rho * c_p * c_h * (u_atm - u_sea) * (t_sea - t_atm)
return q
###Output
_____no_output_____
###Markdown
**2. Create yet another function that takes the name of the region as an input and returns SST values for the corresponding region** * This function can look something like the one below* Feel free to modify or extend it* You can replace `region_name` by `experiment_name` or whatever you prefer* For convenience, make sure the length of the returned list is the same
###Code
def create_sst(region_name):
"""
Create fake SST data (degC) for a given region
Inputs
------
    region_name: string, one of 'NS' (North Sea), 'WS' (White Sea) or 'BS' (Black Sea)
    Returns
    -------
    sst: list of integer SST values (degC) for the chosen region
"""
if region_name == 'NS':
# North Sea
sst = list(range(5, 15, 1))
elif region_name == 'WS':
# White Sea
sst = list(range(0, 10, 1))
elif region_name == 'BS':
# Black Sea
sst = list(range(15, 25, 1))
else:
raise ValueError('Input value of {} is not recognised'.format(region_name))
return sst
###Output
_____no_output_____
###Markdown
**4. Create the wind speed data**
###Code
wind_speed = list(range(0,20,2))
###Output
_____no_output_____
###Markdown
**5. Create a dictionary of data, whose keys are names of regions and values are lists of heat fluxes data** * Create a list of names of the regions/experiments* Create an empty dictionary, named `hf_dict` or whatever sounds better to you* Loop over the names, call the `create_sst()` function and assign it to a variable, e.g. `fake_sst`* Still inside the name-loop, write **another loop** to iterate over SST and wind values, just as you did in the previous exercise, and calculate the **heat flux**.* Assign the result to the corresponding key of `hf_dict`
###Code
regions = ['WS', 'BS']
hf_dict = dict()
for reg in regions:
fake_sst = create_sst(reg)
heat_flux = []
for u, t in zip(wind_speed, fake_sst):
q = calc_heat_flux(u, t)
heat_flux.append(q)
hf_dict[reg] = heat_flux
###Output
_____no_output_____
###Markdown
Print the result to test yourself.
###Code
hf_dict
###Output
_____no_output_____
###Markdown
**6. Save the dictionary to text file, both keys and values** * You can copy the code for writing data to a text file from the previous exercise* Modify it so that the output file would include `hf_dict`'s keys as row (column) names
###Code
with open('heat_flux_var_sst_bycol.txt', 'w') as f:
column_names = sorted(hf_dict.keys())
f.write(','.join(column_names)+'\n')
for tup in zip(*[hf_dict[i] for i in column_names]):
f.write(','.join([str(i) for i in tup])+'\n')
with open('heat_flux_var_sst_byrow.txt', 'w') as f:
for k, v in hf_dict.items():
line = k + ','
for i in v:
line += str(i) + ','
line = line[:-1]+ '\n'
f.write(line)
!more {f.name}
###Output
BS,2.8929599999999995,-1.4464799999999998,0.0,7.232399999999998,20.2507199999999
98,39.054959999999994,63.64511999999999,94.02119999999998,130.18319999999997,172
.13111999999998
WS,24.590159999999997,-23.143679999999996,-65.0916,-101.25359999999998,-131.6296
7999999998,-156.21983999999998,-175.02407999999997,-188.04239999999996,-195.2747
9999999997,-196.72127999999998
###Markdown
**Bonus**
###Code
import json
###Output
_____no_output_____
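###Markdown
A minimal sketch of the bonus step, assuming `hf_dict` from above is still in scope: dump the dictionary (both keys and values) to a JSON file. The file name is illustrative.
###Code
# Bonus sketch: write hf_dict (keys and values) to a JSON file
with open('heat_flux_var_sst.json', 'w') as f:
    json.dump(hf_dict, f, indent=2)
###Output
_____no_output_____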
###Markdown
Solution: create a module and reuse code from it (1 h) Extend the exercise from today by applying what you've just learned about packages and code reusability. Outline 1. Put your `calc_co2e` function into a separate .py file 2. Create yet another function that calculates the distance between two cities 3. Use `import` to access these functions from another file or a notebook **1. Copy your `calc_co2e` function to a new file, called mymod.py for "my module"** You can use the Jupyter notebook home page to create a New->Text File in the same directory as this notebook. The file can then be renamed from untitled.txt to mymod.py in the page's File menu.
###Code
def calc_co2e(dist,
returnf=False,
firstclass=False,
radforc=2.0,
):
"""
calculate equivalent carbon emissions from flights
Parameters
==========
dist - flight distance in km
Optional inputs
---------------
returnf - Return flight (default=False)
firstclass - First class flight (default=False)
radforc - radiative forcing factor (default=2.0)
Returns
=======
CO2 equivalent emissions in kg
Emission factors (kg CO2e/pkm)
https://flygrn.com/blog/carbon-emission-factors-used-by-flygrn
0.26744 < 700 km
0.15845 700 – 2500
0.15119 > 2500 km
"""
if dist < 700:
emm_factor = 0.26744
elif dist > 2500:
emm_factor = 0.15119
else:
emm_factor = 0.15845
co2e = emm_factor * dist
if returnf:
co2e = co2e * 2
if firstclass:
co2e = co2e * 2
co2e = co2e / 2.0 * radforc
return co2e
###Output
_____no_output_____
###Markdown
**2. Create another function to calculate the distance between two locations** * The function `get_latlon` is provided to obtain the latitude and longitude for a given location from the openstreetmap.org API.* Test this function for several locations
###Code
import requests
import urllib.parse
def get_latlon(location):
url = 'https://nominatim.openstreetmap.org/search/' + urllib.parse.quote(location) +'?format=json'
response = requests.get(url).json()
    if not response:
        # Fail with a clear error rather than an IndexError on the empty response below
        raise ValueError('Location not found: ' + location)
lat = float( response[0]["lat"] )
lon = float( response[0]["lon"] )
return (lat,lon)
# get_latlon(location)
###Output
_____no_output_____
###Markdown
* Write function `calc_dist` to calculate the distance between an origin and destination. The coodinates are obtained from `get_latlon()`.
###Code
from math import cos, sin, acos, pi
def calc_dist(origin, destination):
"""
Calculate distances for a given itenerary
Inputs
------
origin, destination - names of the cities
Returns
-------
distance in km
Uses Great circle approximation for spherical earth
dist = 6378.388 * acos(sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon2 - lon1))
where lat and lon are in radians (rad=deg/180*pi)
"""
(lat1,lon1) = get_latlon(origin)
(lat2,lon2) = get_latlon(destination)
lat1 = lat1/180*pi
lon1 = lon1/180*pi
lat2 = lat2/180*pi
lon2 = lon2/180*pi
dist = 6378.388 * acos(sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon2 - lon1))
return dist
###Output
_____no_output_____
###Markdown
Check that your function is working
###Code
calc_dist('London', 'Auckland')
###Output
_____no_output_____
###Markdown
**3. Copy the three functions `get_latlon`,`calc_dist` and `calc_co2e` to a file mymod.py*** Use import to access these functions from another file or a notebook* Call `calc_dist` followed by `calc_co2e` to calculate carbon emissions between two locations
###Code
import mymod
"""
Note: the import is only executed once; any later changes to
mymod.py will be ignored until the kernel is restarted
(or the module is reloaded, e.g. with importlib.reload)
"""
origin='London'
destination = 'Ibiza'
dist = mymod.calc_dist(origin,destination)
co2e = mymod.calc_co2e(dist)
print( f'Carbon emissions for flight {origin}->{destination}: {co2e:.0f}kg')
###Output
Carbon emissions for flight London->Ibiza: 222kg
|
dan_classifier.ipynb | ###Markdown
Deep Averaging Networks for text classification- pytorch, spacy, deep averaging networks- GloVe word embeddings, frozen and fine-tuned weights- S&P Key Developments. Terence Lim
###Code
# jupyter-notebook --NotebookApp.iopub_data_rate_limit=1.0e12
import numpy as np
import os
import time
import re
import csv, gzip, json
import pandas as pd
from pandas import DataFrame, Series
import matplotlib.pyplot as plt
from tqdm import tqdm
from collections import Counter
from nltk.tokenize import RegexpTokenizer
import torch
import torch.nn as nn
import random
from finds.database import MongoDB
from finds.unstructured import Unstructured
from finds.structured import PSTAT
from finds.learning import TextualData
from settings import settings
from settings import pickle_dump, pickle_load
mongodb = MongoDB(**settings['mongodb'])
keydev = Unstructured(mongodb, 'KeyDev')
imgdir = os.path.join(settings['images'], 'classify')
memdir = settings['memmap']
event_ = PSTAT.event_
role_ = PSTAT.role_
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
## Retrieve headline+situation text
events = [28, 16, 83, 41, 81, 23, 87, 45, 80, 97, 231, 46, 31, 77, 29,
232, 101, 42, 47, 86, 93, 3, 22, 102, 82]
if False:
lines = []
event_all = []
tokenizer = RegexpTokenizer(r"\b[^\d\W][^\d\W][^\d\W]+\b")
for event in events:
docs = keydev['events'].find({'keydeveventtypeid':{'$eq':event}}, {'_id':0})
doc = [tokenizer.tokenize((d['headline'] + " " + d['situation']).lower())
for d in docs]
lines.extend(doc)
event_all.extend([event] * len(doc))
with gzip.open(os.path.join(imgdir, 'lines.json.gz'), 'wt') as f:
json.dump(lines, f)
with gzip.open(os.path.join(imgdir,'event_all.json.gz'), 'wt') as f:
json.dump(event_all, f)
print(lines[1000000])
if False:
with gzip.open(os.path.join(imgdir, 'lines.json.gz'), 'rt') as f:
lines = json.load(f)
with gzip.open(os.path.join(imgdir,'event_all.json.gz'), 'rt') as f:
event_all = json.load(f)
###Output
_____no_output_____
###Markdown
Collect all text into data, and encode labels
###Code
## Encode class labels
from sklearn.preprocessing import LabelEncoder
event_encoder = LabelEncoder().fit(event_all) # .inverse_transform()
num_classes = len(np.unique(event_all))
y_all = event_encoder.transform(event_all)
Series(event_all).value_counts().rename(index=event_).rename('count').to_frame()
###Output
_____no_output_____
###Markdown
split into train and test indices- stratify by event label frequencies
###Code
from sklearn.model_selection import train_test_split
train_idx, test_idx = train_test_split(np.arange(len(y_all)), random_state=42,
stratify=y_all, test_size=0.2)
###Output
_____no_output_____
###Markdown
Load spacy and vocab
###Code
import spacy
lang = 'en_core_web_lg'
nlp = spacy.load(lang, disable=['parser', 'tagger', 'ner', 'lemmatizer'])
for w in ['yen', 'jpy', 'eur', 'dkk', 'cny', 'sfr']:
nlp.vocab[w].is_stop = True # Mark customized stop words
n_vocab, vocab_dim = nlp.vocab.vectors.shape
print('Language:', lang, ' vocab:', n_vocab, ' dim:', vocab_dim)
###Output
Language: en_core_web_lg vocab: 684830 dim: 300
###Markdown
Precompute word embeddings input
###Code
def form_input(line, nlp):
"""Return spacy average vector from valid words"""
tokens = [tok.vector for tok in nlp(" ".join(line))
if not(tok.is_stop or tok.is_punct or tok.is_oov or tok.is_space)]
return (np.array(tokens).mean(axis=0) if len(tokens) else
np.zeros(nlp.vocab.vectors.shape[1]))
args = {'dtype': 'float32'}
memdir = '/home/terence/Downloads/stocks2020/memmap/'
if False:
args.update({'mode': 'r+', 'shape': (len(lines), vocab_dim)})
X = np.memmap(os.path.join(memdir, "X.{}_{}".format(*args['shape'])),**args)
for i, line in tqdm(enumerate(lines)):
X[i] = form_input(line, nlp).astype(args['dtype'])
args.update({'shape': (1224251, vocab_dim), 'mode': 'r'})
X = np.memmap(os.path.join(memdir, "X.{}_{}".format(*args['shape'])), **args)
###Output
_____no_output_____
###Markdown
Feed Forward Network in Pytorch- FFNN with averaged bag of word embeddings
###Code
class FFNN(nn.Module):
"""Deep Averaging Network for classification"""
def __init__(self, vocab_dim, num_classes, hidden, dropout=0.3):
super().__init__()
V = nn.Linear(vocab_dim, hidden[0])
nn.init.xavier_uniform_(V.weight)
L = [V, nn.Dropout(dropout)]
for g, h in zip(hidden, hidden[1:] + [num_classes]):
W = nn.Linear(g, h)
nn.init.xavier_uniform_(W.weight)
L.extend([nn.ReLU(), W])
self.network = nn.Sequential(*L)
self.classifier = nn.LogSoftmax(dim=-1) # output is (N, C) logits
def forward(self, x):
"""Return tensor of log probabilities"""
return self.classifier(self.network(x))
def predict(self, x):
"""Return predicted int class of input tensor vector"""
return torch.argmax(self(x), dim=1).int().tolist()
def save(self, filename):
"""save model state to filename"""
return torch.save(self.state_dict(), filename)
def load(self, filename):
"""load model name from filename"""
self.load_state_dict(torch.load(filename, map_location='cpu'))
return self
###Output
_____no_output_____
###Markdown
Training Loops- Instantiate model, optimizer, scheduler, and loss function for the selected number of hidden layers- Loop over epochs and batches
###Code
accuracy = {} # to store computed metrics
max_layers, hidden = 2, 300 #3, 300
batch_sz, lr, num_lr, step_sz, eval_skip = 64, 0.01, 4, 10, 5 #3, 3, 3
num_epochs = step_sz * num_lr + 1
for layers in [max_layers]:
# Instantiate model, optimizer, scheduler, loss_function
model = FFNN(vocab_dim=vocab_dim, num_classes=num_classes,
hidden=[hidden]*layers).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, gamma=0.1,
step_size=step_sz)
loss_function = nn.NLLLoss()
accuracy[layers] = {}
# Loop over epochs and batches
for epoch in range(0, num_epochs):
tic = time.time()
idxs = [i for i in train_idx]
random.shuffle(idxs)
batches = [idxs[i:(i+batch_sz)] for i in range(0, len(idxs), batch_sz)]
total_loss = 0.0
model.train()
for i, batch in enumerate(batches): # loop over minibatches
x = torch.FloatTensor(X[batch]).to(device)
y = torch.tensor([y_all[idx] for idx in batch]).to(device)
model.zero_grad() # reset model gradient
log_probs = model(x) # run model
loss = loss_function(log_probs, y) # compute loss
total_loss += float(loss)
loss.backward() # loss step
optimizer.step() # optimizer step
#print(i, batches, i/len(batches), total_loss, end='\r')
scheduler.step() # scheduler step
model.eval()
model.save(os.path.join(imgdir, f"dan{layers}.pt"))
#print(f"Loss on epoch {epoch}: {total_loss:.1f}")
with torch.no_grad():
if epoch % eval_skip == 0:
gold = np.asarray([int(y) for y in y_all])
batches = [test_idx[i:(i+128)]
for i in range(0, len(test_idx), 128)]
test_gold, test_pred = [], []
for batch in tqdm(batches):
test_pred.extend(model.predict(
torch.FloatTensor(X[batch]).to(device)))
test_gold.extend(gold[batch])
test_correct = (np.asarray(test_pred) ==
np.asarray(test_gold)).sum()
batches = [train_idx[i:(i+128)]
for i in range(0, len(train_idx), 128)]
train_gold, train_pred = [], []
for batch in tqdm(batches):
train_pred.extend(model.predict(
torch.FloatTensor(X[batch]).to(device)))
train_gold.extend(gold[batch])
train_correct = (np.asarray(train_pred) ==
np.asarray(train_gold)).sum()
accuracy[layers][epoch] = {
'loss': total_loss,
'train': train_correct/len(train_idx),
'test': test_correct/len(test_idx)}
print(layers, epoch, int(time.time()-tic),
optimizer.param_groups[0]['lr'],
train_correct/len(train_idx), test_correct/len(test_idx))
###Output
100%|██████████| 1913/1913 [00:01<00:00, 1445.12it/s]
100%|██████████| 7652/7652 [00:05<00:00, 1454.71it/s]
###Markdown
Show accuracy
###Code
from sklearn import metrics
print(model) # show accuracy metrics for this layer
pd.concat([
Series({'Accuracy': metrics.accuracy_score(test_gold, test_pred),
'Precision': metrics.precision_score(test_gold, test_pred,
average='weighted'),
'Recall': metrics.recall_score(test_gold, test_pred,
average='weighted')},
name='Test Set').to_frame().T,
Series({'Accuracy': metrics.accuracy_score(train_gold, train_pred),
'Precision': metrics.precision_score(train_gold, train_pred,
average='weighted'),
'Recall': metrics.recall_score(train_gold, train_pred,
average='weighted')},
name='Train Set').to_frame().T], axis=0)
###Output
FFNN(
(network): Sequential(
(0): Linear(in_features=300, out_features=300, bias=True)
(1): Dropout(p=0.3, inplace=False)
(2): ReLU()
(3): Linear(in_features=300, out_features=300, bias=True)
(4): ReLU()
(5): Linear(in_features=300, out_features=25, bias=True)
)
(classifier): LogSoftmax(dim=-1)
)
###Markdown
Plot accuracy by epoch
###Code
accuracy
fig, ax = plt.subplots(num=1, clear=True, figsize=(10,6))
DataFrame.from_dict({err: {k: v[err] for k,v in accuracy[max_layers].items()}
for err in ['train', 'test']}).plot(ax=ax)
ax.set_title(f'Accuracy of DAN with frozen embedding weights')
ax.set_xlabel('Steps')
ax.set_ylabel('Accuracy')
ax.legend(['Train Set', 'Test Set'], loc='upper left')
plt.tight_layout()
plt.savefig(os.path.join(imgdir, f"frozen_accuracy.jpg"))
plt.show()
###Output
_____no_output_____
###Markdown
Confusion Matrix- Both test set and especially training set classification accuracy were poorer compared to statistical learning models, which also used stemmed and lemmatized word inputs encoded as one-hot features.- When specific words are required to determine classification, average word embeddings (where many words appear similar) may not work as well as distinct word indexes.
###Code
from sklearn.metrics import confusion_matrix
labels = [event_[e] for e in event_encoder.classes_]
cf_train = DataFrame(confusion_matrix(train_gold, train_pred),
index=pd.MultiIndex.from_product([['Actual'], labels]),
columns=pd.MultiIndex.from_product([['Predicted'], labels]))
cf_train
cf_test = DataFrame(confusion_matrix(test_gold, test_pred),
index=pd.MultiIndex.from_product([['Actual'], labels]),
columns=pd.MultiIndex.from_product([['Predicted'], labels]))
cf_test
import seaborn as sns
for num, (title, cf) in enumerate({'Training':cf_train,'Test':cf_test}.items()):
fig, ax = plt.subplots(num=1+num, clear=True, figsize=(10,6))
sns.heatmap(cf, ax=ax, annot= False, fmt='d', cmap='viridis', robust=True,
yticklabels=[f"{lab} {e}"
for lab, e in zip(labels, event_encoder.classes_)],
xticklabels=event_encoder.classes_)
ax.set_title(f'{title} Set Confusion Matrix')
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.yaxis.set_tick_params(labelsize=8, rotation=0)
ax.xaxis.set_tick_params(labelsize=8, rotation=0)
plt.subplots_adjust(left=0.35, bottom=0.25)
plt.savefig(os.path.join(imgdir, f"frozen_{title}.jpg"))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
DAN with GloVe embeddings and fine-tune weights- Construct vocab, and convert str to word indexes- TextualData class of convenience methods
###Code
textdata = TextualData()
if False:
vocab = textdata.counter(lines) # count words for vocab
textdata(vocab.most_common(20000), 0) # vocab is most common 20000
textdata.dump('textdata.json', imgdir)
x_all = textdata[lines] # convert str docs to word indexes
with gzip.open(os.path.join(imgdir, 'x_all.json.gz'), 'wt') as f:
json.dump(x_all, f)
else:
with gzip.open(os.path.join(imgdir, 'x_all.json.gz'), 'rt') as f:
x_all = json.load(f)
textdata.load('textdata.json', imgdir)
print('vocab size', textdata.n)
###Output
vocab size 20001
###Markdown
Relativize GloVe embeddings- Load GloVe embedding weights, and drop rows not in vocab. Download with `wget http://nlp.stanford.edu/data/wordvecs/glove.6B.zip` and `unzip glove.6B.zip` (the archive inflates glove.6B.50d.txt, glove.6B.100d.txt, glove.6B.200d.txt and glove.6B.300d.txt)
###Code
vocab_dim = 300
glovefile = f"/home/terence/Downloads/sshfs/glove/glove.6B.{vocab_dim}d.txt"
if False:
glove = textdata.relativize(glovefile)
pickle_dump(glove, f"glove{vocab_dim}rel.pkl", imgdir)
else:
glove = pickle_load(f"glove{vocab_dim}rel.pkl", imgdir)
print('glove dimensions', glove.shape)
###Output
glove dimensions (20001, 300)
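###Markdown
For reference, a minimal sketch of what the relativize step does conceptually, assuming a word-to-index mapping called `word2idx` (a hypothetical name; the actual mapping lives inside the TextualData helper): read the GloVe text file and keep only the rows for in-vocabulary words, leaving out-of-vocabulary rows as zero vectors.
###Code
# Hypothetical sketch (not the finds.learning implementation): build a
# (vocab_size, dim) embedding matrix restricted to the 20001-word vocab.
def relativize_sketch(glove_path, word2idx, vocab_size, dim=300):
    weights = np.zeros((vocab_size, dim), dtype='float32')   # OOV rows stay zero
    with open(glove_path, encoding='utf-8') as f:
        for line in f:
            parts = line.rstrip().split(' ')
            word, vec = parts[0], parts[1:]
            if word in word2idx:                              # keep only in-vocab rows
                weights[word2idx[word]] = np.asarray(vec, dtype='float32')
    return weights
###Output
_____no_output_____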
###Markdown
train_test split stratified by y_all
###Code
textdata.form_splits(y_all, random_state=42, test_size=0.2)
###Output
_____no_output_____
###Markdown
Define DAN with tunable word embeddings
###Code
class DAN(nn.Module):
"""Deep Averaging Network for classification"""
def __init__(self, vocab_dim, num_classes, hidden, embedding,
dropout=0.3, requires_grad=False):
super().__init__()
self.embedding = nn.EmbeddingBag.from_pretrained(embedding)
self.embedding.weight.requires_grad = requires_grad
V = nn.Linear(vocab_dim, hidden[0])
nn.init.xavier_uniform_(V.weight)
L = [V, nn.Dropout(dropout)]
for g, h in zip(hidden, hidden[1:] + [num_classes]):
W = nn.Linear(g, h)
nn.init.xavier_uniform_(W.weight)
L.extend([nn.ReLU(), W])
self.network = nn.Sequential(*L)
self.classifier = nn.LogSoftmax(dim=-1) # output is (N, C) logits
def tune(self, requires_grad=False):
self.embedding.weight.requires_grad = requires_grad
def forward(self, x):
"""Return tensor of log probabilities"""
return self.classifier(self.network(self.embedding(x)))
def predict(self, x):
"""Return predicted int class of input tensor vector"""
return torch.argmax(self(x), dim=1).int().tolist()
def save(self, filename):
"""save model state to filename"""
return torch.save(self.state_dict(), filename)
def load(self, filename):
"""load model name from filename"""
self.load_state_dict(torch.load(filename, map_location='cpu'))
return self
layers = 2
hidden = vocab_dim #100, 300
model = DAN(vocab_dim, num_classes, hidden=[hidden]*layers,
embedding=torch.FloatTensor(glove)).to(device)
print(model)
###Output
DAN(
(embedding): EmbeddingBag(20001, 300, mode=mean)
(network): Sequential(
(0): Linear(in_features=300, out_features=300, bias=True)
(1): Dropout(p=0.3, inplace=False)
(2): ReLU()
(3): Linear(in_features=300, out_features=300, bias=True)
(4): ReLU()
(5): Linear(in_features=300, out_features=25, bias=True)
)
(classifier): LogSoftmax(dim=-1)
)
###Markdown
Training loop
###Code
accuracy = dict()
for tune in [False, True]:
# define model, optimizer, scheduler, loss_function
model.tune(tune)
batch_sz, lr, num_lr, step_sz, eval_skip = 64, 0.01, 4, 5, 5 #3, 3, 3 #
num_epochs = step_sz * num_lr + 1
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, gamma=0.1,
step_size=step_sz)
loss_function = nn.NLLLoss()
accuracy[tune] = dict()
# Loop over epochs and batches
for epoch in range(0, num_epochs):
tic = time.time()
batches = textdata.form_batches(batch_sz)
total_loss = 0.0
model.train()
for batch in tqdm(batches): # train by batch
x = textdata.form_input([x_all[idx] for idx in batch]).to(device)
y = torch.tensor([y_all[idx] for idx in batch]).to(device)
model.zero_grad() # reset model gradient
log_probs = model(x) # run model
loss = loss_function(log_probs, y) # compute loss
total_loss += float(loss)
loss.backward() # loss step
optimizer.step() # optimizer step
scheduler.step() # scheduler step for learning rate
model.eval()
model.save(os.path.join(imgdir, f"danGloVe.pt"))
print(f"Loss on epoch {epoch} (tune={tune}): {total_loss:.1f}")
with torch.no_grad():
if epoch % eval_skip == 0:
test_pred = [model.predict(textdata.form_input(
[x_all[i]]).to(device))[0] for i in textdata.test_idx]
test_gold = [int(y_all[idx]) for idx in textdata.test_idx]
test_correct = (np.asarray(test_pred) ==
np.asarray(test_gold)).sum()
train_pred = [model.predict(textdata.form_input(
[x_all[i]]).to(device))[0] for i in textdata.train_idx]
train_gold = [int(y_all[idx]) for idx in textdata.train_idx]
train_correct = (np.asarray(train_pred) ==
np.asarray(train_gold)).sum()
accuracy[tune][epoch] = {
'loss': total_loss,
'train': train_correct/len(train_gold),
'test': test_correct/len(test_gold)}
print(tune, epoch, int(time.time()-tic),
optimizer.param_groups[0]['lr'],
train_correct/len(train_gold),
test_correct/len(test_gold))
###Output
100%|██████████| 15304/15304 [02:59<00:00, 85.47it/s]
###Markdown
Confusion matrix
###Code
from sklearn.metrics import confusion_matrix
labels = [event_[e] for e in event_encoder.classes_]
cf_train = DataFrame(confusion_matrix(train_gold, train_pred),
index=pd.MultiIndex.from_product([['Actual'], labels]),
columns=pd.MultiIndex.from_product([['Predicted'], labels]))
cf_test = DataFrame(confusion_matrix(test_gold, test_pred),
index=pd.MultiIndex.from_product([['Actual'], labels]),
columns=pd.MultiIndex.from_product([['Predicted'], labels]))
import seaborn as sns
for num, (title, cf) in enumerate({'Training':cf_train,'Test':cf_test}.items()):
fig, ax = plt.subplots(num=1+num, clear=True, figsize=(10,6))
sns.heatmap(cf, ax=ax, annot= False, fmt='d', cmap='viridis', robust=True,
yticklabels=[f"{label} {e}"
for label,e in zip(labels,event_encoder.classes_)],
xticklabels=event_encoder.classes_)
ax.set_title(f'DAN Tuned GloVe {title} Set Confusion Matrix')
ax.set_xlabel('Predicted')
ax.set_ylabel('Actual')
ax.yaxis.set_tick_params(labelsize=8, rotation=0)
ax.xaxis.set_tick_params(labelsize=8, rotation=0)
plt.subplots_adjust(left=0.35, bottom=0.25)
plt.savefig(os.path.join(imgdir, f"tuned_{title}.jpg"))
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Plot accuracy by epoch
###Code
print_skip = 1
fig, ax = plt.subplots(num=1, clear=True, figsize=(10,6))
DataFrame.from_dict({err: {(t*len(accuracy[t]) + k) * print_skip: v[err]
for t in [False, True]
for k,v in enumerate(accuracy[t].values())}
for err in ['train', 'test']}).plot(ax=ax)
ax.axvline((len(accuracy[False]) - 0.5) * print_skip, c='grey', alpha=1)
ax.set_title(f'Accuracy of DAN with fine-tuned GloVe weights')
ax.set_xlabel('Steps')
ax.set_ylabel('Accuracy')
ax.legend(['Train Set', 'Test Set','Fine-tune Weights'], loc='upper left')
plt.tight_layout()
plt.savefig(os.path.join(imgdir, f"tuned_accuracy.jpg"))
plt.show()
###Output
_____no_output_____
###Markdown
Spacy Explorations- hashkey = nlp.vocab.strings[v] : general hash table between vocab strings and ids- vec = nlp.vocab[v].vector : np array with word embedding vector from vocab string- row = nlp.vocab.vectors.key2row : dict from word's hashkey to int- emb = nn.Embedding.from_pretrained(torch.FloatTensor(nlp.vocab.vectors.data))- emb(row) == vec : equivalence of torch embedding and spacy vector- token = nlp('king man queen woman')[0]- token.lower : hashkey- token.lower_: str- token.lex_id : row of word vector- token.has_vector : has word vector representation
###Code
doc = nlp('king queen man woman a23kj4j')
line = [tok.lex_id for tok in doc
if not(tok.is_stop or tok.is_punct or tok.is_oov or tok.is_space)]
vec = (nlp.vocab['king'].vector
- nlp.vocab['man'].vector
+ nlp.vocab['woman'].vector)
print(vec.shape)
sim = nlp.vocab.vectors.most_similar(vec[None,:], n=10)
[nlp.vocab.strings[hashkey] for hashkey in sim[0][0]]
# Load pretrained embeddings
emb = nn.Embedding.from_pretrained(torch.FloatTensor(nlp.vocab.vectors.data))
# test for Spacy.nlp and torch.embeddings
test_vocab = ['king', 'man', 'woman', 'queen', 'e9s82j']
for w in test_vocab:
vocab_id = nlp.vocab.strings[w]
spacy_vec = nlp.vocab[w].vector
row = nlp.vocab.vectors.key2row.get(vocab_id, None) # dict
if row is None:
print('{} is oov'.format(w))
continue
vocab_row = torch.tensor(row, dtype=torch.long)
embed_vec = emb(vocab_row)
print(np.allclose(spacy_vec, embed_vec.detach().numpy()))
for key, row in nlp.vocab.vectors.key2row.items():
if row == 0:
print(nlp.vocab.strings[key])
###Output
True
True
True
True
e9s82j is oov
.
|
notebooks/05-Empaquetado_de_modelos.ipynb | ###Markdown
Model packaging> **Rodolfo Ferro** > Google Dev Expert in ML, 2020.>> _Social media:_> - GitHub - [RodolfoFerro](https://github.com/RodolfoFerro)> - Twitter - [@FerroRodolfo](https://twitter.com/FerroRodolfo)> - Instagram - [@rodo_ferro](https://instagram.com/rodo_ferro) Contents **Section VI** 1. **Code:** A new problem 2. **Code:** Saving models in TF **Section VI** The handwritten digits dataset. Let's start by importing TensorFlow.
###Code
import tensorflow as tf
print(tf.__version__)
###Output
_____no_output_____
###Markdown
The MNIST data is available directly in the `tf.keras` datasets API, and we can import it the same way we did with the fashion dataset. Calling `load_data` on this object will give us two sets with the training and test values for the images and their labels.
###Code
mnist = tf.keras.datasets.mnist
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
###Output
_____no_output_____
###Markdown
What do these values look like? Let's print a training image and a training label to see.
###Code
import numpy as np
import matplotlib.pyplot as plt
np.set_printoptions(linewidth=200)
# Set index of image to be seen
img_index = 0
# Plot image
plt.imshow(training_images[img_index], cmap='gray')
plt.axis(False)
print("Label:", training_labels[img_index])
print("Matrix:\n", training_images[img_index])
###Output
_____no_output_____
###Markdown
Data preparation. You will notice that all the values are between 0 and 255. If we are training a neural network, for several reasons it is easier if we transform the values so they all lie between 0 and 1. This process is called **normalization**. In addition, as part of this step we will expand the dimensions (add a channel axis) so the data can be fed to the network.
###Code
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images/255.0
###Output
_____no_output_____
###Markdown
Creación del modeloPodemos hacer uso del potencial de una red neuronal convolucional.
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(2, 2),
# Añadamos algunas capas convolucionales extra
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
optimizer=tf.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']
)
###Output
_____no_output_____
###Markdown
Entrenamiento del modelo
###Code
model.fit(training_images, training_labels, epochs=3)
###Output
_____no_output_____
###Markdown
Evaluación del modelo
###Code
model.evaluate(test_images, test_labels)
###Output
_____no_output_____
###Markdown
Predicción
###Code
test_index = 30
plt.imshow(np.squeeze(test_images[test_index]), cmap='gray')
plt.axis(False)
print("Label:", test_labels[test_index])
prediction = model.predict(np.expand_dims(test_images[test_index], axis=0))
print("Prediction:", np.argmax(prediction))
###Output
_____no_output_____
###Markdown
Guardado del modelo entrenadoLo que haremos será guardar los pesos y la arquitectura del modelo en 2 archivos distintos.
###Code
# Serialize model to JSON:
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
# Serialize weights to HDF5 (h5py needed):
model.save_weights("model.h5")
print("Model saved to disk.")
###Output
_____no_output_____
###Markdown
Los mismos archivos pueden ser cargados utilizando las funciones correspondientes:
###Code
# Load json and create model:
model_from_json = tf.keras.models.model_from_json
json_file = open('model.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# Load weights into loaded model:
loaded_model.load_weights("model.h5")
print("Model loaded from disk.")
###Output
_____no_output_____
###Markdown
Ejemplo de predicción con el modelo cargado:
###Code
test_index = 30
plt.imshow(np.squeeze(test_images[test_index]), cmap='gray')
plt.axis(False)
print("Label:", test_labels[test_index])
prediction = loaded_model.predict(np.expand_dims(test_images[test_index], axis=0))
print("Prediction:", np.argmax(prediction))
###Output
_____no_output_____ |
JW_iris_dataset_lab_regression.ipynb | ###Markdown
Part 2 Now it's time to do it yourself. There are two problems in this lab, one is for linear regression and the other is for perceptron. **For the first problem you need to download advertising.csv from github.** Data loading is done for you (**you still need to assign columns to X and y and do train/test split**), and you need to implement everything else yourself. For the problem, we are using the advertising dataset, which use advertising information on TV, Radio and Newspaper to predict sales number.
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
df = pd.read_csv('./advertising.csv')
df.head()
###Output
_____no_output_____
###Markdown
Implement the rest yourself, don't forget you can use seaborn to visualize the dataset.
###Code
# your code for data generation, what is the targer for this dataset?
X = df[['TV', 'Radio', 'Newspaper']]
y = df['Sales']#should be the target column
# Don't forget to do train test split!
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.4)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
#Load the model from sklearn and use it for prediction
predictions = regressor.predict(X_test)
print(predictions)
plt.scatter(y_test,predictions)
###Output
[12.25333748 21.05042599 20.9811572 7.39878055 15.82946484 15.07650349
8.35008291 9.47559409 15.49240765 24.59670906 19.13628269 8.33038136
17.96740012 15.51351507 10.04699329 10.58725207 12.62737815 23.8934928
9.41216829 18.02001642 23.30465192 15.7470194 12.39208618 7.37572797
10.54033239 13.41032189 12.74943451 21.83962076 13.2443006 13.77964906
8.09907336 12.52474104 13.30231377 16.36742631 19.40801307 12.52049741
19.23629923 11.28198135 11.87978656 9.48878414 10.27096795 17.74814604
15.81608501 14.75232372 11.60790354 9.01870825 16.85494497 18.08321957
6.38818983 7.47014766 10.04383739 17.15430459 23.46201509 5.74619529
15.00181641 9.96638268 19.93698689 22.83889007 8.21369265 20.89584576
16.97129649 14.06277585 20.00887346 10.80062287 16.74671979 16.51626409
10.13527109 18.79330122 10.00791153 18.69938822 23.15857862 20.46250665
20.59376573 15.22066601 20.95382913 11.01223169 9.07418708 12.9919148
9.56390912 22.39567696]
###Markdown
Now let's try to apply MLP on the famous iris datasetIris dataset contains 3 classes of 50 instances each, where each class refers to a type of iris plant. It's a good toy dataset for classification tasks
###Code
#First download the dataset from sklearn, and then loaded into panda dataframe for easier visualization and processing.
#!wget -L https://raw.githubusercontent.com/scikit-learn/scikit-learn/main/sklearn/datasets/data/iris.csv
import numpy as np
import pandas as pd
from sklearn.datasets import load_iris
iris = load_iris()
data_iris = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
#Now print the columns
data_iris.info()
data_iris.head()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 150 entries, 0 to 149
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 sepal length (cm) 150 non-null float64
1 sepal width (cm) 150 non-null float64
2 petal length (cm) 150 non-null float64
3 petal width (cm) 150 non-null float64
4 target 150 non-null float64
dtypes: float64(5)
memory usage: 6.0 KB
###Markdown
Now use try to creat your own train and test set from the loaded panda dataframe, remember to first create x and Y by choosing corresponding colomns. Here y should be the target colummn. For your convinence, this time we will not implement Perceptron from scratch. Again we use the model from sklearn.
###Code
from sklearn.linear_model import Perceptron
model = Perceptron()#use the perceptron model provided by sklearn
#complete the rest of the code to finish the training
#The format of using the perceptron model is almost same as regression model
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
###Output
_____no_output_____
###Markdown
Finally, use sklearn's accuraccy score function to evalutate the classification accuracy of the trained model on test set
###Code
# your code for data generation.
X = data_iris[['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)', 'petal width (cm)']]
y = data_iris['target']
# Don't forget to do train test split!
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X_train, X_test, y_train, y_test = train_test_split(X,y, test_size=0.2, random_state = 101)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
#Load the model from sklearn and use it for prediction
predictions = regressor.predict(X_test)
print(predictions)
plt.scatter(y_test,predictions)
from sklearn.metrics import accuracy_score
print(accuracy_score(y_test, y_pred)) #change y_true and y_pred to your own name if necessery
###Output
0.8333333333333334
|
home/cnfunds/cnfund007.ipynb | ###Markdown
A股(至少是指数)买跌卖涨
###Code
import pandas as pd
import matplotlib
from matplotlib import font_manager
import matplotlib.pyplot as plt
from datetime import datetime
import json
import numpy as np
import cnfundutils
import libtrdb2
import trading2_pb2
import tradingdb2_pb2
import tradingdb2_pb2_grpc
import plotly.express as px
isStaticImg = True
cnfundutils.init2()
trdb2cfg = libtrdb2.loadConfig('./trdb2.yaml')
FUNDRESULTS_PATH = './calcfundfull.csv'
dfFundResults = cnfundutils.loadFundResult(FUNDRESULTS_PATH)
FUNDS_PATH = './funds.xlsx'
dfFunds = cnfundutils.loadFundBasicInfo(FUNDS_PATH)
paramsaip = trading2_pb2.AIPParams(
money=10000,
type=trading2_pb2.AIPTT_MONTHDAY,
day=1,
)
lstdf = []
assets = ['cnfunds.510310']
buy0 = trading2_pb2.CtrlCondition(
name='indicatorsv',
vals=[-0.015],
operators=['<='],
strVals=['roc.1'],
)
buy1 = trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['up'],
strVals=['ema.10'],
)
sell0 = trading2_pb2.CtrlCondition(
name='indicatorsv',
vals=[-0.02],
operators=['<='],
strVals=['roc.1'],
)
sell1 = trading2_pb2.CtrlCondition(
name='indicatorsp',
operators=['down'],
strVals=['ema.30'],
)
paramsbuy = trading2_pb2.BuyParams(
perHandMoney=1,
)
paramssell = trading2_pb2.SellParams(
perVolume=1,
)
paramsinit = trading2_pb2.InitParams(
money=10000,
)
df0 = libtrdb2.simTradingEx2(trdb2cfg, assets, 0, -1, [buy0, buy1], [sell0, sell1], paramsbuy, paramssell, paramsinit, None)
lstdf.append({'title': '1.5:2', 'df': df0})
# baseline
buy0 = trading2_pb2.CtrlCondition(
indicator='buyandhold',
)
paramsbuy = trading2_pb2.BuyParams(
perHandMoney=1,
)
paramsinit = trading2_pb2.InitParams(
money=10000,
)
df0 = libtrdb2.simTradingEx(trdb2cfg, assets, 0, -1, buy0, None, paramsbuy, None, paramsinit, None)
lstdf.append({'title': 'baseline', 'df': df0})
libtrdb2.showSimTradingEx('跌幅超过1.5买入涨幅超过3卖掉', lstdf, False)
###Output
_____no_output_____ |
.ipynb_checkpoints/Elexon Data Download-checkpoint.ipynb | ###Markdown
Elexon Data Download Libraries and Setup
###Code
######################### Libraries ###########################
from datetime import date, timedelta, datetime
import os
import pandas as pd
import numpy as np
from tqdm import tqdm_notebook
import matplotlib.pyplot as plt
%matplotlib inline
from ElexonAPI.elexonpy import API
######################### Directories ###########################
wd = os.getcwd()
data_path = "\\data\\" #Input data path
# API parameters
APIKEY = ''#Enter APIkey
begin = datetime(2016,1,1)
end = datetime(2020,4,6)
epy = API(APIKEY)
###Output
_____no_output_____
###Markdown
Actual Generation by Fuel Type
###Code
import fnmatch
## Extract historical generation data
months = int((end - begin)/pd.Timedelta(1, 'M'))
pbar = tqdm_notebook(total = months, initial = 0)
gen_data = pd.DataFrame()
for i in range(months):
start_date = begin + pd.DateOffset(months = i)
end_date = start_date + pd.DateOffset(months = 1) - pd.DateOffset(days = 1)
temp = epy.get_generation__by_fuel(start_date,end_date)
gen_data = pd.concat([gen_data, temp], sort = False)
pbar.update(1)
gen_data['INT'] = gen_data[fnmatch.filter(gen_data.columns, 'INT*')].sum(axis = 1)
gen_data['HYDRO'] = gen_data[['NPSHYD', 'PS']].sum(axis = 1)
gen_data['TOT'] = gen_data[['CCGT','OIL','COAL','NUCLEAR','WIND','HYDRO','BIOMASS','INT', 'SOLAR']].sum(axis = 1)
for year in gen_data.index.year.unique():
gen_data[gen_data.index.year == year].to_pickle(wd + data_path + 'gen_data_{}.pkl'.format(year))
del temp, gen_data
fig, axes = plt.subplots(figsize=(16,8),nrows = 1, ncols = 1)
fig.tight_layout(pad=0, w_pad=5, h_pad=5)
year = 2018
first = pd.datetime(year,4,17)
gen_data = pd.read_pickle(wd + data_path + 'gen_data_{}.pkl'.format(year))
gen_data[['NUCLEAR','INT','OTHER','HYDRO','BIOMASS','COAL', 'WIND','CCGT','OCGT']]\
[first:(first+pd.DateOffset(days = 1))].plot.area(linewidth = 0, ax = axes);
del gen_data
###Output
_____no_output_____
###Markdown
Actual & Forecast Renewable Generation Actual & Forecast Load
###Code
## Extract historical load data
days = int((end - begin)/pd.Timedelta(1, 'D'))
pbar = tqdm_notebook(total = days, initial = 0)
act_dem_data = pd.DataFrame()
fore_dem_data = pd.DataFrame()
for i in range(days):
start_date = begin + pd.DateOffset(days = i)
temp1 = epy.get_actual_demand(start_date,'*')
act_dem_data = pd.concat([act_dem_data, temp1], sort = False)
try:
temp2 = epy.get_dayahead_demand(start_date,'*')
except:
temp2['Forecast'] = np.nan
temp2.index = temp2.index + pd.DateOffset(days = 1)
fore_dem_data = pd.concat([fore_dem_data, temp2], sort = False)
pbar.update(1)
dem_data = pd.merge(act_dem_data,fore_dem_data[['Forecast']], left_index= True, right_index= True, how = 'outer')
dem_data['Error'] = dem_data['Actual'] - dem_data['Forecast']
dem_data['Percentage Error'] = dem_data['Error']/dem_data['Actual']
for year in dem_data.index.year.unique():
dem_data[dem_data.index.year == year].to_pickle(wd + data_path + 'dem_data_{}.pkl'.format(year))
del temp1,temp2, fore_dem_data, act_dem_data
fig, axes = plt.subplots(figsize=(12,4),nrows = 1, ncols = 2)
fig.tight_layout(pad=0, w_pad=5, h_pad=5)
year = 2018
first = pd.datetime(year,4,17)
dem_data = pd.read_pickle(wd + data_path + 'dem_data_{}.pkl'.format(year))
dem_data[['Actual','Forecast']][first:(first+pd.DateOffset(days = 1))].plot(ax = axes[0]);
dem_data['Percentage Error'][first:(first+pd.DateOffset(days = 1))].plot(ax = axes[1]);
###Output
_____no_output_____
###Markdown
BM System Prices
###Code
## Extract historical generation data
months = int((end - begin)/pd.Timedelta(1, 'M'))
pbar = tqdm_notebook(total = months, initial = 0)
sys_price_data = pd.DataFrame()
for i in range(months):
start_date = begin + pd.DateOffset(months = i)
end_date = start_date + pd.DateOffset(months = 1) - pd.DateOffset(days = 1)
temp = epy.get_system_prices(start_date,end_date)
sys_price_data = pd.concat([sys_price_data, temp], sort = False)
pbar.update(1)
for year in sys_price_data.index.year.unique():
sys_price_data[sys_price_data.index.year == year].to_pickle(wd + data_path + 'sys_price_data_{}.pkl'.format(year))
fig, axes = plt.subplots(figsize=(12,4),nrows = 1, ncols = 2)
fig.tight_layout(pad=0, w_pad=5, h_pad=5)
year = 2018
first = pd.datetime(year,4,17)
sys_price_data = pd.read_pickle(wd + data_path + 'sys_price_data_{}.pkl'.format(year))
sys_price_data['SSP'][first:(first+pd.DateOffset(days = 1))].plot(ax = axes[0]);
sys_price_data['NIV'][first:(first+pd.DateOffset(days = 1))].plot(ax = axes[1]);
###Output
_____no_output_____
###Markdown
BM Bid-Offer Curves
###Code
# Extract historical generation data
begin = pd.datetime(2018,1,1)
end = pd.datetime(2018,12,31)
periods = int((end - begin)/pd.Timedelta(30, 'm'))
pbar = tqdm_notebook(total = periods, initial = 0)
bo_data = pd.DataFrame()
for i in range(periods):
start_date = begin + pd.DateOffset(minutes = 30*i)
period = int(start_date.hour*2 + (start_date.minute/30)) + 1
temp = epy.get_bo_stack(start_date,period)
bo_data = pd.concat([bo_data, temp], sort = False)
year = start_date.year
bo_data.to_pickle(wd + data_path + 'bo_data_{}.pkl'.format(year))
pbar.update(1)
year = 2018
bo_data = pd.read_pickle(wd + data_path + 'bo_data_{}.pkl'.format(year))
speak = 16*2
epeak = 19*2
fig, ax = plt.subplots(figsize = (8,4))
bo_data[(bo_data['Settlement Period'] < speak)|(bo_data['Settlement Period'] > epeak)]\
.plot(ax = ax, x = 'Arb Vol', y = 'BO Price',kind = 'scatter', color = 'b', s= 2);
bo_data[(bo_data['Settlement Period'] >= speak)&(bo_data['Settlement Period'] <= epeak)]\
.plot(ax = ax, x = 'Arb Vol', y = 'BO Price',kind = 'scatter', color = 'r', s = 2);
fig, ax = plt.subplots(nrows = 1, ncols = 1, figsize=(16, 4))
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
x = 49
for i in range(1,x,8):
offer_stack = bo_data[['Arb Vol', 'BO Price']][(bo_data['Settlement Period'] == i)& (bo_data['Arb Vol'] > 0)]
offer_stack = offer_stack.sort_values(by=['BO Price'])
offer_stack['Cum Volume'] = offer_stack['Arb Vol'].cumsum()
bid_stack = bo_data[['Arb Vol', 'BO Price']][(bo_data['Settlement Period'] == i)& (bo_data['Arb Vol'] < 0)]
bid_stack = bid_stack.sort_values(by=['BO Price'], ascending = False)
bid_stack['Cum Volume'] = bid_stack['Arb Vol'].cumsum()
idx = int(i/5)
offer_stack.plot(ax = ax, x = 'Cum Volume', y = 'BO Price', color = colors[idx], label = '{}'.format(i))
bid_stack.plot(ax = ax, x = 'Cum Volume', y = 'BO Price', color = colors[idx], legend = False)
###Output
_____no_output_____
###Markdown
APX Market Prices (Short-Term)
###Code
## Extract historical generation data
days = int((end - begin)/pd.Timedelta(1, 'D'))
pbar = tqdm_notebook(total = days, initial = 0)
market_price_data = pd.DataFrame()
for i in range(days):
start_date = begin + pd.DateOffset(days = i)
end_date = start_date + pd.DateOffset(days = 1) - pd.DateOffset(minutes = 30)
temp = epy.get_market_prices(start_date,end_date)
market_price_data = pd.concat([market_price_data, temp], sort = False)
pbar.update(1)
for year in market_price_data.index.year.unique():
market_price_data[market_price_data.index.year == year]\
.to_pickle(wd + data_path + 'market_price_data_{}.pkl'.format(year))
fig, axes = plt.subplots(figsize=(12,4),nrows = 1, ncols = 2)
fig.tight_layout(pad=0, w_pad=5, h_pad=5)
year = 2018
first = pd.datetime(year,4,17)
market_price_data = pd.read_pickle(wd + data_path + 'market_price_data_{}.pkl'.format(year))
market_price_data['Price'][first:(first+pd.DateOffset(days = 7))].plot(ax = axes[0]);
market_price_data['Volume'][first:(first+pd.DateOffset(days = 7))].plot(ax = axes[1]);
###Output
_____no_output_____
###Markdown
UK-Wide Temperature
###Code
## Extract historical generation data
months = int((end - begin)/pd.Timedelta(1, 'M'))
pbar = tqdm_notebook(total = months, initial = 0)
temp_data = pd.DataFrame()
for i in range(months):
start_date = begin + pd.DateOffset(months = i)
end_date = start_date + pd.DateOffset(months = 1) - pd.DateOffset(days = 1)
temp = epy.get_temperature(start_date,end_date)
temp_data = pd.concat([temp_data, temp], sort = False)
pbar.update(1)
for year in temp_data.index.year.unique():
temp_data.to_pickle(wd + data_path + 'temp_data_{}.pkl'.format(year))
year = 2018
first = pd.datetime(year,4,17)
temp_data = pd.read_pickle(wd + data_path + 'temp_data_{}.pkl'.format(year))
cols = ['Temp', 'Temp_Norm','Temp_Low', 'Temp_High']
temp_data[cols][first:(first+pd.DateOffset(months = 12))].plot();
###Output
_____no_output_____
###Markdown
Installed Generation Capacity
###Code
## Extract historical generation data
years = int((end - begin)/pd.Timedelta(1, 'Y'))
pbar = tqdm_notebook(total = years, initial = 0)
gen_cap_data = pd.DataFrame()
for i in range(years):
year = (begin + pd.DateOffset(years = i)).year
temp = epy.get_installed_cap(year)
gen_cap_data = pd.concat([gen_cap_data, temp], sort = False)
pbar.update(1)
gen_cap_data.to_pickle(wd + data_path + 'gen_cap_data.pkl')
del gen_cap_data, temp
gen_cap_data = pd.read_pickle(wd + data_path + 'gen_cap_data.pkl')
fig,ax = plt.subplots(figsize = (16,6))
gen_cap_data.sort_values(by = gen_cap_data.index[0], axis = 1).plot(ax=ax,kind = 'bar');
###Output
_____no_output_____ |
Election_Project.ipynb | ###Markdown
Importing Dataset and Modules
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Previewing the dataset in a pandas dataframe.
###Code
dataset = pd.read_csv('LS_2.0.csv')
dataset
dataset.shape
###Output
_____no_output_____
###Markdown
Our dataset has 2264 rows and 14 columns Analysing the data The dataset has the following columns.
###Code
dataset.columns
###Output
_____no_output_____
###Markdown
The null values depict the missing values in case of NOTA candidates.
###Code
dataset.isnull().sum(axis = 0)
###Output
_____no_output_____
###Markdown
FIlling the unavailable values with 0 in Assets and Liabilities
###Code
dataset['ASSETS'] = dataset['ASSETS'].fillna(0)
dataset['ASSETS']
dataset['LIABILITIES'] = dataset['LIABILITIES'].fillna(0)
dataset['LIABILITIES']
###Output
_____no_output_____
###Markdown
Information about rows and columns
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2263 entries, 0 to 2262
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 STATE 2263 non-null object
1 CONSTITUENCY 2263 non-null object
2 NAME 2263 non-null object
3 WINNER 2263 non-null int64
4 PARTY 2263 non-null object
5 GENDER 2018 non-null object
6 CRIMINAL
CASES 2018 non-null object
7 AGE 2018 non-null float64
8 CATEGORY 2018 non-null object
9 EDUCATION 2018 non-null object
10 ASSETS 2263 non-null float64
11 LIABILITIES 1995 non-null float64
12 TOTAL
VOTES 2263 non-null int64
13 TOTAL ELECTORS 2263 non-null int64
dtypes: float64(3), int64(3), object(8)
memory usage: 247.6+ KB
###Markdown
Data Insights Total Constituencies
###Code
list_of_constituencies = dataset['CONSTITUENCY'].unique()
len(list_of_constituencies)
###Output
_____no_output_____
###Markdown
Constituencies per State
###Code
constituencies_per_state = dataset.groupby('STATE')['CONSTITUENCY'].nunique().reset_index().sort_values('CONSTITUENCY',ascending = False)
constituencies_per_state
ax = constituencies_per_state[['STATE','CONSTITUENCY']].plot(kind='bar', title ="State vs Constituency", figsize=(15, 10), legend=True, fontsize=12).set_xticklabels(constituencies_per_state['STATE'])
###Output
_____no_output_____ |
S7-Assignment/EVA_S7.ipynb | ###Markdown
Importing Libraries
###Code
import torch
from torchvision import datasets
import torchvision.transforms as transforms
%matplotlib inline
import random # for random image index
import torch.nn as nn # for network
from tqdm import tqdm # for beautiful model training updates
from model import Network
###Output
_____no_output_____
###Markdown
Seed and Cuda
###Code
# check for cuda
cuda = torch.cuda.is_available()
print (f' Cuda Status : {cuda}')
# setting seed
SEED = 42 # arbit seed, why 42 - because in hitch hikers guide to galaxy it is answer to everything
# torch.cuda.seed(SEED)
torch.cuda.manual_seed_all(SEED) if cuda else torch.manual_seed(SEED)
###Output
Cuda Status : True
###Markdown
1. Loading Data
###Code
## downloading training data, using this to calculate mean and standard deviation
train_mean_std_dev = datasets.CIFAR10(
root = './',# directory where data needs to be stored
train = True, # get the training portion of the dataset
download = True, # downloads
transform = transforms.ToTensor()# converts to tesnor
)
train_mean_std_dev.data.shape ## looking at the shape of the data
# Getting Mean and Standard Deviation of CIFAR 10 dataset
def get_mean_std_dev(dataset):
'''
reference : https://stackoverflow.com/questions/66678052/how-to-calculate-the-mean-and-the-std-of-cifar10-data
'''
data = dataset.data / 255 # data is numpy array
mean = data.mean(axis = (0,1,2))
std = data.std(axis = (0,1,2))
# print(f"Mean : {mean} STD: {std}") #Mean : [0.491 0.482 0.446] STD: [0.247 0.243 0.261]
return tuple(mean), tuple(std)
mean, std_dev = get_mean_std_dev(train_mean_std_dev)
print(f"Mean : {mean} STD: {std_dev}") #Mean : [0.491 0.482 0.446] STD: [0.247 0.243 0.261]
!pip install -U albumentations
import albumentations as A
from albumentations.pytorch.transforms import ToTensorV2
def apply_transforms(mean,std_dev):
train_transforms = A.Compose([
A.HorizontalFlip(p=0.2),
A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, rotate_limit=10, p=0.2),
A.CoarseDropout(
max_holes=1, max_height=16, max_width=16, min_holes=1, min_height=16, min_width=16, fill_value=tuple((x * 255.0 for x in mean)), p=0.2,
),
A.ToGray(p=0.15),
A.Normalize(mean=mean, std=std_dev, always_apply=True),
ToTensorV2(),
])
test_transforms = A.Compose([
A.Normalize(mean=mean, std=std_dev, always_apply=True),
ToTensorV2(),
])
return lambda img: train_transforms(image=np.array(img))["image"], lambda img: test_transforms(image=np.array(img))["image"]
train_transforms, test_transforms = apply_transforms(mean,std_dev)
# transform = transforms.Compose(
# [transforms.ToTensor(),
# transforms.Normalize(mean, std_dev)])
batch_size = 128
trainset = datasets.CIFAR10(root='./data', train=True,
download=True, transform=train_transforms)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=batch_size,
shuffle=True, num_workers=2)
testset = datasets.CIFAR10(root='./data', train=False,
download=True, transform=test_transforms)
testloader = torch.utils.data.DataLoader(testset, batch_size=batch_size,
shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
###Markdown
Visualizing Images
###Code
import matplotlib.pyplot as plt # for visualizing images
import numpy as np
import torchvision
def imshow(img):
'''
function to show an image
'''
img = img / 2 + 0.5 # unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(batch_size)))
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
###Markdown
2. NetworkDefining CNN
###Code
# import torch
# import torch.nn as nn # for network
# import torch.nn.functional as F # for forward method
# drop_out_value = 0.1
# class Network(nn.Module):
# def __init__(self):
# super(Network,self).__init__() # extending super class method
# # Input block
# self.convblock_input= nn.Sequential(
# nn.Conv2d(3,32,3,padding=1), # In- 3x32x32, Out- 32x32x32, RF- 3x3, Jump_in -1, Jump_out -1
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # CONV BLOCK 1
# self.convblock1 = nn.Sequential(
# nn.Conv2d(32,32,3,padding=1), # In- 32x32x32, Out- 32x32x32, RF- 5x5, Jump_in -1, Jump_out -1
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# ,
# nn.Conv2d(32,32,3,padding=1), # In- 32x32x32, Out- 32x32x32, RF- 7x7, Jump_in -1, Jump_out -1
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # TRANSITION BLOCK 1
# # STRIDED CONVOLUTION LAYER
# self.transitionblock1 = nn.Sequential(
# nn.Conv2d(32,32,3,stride=2,padding=1), # In- 32x32x32, Out- 32x16x16, RF- 9x9, Jump_in -1, Jump_out -2
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # CONV BLOCK 2
# # Depthwise Separable Convolution Layer
# self.convblock2 = nn.Sequential(
# nn.Conv2d(32,32,3,padding=1,groups=32),# In- 32x16x16, Out- 32x16x16, RF- 13x13, Jump_in -2, Jump_out -2
# nn.Conv2d(32,32,1,padding=0), # In-32x16x16 , Out- 32x16x16, RF- 13x13, Jump_in -2, Jump_out -2
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# ,
# nn.Conv2d(32,32,3,padding=1), # In-32x16x16 , Out-32x16x16 , RF- 17x17, Jump_in -2, Jump_out -2
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # TRANSITION BLOCK 2
# # STRIDED CONVOLUTION LAYER
# self.transitionblock2 = nn.Sequential(
# nn.Conv2d(32,32,3,stride=2,padding=1), # In- 32x16x16, Out-32x8x8 , RF- 21x21, Jump_in -2, Jump_out -4
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # CONV BLOCK 3
# # Dilated Convolution Layer
# self.convblock3 = nn.Sequential(
# nn.Conv2d(32,32,3,padding=1,dilation=2),# In- 32x8x8, Out-32x6x6 , RF- 29x29, Jump_in -4, Jump_out -4
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# ,
# nn.Conv2d(32,32,3,padding=1), # In-32x6x6 , Out- 32x6x6, RF- 37x37, Jump_in -4, Jump_out -4
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # TRANSITION BLOCK 3
# # STRIDED CONVOLUTION LAYER
# self.transitionblock3 = nn.Sequential(
# nn.Conv2d(32,32,3,stride=2,padding=1), # In-32x6x6 , Out-32x3x3 , RF- 45x45, Jump_in -4, Jump_out -8
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# )
# # CONV BLOCK 4
# # Depthwise Separable Convolution Layer
# self.convblock4 = nn.Sequential(
# nn.Conv2d(32,32,3,padding=1), # In- 32x3x3, Out-32x3x3 , RF- 61x61, Jump_in -8, Jump_out -8
# nn.ReLU(),
# nn.BatchNorm2d(32),
# nn.Dropout(drop_out_value)
# ,
# nn.Conv2d(32,32,3,padding=1,groups=32), # In-32x3x3 , Out-32x3x3 , RF- 77x77, Jump_in -8, Jump_out -8
# nn.Conv2d(32,10,1,padding=0) # In- 32x3x3, Out-10x3x3 , RF- 77x77, Jump_in -8, Jump_out -8
# # ,
# # nn.ReLU(),
# # nn.BatchNorm2d(10),
# # nn.Dropout(drop_out_value)
# )
# # Output BLOCK
# # GAP Layer
# self.gap = nn.AvgPool2d(3) # In- 10x3x3, Out-10x1x1 , RF- 77x77, Jump_in -8, Jump_out -8
# def forward(self, x):
# x = self.convblock_input(x)
# x = self.convblock1(x)
# x = self.transitionblock1(x)
# x = self.convblock2(x)
# x = self.transitionblock2(x)
# x = self.convblock3(x)
# x = self.transitionblock3(x)
# x = self.convblock4(x)
# x = self.gap(x)
# x = x.view(-1, 10)
# return F.log_softmax(x, dim=1)
###Output
_____no_output_____
###Markdown
Model ParamsChecking the model summary and number of parameters
###Code
device = torch.device("cuda" if cuda else "cpu")
print(device)
model = Network().to(device)
# print(model)
# !pip install torchsummary
from torchsummary import summary # for model summary and params
summary(model, input_size=(3, 32, 32))
###Output
cuda
----------------------------------------------------------------
Layer (type) Output Shape Param #
================================================================
Conv2d-1 [-1, 32, 32, 32] 896
ReLU-2 [-1, 32, 32, 32] 0
BatchNorm2d-3 [-1, 32, 32, 32] 64
Dropout-4 [-1, 32, 32, 32] 0
Conv2d-5 [-1, 32, 32, 32] 9,248
ReLU-6 [-1, 32, 32, 32] 0
BatchNorm2d-7 [-1, 32, 32, 32] 64
Dropout-8 [-1, 32, 32, 32] 0
Conv2d-9 [-1, 32, 32, 32] 9,248
ReLU-10 [-1, 32, 32, 32] 0
BatchNorm2d-11 [-1, 32, 32, 32] 64
Dropout-12 [-1, 32, 32, 32] 0
Conv2d-13 [-1, 32, 16, 16] 9,248
ReLU-14 [-1, 32, 16, 16] 0
BatchNorm2d-15 [-1, 32, 16, 16] 64
Dropout-16 [-1, 32, 16, 16] 0
Conv2d-17 [-1, 32, 16, 16] 320
Conv2d-18 [-1, 32, 16, 16] 1,056
ReLU-19 [-1, 32, 16, 16] 0
BatchNorm2d-20 [-1, 32, 16, 16] 64
Dropout-21 [-1, 32, 16, 16] 0
Conv2d-22 [-1, 32, 16, 16] 9,248
ReLU-23 [-1, 32, 16, 16] 0
BatchNorm2d-24 [-1, 32, 16, 16] 64
Dropout-25 [-1, 32, 16, 16] 0
Conv2d-26 [-1, 32, 8, 8] 9,248
ReLU-27 [-1, 32, 8, 8] 0
BatchNorm2d-28 [-1, 32, 8, 8] 64
Dropout-29 [-1, 32, 8, 8] 0
Conv2d-30 [-1, 32, 6, 6] 9,248
ReLU-31 [-1, 32, 6, 6] 0
BatchNorm2d-32 [-1, 32, 6, 6] 64
Dropout-33 [-1, 32, 6, 6] 0
Conv2d-34 [-1, 32, 6, 6] 9,248
ReLU-35 [-1, 32, 6, 6] 0
BatchNorm2d-36 [-1, 32, 6, 6] 64
Dropout-37 [-1, 32, 6, 6] 0
Conv2d-38 [-1, 32, 3, 3] 9,248
ReLU-39 [-1, 32, 3, 3] 0
BatchNorm2d-40 [-1, 32, 3, 3] 64
Dropout-41 [-1, 32, 3, 3] 0
Conv2d-42 [-1, 32, 3, 3] 9,248
ReLU-43 [-1, 32, 3, 3] 0
BatchNorm2d-44 [-1, 32, 3, 3] 64
Dropout-45 [-1, 32, 3, 3] 0
Conv2d-46 [-1, 32, 3, 3] 320
Conv2d-47 [-1, 10, 3, 3] 330
AvgPool2d-48 [-1, 10, 1, 1] 0
================================================================
Total params: 86,858
Trainable params: 86,858
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 3.97
Params size (MB): 0.33
Estimated Total Size (MB): 4.31
----------------------------------------------------------------
###Markdown
3. Training and Testing* includes test and train functions* includes loop function, where test can happen after each epoch is trained
###Code
import torch.optim as optim # for optimizer
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Training Function
train_losses = [] # to capture train losses over training epochs
train_accuracy = [] # to capture train accuracy over training epochs
def train(model,device, train_loader,optimizer,epoch):
model.train() # setting the model in training
pbar = tqdm(train_loader) # putting the iterator in pbar
correct = 0 # for accuracy numerator
processed =0 # for accuracy denominator
for batch_idx, (images,labels) in enumerate(pbar):
images, labels = images.to(device),labels.to(device)#sending data to CPU or GPU as per device
optimizer.zero_grad() # setting gradients to zero to avoid accumulation
y_preds = model(images) # forward pass, result captured in y_preds (plural as there are many images in a batch)
# the predictions are in one hot vector
loss = criterion(y_preds,labels) # capturing loss
train_losses.append(loss) # to capture loss over many epochs
loss.backward() # backpropagation
optimizer.step() # updating the params
preds = y_preds.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += preds.eq(labels.view_as(preds)).sum().item()
processed += len(images)
pbar.set_description(desc= f'Loss={loss.item()} Batch_id={batch_idx} Accuracy={100*correct/processed:0.2f}')
train_accuracy.append(100*correct/processed)
# Test Function
test_losses = [] # to capture test losses
test_accuracy = [] # to capture test accuracy
def test(model,device, test_loader):
model.eval() # setting the model in evaluation mode
test_loss = 0
correct = 0 # for accuracy numerator
with torch.no_grad():
for (images,labels) in test_loader:
images, labels = images.to(device),labels.to(device)#sending data to CPU or GPU as per device
outputs = model(images) # forward pass, result captured in outputs (plural as there are many images in a batch)
# the outputs are in batch size x one hot vector
test_loss = criterion(outputs,labels).item() # sum up batch loss
preds = outputs.argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct += preds.eq(labels.view_as(preds)).sum().item()
test_loss /= len(test_loader.dataset) # average test loss
test_losses.append(test_loss) # to capture loss over many batches
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.2f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test_accuracy.append(100*correct/len(test_loader.dataset))
EPOCHS = 170
# EPOCHS = 2
for epoch in range(EPOCHS):
print("EPOCH:", epoch+1)
train(model, device, trainloader, optimizer, epoch)
test(model, device, testloader)
###Output
EPOCH: 1
###Markdown
4. Checking results
###Code
import seaborn as sns
def plot_metrics(train_accuracy, train_losses, test_accuracy, test_losses):
sns.set(font_scale=1)
plt.rcParams["figure.figsize"] = (25,6)
# Plot the learning curve.
fig, (ax1,ax2) = plt.subplots(1,2)
ax1.plot(np.array(train_losses), 'b', label="Train Loss")
# Label the plot.
ax1.set_title("Train Loss")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Loss")
ax1.legend()
ax2.plot(np.array(train_accuracy), 'b', label="Train Accuracy")
# Label the plot.
ax2.set_title("Train Accuracy")
ax2.set_xlabel("Epoch")
ax2.set_ylabel("Loss")
ax2.legend()
plt.show()
# Plot the learning curve.
fig, (ax1,ax2) = plt.subplots(1,2)
ax1.plot(np.array(test_losses), 'b', label="Test Loss")
# Label the plot.
ax1.set_title("Test Loss")
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Loss")
ax1.legend()
ax2.plot(np.array(test_accuracy), 'b', label="Test Accuracy")
# Label the plot.
ax2.set_title("Test Accuracy")
ax2.set_xlabel("Epoch")
ax2.set_ylabel("Loss")
ax2.legend()
plt.show()
plot_metrics(train_accuracy, train_losses, test_accuracy, test_losses)
def show_predicted_actual(model, device, dataset, classes):
dataiter = iter(dataset)
images, labels = dataiter.next()
img_list = range(5, 10)
# print images
imshow(torchvision.utils.make_grid(images[img_list]))
print('GroundTruth: ', ' '.join('%5s' % classes[labels[j]] for j in img_list))
images = images.to(device)
outputs = model(images)
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]]
for j in img_list))
show_predicted_actual(model, device, testloader, classes)
def evaluate_classwise_accuracy(model, device, classes, test_loader):
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for images, labels in test_loader:
images, labels = images.to(device), labels.to(device)
outputs = model(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
evaluate_classwise_accuracy(model, device, classes, testloader)
###Output
Accuracy of plane : 82 %
Accuracy of car : 100 %
Accuracy of bird : 81 %
Accuracy of cat : 82 %
Accuracy of deer : 88 %
Accuracy of dog : 78 %
Accuracy of frog : 91 %
Accuracy of horse : 88 %
Accuracy of ship : 93 %
Accuracy of truck : 87 %
|
examples/asl_parser.ipynb | ###Markdown
Ground trith pose of Scanner
###Code
pose_scanner_leica = pd.read_csv(f'asl_laser/{world}/leica/pose_scanner_leica.csv')
T00 = pose_scanner_leica[' T00']
T01 = pose_scanner_leica[' T01']
T02 = pose_scanner_leica[' T02']
T03 = pose_scanner_leica[' T03']
T10 = pose_scanner_leica[' T10']
T11 = pose_scanner_leica[' T11']
T12 = pose_scanner_leica[' T12']
T13 = pose_scanner_leica[' T13']
T20 = pose_scanner_leica[' T20']
T21 = pose_scanner_leica[' T21']
T22 = pose_scanner_leica[' T22']
T23 = pose_scanner_leica[' T23']
T30 = pose_scanner_leica[' T30']
T31 = pose_scanner_leica[' T31']
T32 = pose_scanner_leica[' T32']
T33 = pose_scanner_leica[' T33']
N = len(T00)
T = []
for i in range(N):
t = [[T00[i], T01[i], T02[i], T03[i]],
[T10[i], T11[i], T12[i], T13[i]],
[T20[i], T21[i], T22[i], T23[i]],
[T30[i], T31[i], T32[i], T33[i]]]
T.append(t)
T = np.asarray(T)
assert T.shape == (N, 4, 4)
plt.plot(T[:, 0, 3], T[:, 1, 3], '.')
###Output
_____no_output_____
###Markdown
Load point clouds
###Code
import os
import open3d as o3d
import sys
# only needed for tutorial, monkey patches visualization
sys.path.append('/home/ruslan/subt/thirdparty/Open3D/examples/python/')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = not "CI" in os.environ
# pcs = [pd.read_csv(f'asl_laser/{world}/csv_global/PointCloud{i}.csv') for i in range(N)]
pcs = [pd.read_csv(f'asl_laser/{world}/csv_local/Hokuyo_{i}.csv') for i in range(N)]
i = np.random.choice(N)
pts = np.asarray([pcs[i]['x'], pcs[i]['y'], pcs[i]['z']]).T
assert pts.shape[1] == 3
plt.figure(figsize=(10, 10))
# plt.plot(pcs[0]['x'][::10], pcs[0]['y'][::10], '.')
plt.plot(pts[:, 0][::10], pts[:, 1][::10], '.')
pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(pts)
pcd = pcd.voxel_down_sample(voxel_size=0.5)
# Flip it, otherwise the pointcloud will be upside down
pcd.transform([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])
o3d.visualization.draw_geometries([pcd])
pcd.estimate_normals()
pcd.normalize_normals()
pcd.normals = o3d.utility.Vector3dVector(np.asarray(pcd.normals) / 4.)
pcd.orient_normals_consistent_tangent_plane(k=15)
o3d.visualization.draw_geometries([pcd], point_show_normal=True)
###Output
_____no_output_____ |
notebooks/HiCAT/3_Get images from HiCAT simulator with IrisAO.ipynb | ###Markdown
Getting images from HiCAT simulator and controlling the IrisAO
###Code
import matplotlib.pyplot as plt
from matplotlib.colors import LogNorm
import numpy as np
import hicat.simulators
%matplotlib inline
###Output
_____no_output_____
###Markdown
Instantiate hicat
###Code
hc = hicat.simulators.hicat_sim.HICAT_Sim()
hc.testbed_state
print(hc.describe())
###Output
_____no_output_____
###Markdown
Set the testbed into the correct hardware state
###Code
hc.pupil_maskmask = 'circular' # I will likely have to implement a new pupil mask
hc.iris_ao = 'iris_ao'
hc.apodizer = 'no_apodizer'
hc.lyot_stop = 'circular'
hc.detector = 'imager'
hc.testbed_state
###Output
_____no_output_____
###Markdown
Get an image from the simulator with flat IrisAO, unnormalized
###Code
# All the images
plt.figure(figsize=(14,14))
psf, waves = hc.calc_psf(display=True, return_intermediates=True)
hicat_psf = psf[0].data
print(type(hicat_psf))
print(hicat_psf.shape)
plt.figure(figsize=(10,10))
plt.imshow(np.log10(hicat_psf))
###Output
_____no_output_____
###Markdown
Get normalized coro PSF Calculate direct image
###Code
hc.include_fpm = False
psf_direct_data = hc.calc_psf()
psf_direct = psf_direct_data[0].data
norm = psf_direct.max()
plt.figure(figsize=(10,10))
plt.title('Direct PSF')
plt.imshow(np.log10(psf_direct), cmap='inferno')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Calculate normalized coro image
###Code
hc.include_fpm = True
psf_coro_data = hc.calc_psf()
psf_coro = psf_coro_data[0].data/norm
plt.figure(figsize=(10,10))
plt.title('Coro PSF')
plt.imshow(np.log10(psf_coro), cmap='inferno')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Aberrate a segment on the IrisAO
###Code
hc.iris_dm.flatten()
hc.iris_dm.set_actuator(0, 50e-9, 0, 0)
plt.figure(figsize=(14,14))
one_seg_data, inter = hc.calc_psf(display=True, return_intermediates=True)
one_seg = one_seg_data[0].data
# Display PSF and IrisAo OPD
plt.figure(figsize=(7, 7))
plt.imshow(one_seg/norm, norm=LogNorm(), cmap='inferno')
plt.colorbar()
plt.figure(figsize=(7, 7))
inter[1].display(what='phase')
###Output
_____no_output_____
###Markdown
Aberrate pair of segments0 is the center segment
###Code
hc.iris_dm.flatten()
seg1 = 4
seg2 = 23
ampl = 50e-9 # nm
hc.iris_dm.set_actuator(seg1, ampl, 0, 0)
hc.iris_dm.set_actuator(seg2, ampl, 0, 0)
pair_psf_data, inter = hc.calc_psf(return_intermediates=True)
pair_psf = pair_psf_data[0].data
plt.figure(figsize=(7, 7))
plt.imshow(pair_psf/norm, norm=LogNorm(), cmap='inferno')
plt.colorbar()
plt.figure(figsize=(7, 7))
inter[1].display(what='phase')
###Output
_____no_output_____
###Markdown
Create DH mask and measure mean contrast
###Code
# lifted from util
def create_dark_hole(pup_im, iwa, owa, samp):
"""
Create a dark hole on pupil image pup_im.
:param pup_im: np.array of pupil image
:param iwa: inner working angle in lambda/D
:param owa: outer working angle in lambda/D
:param samp: sampling factor
:return: dh_area: np.array
"""
circ_inner = circle_mask(pup_im, pup_im.shape[0]/2., pup_im.shape[1]/2., iwa * samp) * 1 # *1 converts from booleans to integers
circ_outer = circle_mask(pup_im, pup_im.shape[0]/2., pup_im.shape[1]/2., owa * samp) * 1
dh_area = circ_outer - circ_inner
return dh_area
def circle_mask(im, xc, yc, rcirc):
""" Create a circle on array im centered on xc, yc with radius rcirc; inside circle equals 1."""
x, y = np.shape(im)
newy, newx = np.mgrid[0:y,0:x]
circ = (newx-xc)**2 + (newy-yc)**2 < rcirc**2
return circ
def dh_mean(im, dh):
"""
Return the dark hole contrast.
Calculate the mean intensity in the dark hole area dh of the image im.
im and dh have to have the same array size and shape.
:param im: array, normalized (by direct PSF peak pixel) image
:param dh: array, dark hole mask
"""
darkh = im * dh
con = np.mean(darkh[np.where(dh != 0)])
return con
iwa = 6
owa = 11
sampling = 3.1364
dh_mask = create_dark_hole(pair_psf, iwa, owa, sampling)
plt.figure(figsize=(10, 5))
plt.subplot(1, 2, 1)
plt.imshow(dh_mask)
plt.title('dh_mask')
plt.subplot(1, 2, 2)
plt.imshow(pair_psf/norm, norm=LogNorm())
plt.imshow(dh_mask, alpha=0.5)
plt.title('Dark hole')
plt.figure(figsize=(7, 7))
plt.imshow(pair_psf/norm*dh_mask, norm=LogNorm(), cmap='inferno')
plt.colorbar()
contrast = dh_mean(pair_psf/norm, dh_mask)
print(contrast)
hc.display_pupil_overlaps()
###Output
_____no_output_____ |
Prediction/MobilityData_Report(2020).ipynb | ###Markdown
***Understanding the data :***Data collected from google maps , locationThe **baseline day** is the median value from the 5‑week period Jan 3 – Feb 6, 2020.Parks typically means official national parks and not the general outdoors found in rural areas. **Parks**Mobility trends for places like local parks, national parks, public beaches, marinas, dog parks, plazas, and public gardens.**Transit stations** Mobility trends for places like public transport hubs such as subway, bus, and train stations.**Retail & recreation** Mobility trends for places like restaurants, cafes, shopping centers, theme parks, museums, libraries, and movie theaters.**Residential**Mobility trends for places of residence.**Workplaces**Mobility trends for places of work.AttributionIMP - If you publish results based on this data set, please cite as:Google LLC "Google COVID-19 Community Mobility Reports".https://www.google.com/covid19/mobility/ Accessed: .
###Code
import pandas as pd
#importing dataset
df = pd.read_csv('/content/drive/MyDrive/Covid Dashboard Project/Raw Data/Maharashtra_MobilityData_2020.csv')
df
df.drop(['country_region_code', 'country_region','metro_area', 'iso_3166_2_code', 'census_fips_code', 'place_id'], axis = 1,inplace = True)
df
df.isnull().sum() #here 321 because it counts values of entire maharashtra
df[df.isna().any(axis=1)]
pd.isnull(df['retail_and_recreation_percent_change_from_baseline'])
import matplotlib.pyplot as plt
df[0:321].plot(x='date', y=['retail_and_recreation_percent_change_from_baseline','grocery_and_pharmacy_percent_change_from_baseline','parks_percent_change_from_baseline','transit_stations_percent_change_from_baseline', 'workplaces_percent_change_from_baseline', 'residential_percent_change_from_baseline'] ,figsize = (15,8))
plt.title("Visiting places percent change from baseline for MAHARASHTRA")
plt.xlabel("Date")
plt.ylabel("Percent change (%)") #0 denotes baseline i. e. no change
plt.show()
import matplotlib.pyplot as plt
df[321:].plot(x='sub_region_2', y=['retail_and_recreation_percent_change_from_baseline','grocery_and_pharmacy_percent_change_from_baseline','parks_percent_change_from_baseline','transit_stations_percent_change_from_baseline', 'workplaces_percent_change_from_baseline', 'residential_percent_change_from_baseline'] ,figsize = (15,8))
plt.title("Visiting places percent change from baseline for Subregions")
plt.xlabel("Subregions")
plt.ylabel("Percent change (%)") #0 denotes baseline i. e. no change
plt.show()
import matplotlib.pyplot as plt
df[df.sub_region_2 == 'Pune'].plot(x='date', y=['retail_and_recreation_percent_change_from_baseline','grocery_and_pharmacy_percent_change_from_baseline','parks_percent_change_from_baseline','transit_stations_percent_change_from_baseline', 'workplaces_percent_change_from_baseline', 'residential_percent_change_from_baseline'],figsize = (15,8))
plt.title("Visiting places percent change from baseline for PUNE")
plt.xlabel("Date")
plt.ylabel("Percent change (%)") #0 denotes baseline i. e. no change
plt.show()
import matplotlib.pyplot as plt
df[df.sub_region_2 == 'Jalgaon'].plot(x='date', y=['retail_and_recreation_percent_change_from_baseline','grocery_and_pharmacy_percent_change_from_baseline','parks_percent_change_from_baseline','transit_stations_percent_change_from_baseline', 'workplaces_percent_change_from_baseline', 'residential_percent_change_from_baseline'],figsize = (15,8))
plt.title("Visiting places percent change from baseline for JALGAON")
plt.xlabel("Date")
plt.ylabel("Percent change (%)") #0 denotes baseline i. e. no change
plt.show()
###Output
_____no_output_____ |
Lab5_Hadoop/Tutorial - Hadoop 1.ipynb | ###Markdown
Tutorial: Hadoop and Hadoop Distributed File System (HDFS)In this tutorial, you will:* Create a MapReduce task using the Hadoop "Streaming" API (Python)* Import a spam dataset into the Hadoop Distributed File System (HDFS)* Run a MapReduce task using HDFS Setup* This tutorial expects you to be using the COMP6235 Virtual Machine for VirtualBox. No support is provided for other solutions. Setup instructions are available at http://edshare.soton.ac.uk/id/document/324163* Run "run-jupyter" to start Jupyter Notebook* Download the .ipynb file at http://edshare.soton.ac.uk/19650/ and import it into Jupyter Refresher: What is Hadoop?Apache Hadoop is an open-source software framework for distributing the processing of large amounts of data across multiple machines. It has an emphasis on fault-tolerant processing of data on large clusters. Hadoop has three important components:**Hadoop Distributed File System** - A distributed file-system that stores data and facilitates the sharing of data between different machines in a Hadoop cluster (group of machines).**Hadoop YARN** - A platform for managing the computing resources available to Hadoop, notably performing the task of scheduling jobs to run on other machines.**Hadoop MapReduce** - Support for the MapReduce programming model for large-scale data processingAll of these are already set up and (mostly) configured in the Virtual Machine, though this tutorial will walk you through starting and using these tools. Firstly: Start a new terminalIn addition to running Notebooks, Jupyter is also capable of running a terminal, an interactive text-based interface to the Virtual machine. On the main menu on the `Home` page, you can start a new terminal by clicking on `New` -> `Terminal`. We'll be using this to run some of the commands necessary to configure Hadoop. Hadoop Modes of OperationHadoop has three main modes of operation:**Standalone Mode** - This is the default mode used by Hadoop. It's localised to the current machine, and doesn't use HDFS, instead reading files from the local filesystem. It's primarily used for debugging.**Pseudo-Distributed Mode** - This is where Hadoop uses a cluster consisting of only a single machine, with every Hadoop daemon (a type of program that sits there doing work in that background) running on that machine. This is mainly used for testing the Hadoop setup. **Fully-Distributed Mode** - This is where data and processing is split between multiple machines. This enables Hadoop to horizontally scale and leverage the resources of multiple machines. This is the main mode used by Hadoop in production.In this tutorial, we'll only be using Standalone and Pseudo-Distributed modes. MapReduceMapReduce is a programming model used by Hadoop to process large amounts of data in parallel. It accepts input data in the form of a set of key-value pairs . It divides this set into individual chunks and assigns them as tasks to be processed on individual machines. It works in two phases: A Map phase and a Reduce phase.The Map phase takes these key-value pairs in the form and maps (processes them) into other, intermediate key-value pairs .These pairs are then sorted by their key, and passed into the Reduce phase.The Reduce phase takes these keys and produces a third (smaller) set of keys, combining the elements from the intermediate pairs that share a common key.In summary:**** is *mapped* to **** which is *reduced* to a smaller set of ****.Don't worry if it's all a bit abstract - there'll be examples in the rest of the tutorial. 
Importing dataThe first thing we're going to do, is download some data. We will store this on our VMs, but could represent data which is remote, or in a datacentre somewhere. Run the following code:
###Code
%%bash
wget https://archive.ics.uci.edu/ml/machine-learning-databases/00380/YouTube-Spam-Collection-v1.zip \
-O YouTube-Spam-Collection-v1.zip
unzip -o YouTube-Spam-Collection-v1.zip
ls -lh *.csv
###Output
_____no_output_____
###Markdown
Having downloaded the data, we want to be able to do a MapReduce task on it. To do this, we will use the Hadoop Streaming API, which allows us to write Python code rather than the usual Java. When we call the Hadoop process, we pass two Python files to the command - one which maps, and one which reduces.First, let's look at the data:
###Code
%%bash
head -n 10 Youtube04-Eminem.csv
###Output
_____no_output_____
###Markdown
Check that Hadoop is runningThe next thing to do is to check that we have Hadoop installed and running. Open a terminal, and type in: hadoop version This should show you that the version you have is Hadoop 2.8.5. Word countingNow we have our CSV files, let's get started processing them.The first thing we want to do is to set up a MapReduce function which will allow us to count the number of each individual word from the `comment` field of the file.The streaming API uses streams, which means that the information passed in to the map process is information from the output of one of our CSV files, and the data is then passed between the map and reduce process is output which is printed to stdout.The streaming API provides us a stream of data to the program's "standard input", more commonly called "stdin". In this case, we'll get each of the lines of our CSV file as the input to the mapper. The mapper will then then process this, and put it to "standard output" or "stdout". This will then be used as the input to the reduce process, and so on.It helps to first think of what the inputs and outputs of each stage of the process are. For word counting, we could do something like this:**Line of CSV Data** is mapped to **** is mapped to ****By default, the Streaming API uses a *tab character* as a seperator between the key and the value. The output of your map function might look something like: "Banana\t1" or "Banana 1"Some code has been provided to you below, including the libraries used in our answer. However, other solutions are possible that do not use these libraries.The cells below use the %%writefile magic keyword to write their contents to a file, instead of executing them.If you wish to execute them, comment this out with a ``.
###Code
%%writefile mapper.py
#!/usr/bin/env python
# MAPPER
import csv
import sys
import re
lines = sys.stdin.readlines()
csvreader = csv.reader(lines)
# YOUR CODE GOES BELOW
# Example output
# print(token + "\t" + "1")
%%writefile reducer.py
#!/usr/bin/env python
# REDUCER
import sys
from collections import defaultdict
# Keep simple example in for now, switch to stdin later
input_pairs = [
'+447935454150 1',
'lovely 1',
'girl 1',
'talk 1',
'to 1',
'me 1'
'xxx 1'
]
# Once we test this with streams, we can uncomment this next line
# input_pairs = sys.stdin.readlines()
# YOUR CODE GOES BELOW
###Output
_____no_output_____
###Markdown
Ensure the above files have been written to two files: `mapper.py` and `reducer.py`. The easiest way to do this is make sure the `%%writefile mapper/reducer.py` lines are uncommented, then run the cell.Since the mapper and reducer accept a stream into their stdin and output to stdout, we can test whether the scripts above work as a pipeline without using Hadoop!The below command reads the .csv file, then pipes the output to `mapper.py`'s stdin. The mapper's output is then piped to `reducer.py`, and so on.
###Code
%%bash
cat Youtube04-Eminem.csv | ./mapper.py | ./reducer.py | sort
###Output
_____no_output_____
###Markdown
Now we've tested our pipeline works, it's time to integrate it into hadoop. The below commands clear the output folder, ensure Hadoop is in standalone mode, then run our pipeline.The parameters are as follows:`-files` - Ensures these files are provided to every machine in our cluster.`-input` - The data sources to be passed to the pipeline.`-mapper` - The mapper to use.`-reducer` - The reducer to use.`-output` - The output folder.Test out your pipeline by running the command below!
###Code
%%bash
# Clear output
rm -rf output
# Make sure hadoop is in standalone mode
hadoop-standalone-mode.sh
# Main pipeline command
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
-files mapper.py,reducer.py \
-input Youtube04-Eminem.csv \
-mapper ./mapper.py \
-reducer ./reducer.py \
-output output
###Output
_____no_output_____
###Markdown
Setting up HDFSNow you've had a chance to use Hadoop in standalone mode, it's time we set it to pseudo-distributed mode and set up HDFS.To speed things up, some commands have been provided to easily configure Hadoop. Start by running in your terminal: hadoop-pseudo-distributed-mode.sh If you're curious how this works, feel free to have a read of the code, you'll find it in `~/vm_creation/scripts`. We should now have HDFS configured for pseudo-distributed mode. We will now need to create a disk for HDFS, which will use the configurations we just set: hdfs namenode -format Starting servicesNow we need to start the different services and we can get to work! Run the following command in the terminal to start the HDFS: start-dfs.shYou'll also need to start YARN in order to run any MapReduce jobs, so let's do that now: start-yarn.sh To see what this has left you with, you can see the processes which are running on the JVM by running the `jps` command:
###Code
%%bash
jps
###Output
_____no_output_____
###Markdown
You should see something similar to the following:```XXXX ResourceManagerXXXX SecondaryNameNodeXXXX NameNodeXXXX DataNodeXXXX NodeManager```If any of these aren't running, double check that you've run all of the above commands. If any are still missing, you may encounter errors later, so please contact one of the demonstrators. Now that we have a HDFS disk, and the appropriate Hadoop services running in pseudo-distributed mode, we can start to import the data into the new HDFS filesystem and run the MapReduce task there. Fully-distributed mode runs on the exact same principles described below, so we could apply MapReduce over various machines.In summary, we need to: * Create a directory for the input* Import the data from the local file to the HDFS datanode* Run the MapReduce job* View the outputThe Commands on HDFS are similar to standard linux CLI commands, except for the fact that they are prefixed by either `hadoop fs` or `hdfs dfs`.The `hadoop fs` command is more general, as it can cope with different types of filesystem, such as the one on the local disk. As such, this is a better choice to use for commands relating solely to HDFS.The command to create a directory is `-mkdir`. Create a directory `/input` on the HDFS system. Use the `hdfs dfs` command below to achieve this.
###Code
%%bash
# YOUR CODE HERE
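# One possible answer (creates the /input directory used later in this notebook):
# hdfs dfs -mkdir /input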
###Output
_____no_output_____
###Markdown
Next, we need to import our data into HDFS. Here, we are dealing with two different filesystems: the local system and the HDFS node, so we will use `hadoop fs` with the `-copyFromLocal` command. This command copies files from the local filesystem to HDFS, accepting two arguments: the file source and the destination. HDFS filesystems are identified by a URI prefixed with `hdfs://`, and the `hdfs dfs` and `hadoop fs` commands will normally expect to see one. If it is not specified, the default location of the filesystem is taken from `core-site.xml`, which is one of the config files we imported earlier. The value can be seen with the following command:
###Code
%%bash
cat $HADOOP_HOME/etc/hadoop/core-site.xml
###Output
_____no_output_____
###Markdown
If you are interested in learning more about the configuration options we have specified for Hadoop, check out the Hadoop documentation, as well as the `~/vm_creation/hadoop` folder. For `-copyFromLocal` we can either specify `hdfs://localhost:9000/` or leave the host out and use `hdfs:///` instead. For example, `hdfs://localhost:9000/input` and `hdfs:///input` refer to the same location. The local file can be specified with a relative path, leaving the import command as one of the variants below. Pick one and execute it in the cell below.
###Code
%%bash
# With fully specified URI
hadoop fs -copyFromLocal *.csv hdfs://localhost:9000/input
# Explicit HDFS, but with the default host
# hadoop fs -copyFromLocal *.csv hdfs:///input
# Implied URI based on default
# hadoop fs -copyFromLocal *.csv /input
###Output
_____no_output_____
###Markdown
Next, we'll check that the files have been successfully imported.
###Code
%%bash
hadoop fs -ls /input
###Output
_____no_output_____
###Markdown
Perform the same for the `mapper.py` and `reducer.py` files we created for the MapReduce task earlier, keeping those in the `/input` directory as well. You may need to add `-p` and `-f` as options. These options preserve file permissions and force the new files to overwrite any existing files, respectively.
###Code
%%bash
# YOUR CODE HERE
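# One possible answer (assumes mapper.py and reducer.py are in the current local directory):
# hadoop fs -copyFromLocal -p -f mapper.py reducer.py /input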
###Output
_____no_output_____
###Markdown
Now we run the hadoop command again, this time sourcing our files from HDFS instead of the local filesystem. Note: If you run this command more than once, Hadoop will throw an error due to the output directory already existing. You may need to erase the existing directory or output to one with a different name.
###Code
%%bash
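# If you have run this cell before, remove the old output directory first, e.g.:
# hadoop fs -rm -r /output_2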
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
-files hdfs:///input/mapper.py,hdfs:///input/reducer.py \
-input hdfs:///input/Youtube04-Eminem.csv \
-mapper ./mapper.py \
-reducer ./reducer.py \
-output hdfs://localhost:9000/output_2
###Output
_____no_output_____
###Markdown
In the cell below, write a command to view the files listed in the `/output_2` directory.
###Code
%%bash
# YOUR CODE HERE
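# One possible answer:
# hadoop fs -ls /output_2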
###Output
_____no_output_____
###Markdown
The `_SUCCESS` file indicates that the job was a success, which is good. The other file, `part-00000`, contains the result. Write code in the cell below to get the output (from HDFS).
###Code
%%bash
# YOUR CODE HERE
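# One possible answer (part-00000 is the reducer output file listed above):
# hadoop fs -cat /output_2/part-00000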
###Output
_____no_output_____
###Markdown
You can include multiple `-input` parameters to operate on more than one file. Update the streaming command above to include all 5 files in the cell below. Make sure you include a new output directory!
###Code
%%bash
# Update this command to include multiple files
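# One approach: add one -input line per CSV you copied into /input, e.g.
#   -input hdfs:///input/Youtube01-Psy.csv \
# (the exact filenames depend on which CSVs are in your dataset)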
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
-files hdfs:///input/mapper.py,hdfs:///input/reducer.py \
-input hdfs://localhost:9000/input/Youtube04-Eminem.csv \
-mapper ./mapper.py \
-reducer ./reducer.py \
-output hdfs://localhost:9000/output_8
###Output
_____no_output_____ |
test/BBC news categorizer.ipynb | ###Markdown
Building a categorizer with the BBC news public dataset * Follows the Kaggle below as-is * https://www.kaggle.com/bbose71/bbc-news-classification
###Code
import os
import sys
import glob
import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
from IPython.core.debugger import set_trace
from IPython.display import display
from pathlib import Path
from smart_open import open
from gensim.utils import simple_preprocess
from tqdm.auto import tqdm
tqdm.pandas()
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2
from sklearn.model_selection import cross_val_score, ShuffleSplit, train_test_split
from sklearn.manifold import TSNE
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.externals import joblib
import seaborn as sns
%matplotlib inline
data = []
for cat in tqdm(os.listdir('bbc')):
for fname in glob.glob('bbc/' + cat + '/*.txt'):
file = Path(fname)
iid = cat + '-' + file.name.split('.')[0]
try:
txt = file.read_text(encoding='latin-1')
except:
print(iid)
data.append([iid, txt, cat])
df = pd.DataFrame(data, columns=['ArticleId', 'Text', 'Category']); df
df['category_id'] = df['Category'].factorize()[0]
category_id_df = df[['Category', 'category_id']].drop_duplicates().sort_values('category_id'); category_id_df
category_to_id = dict(category_id_df.values); category_to_id
id_to_category = dict(category_id_df[['category_id', 'Category']].values); id_to_category
df.sample(5, random_state=0)
df.groupby('Category').category_id.count().plot.bar(ylim=0)
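# TF-IDF features over unigrams and bigrams; sublinear_tf dampens raw term counts,
# min_df=5 drops rare terms, and common English stop words are removed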
tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1', ngram_range=(1,2), stop_words='english')
features = tfidf.fit_transform(df.Text).toarray()
labels = df.category_id; features.shape
N = 3
for Category, category_id in tqdm(sorted(category_to_id.items())):
features_chi2 = chi2(features, labels==category_id)
indices = np.argsort(features_chi2[0])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in feature_names if len(v.split(' '))==1]
bigrams = [v for v in feature_names if len(v.split(' '))==2]
print('# {}'.format(Category))
print(' .Most correlated unigrams:\n .{}'.format('\n .'.join(unigrams[-N:])))
print(' .Most correlated bigrams:\n .{}'.format('\n .'.join(bigrams[-N:])))
SAMPLE_SIZE = int(len(features)*0.3)
np.random.seed(0)
indices = np.random.choice(range(len(features)), size=SAMPLE_SIZE, replace=False)
projected_features = TSNE(n_components=2, random_state=0).fit_transform(features[indices]); projected_features.shape
category_id_test = 0
projected_features[labels[indices]==category_id_test]
colors = ['pink', 'green', 'midnightblue', 'orange', 'darkgrey']
for category, category_id in sorted(category_to_id.items()):
points = projected_features[labels[indices]==category_id]
plt.scatter(points[:,0], points[:,1], s=30, c=colors[category_id], label=category)
plt.title('TF-IDF feature vector for each article\n (projected on 2 dimensions)', fontdict=dict(fontsize=15))
plt.legend();
models = [
RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
MultinomialNB(),
#SVC(kernel='rbf', probability=True, gamma='auto', C=5),
SGDClassifier(random_state=0, max_iter=200, tol=1e-3, loss='log'),
LogisticRegression(random_state=0, solver='lbfgs', multi_class='auto')
]
CV = 5
shufflesplit = ShuffleSplit(n_splits=CV, test_size=0.2, random_state=0)
entries = []
for model in tqdm(models):
model_name = model.__class__.__name__
accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=shufflesplit)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, index=range(CV*len(models)), columns=['model_name', 'fold_idx', 'accuracy']); cv_df
cv_df.groupby('model_name').accuracy.mean()
sns.boxplot(x='model_name', y='accuracy', data=cv_df)
sns.stripplot(x='model_name', y='accuracy', data=cv_df, size=8, jitter=True, edgecolor='gray', linewidth=2)
# model = LogisticRegression(random_state=0, solver='lbfgs', multi_class='auto')
model = SGDClassifier(random_state=0, max_iter=200, tol=1e-3, loss='log')
X_train, X_test, y_train, y_test, indices_train, indices_test = train_test_split(features, labels, df.index, test_size=0.33, random_state=0)
model.fit(X_train, y_train)
y_pred_proba = model.predict_proba(X_test)
y_pred = model.predict(X_test)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
y_pred_proba
conf_mat = confusion_matrix(y_test, y_pred); conf_mat
sns.heatmap(conf_mat, annot=True, fmt='d', xticklabels=category_id_df.Category.values, yticklabels=category_id_df.Category.values)
plt.ylabel('Actual')
plt.xlabel('Predicted')
for predicted in category_id_df.category_id:
for actual in category_id_df.category_id:
if predicted != actual and conf_mat[actual, predicted] >= 2:
print('{} predicted as {}: {} examples'.format(id_to_category[actual], id_to_category[predicted], conf_mat[actual, predicted]))
display(df.loc[indices_test[(y_test==actual) & (y_pred==predicted)]]['Text'])
model.fit(features, labels)
N = 3
for Category, category_id in tqdm(sorted(category_to_id.items())):
indices = np.argsort(model.coef_[category_id])
feature_names = np.array(tfidf.get_feature_names())[indices]
unigrams = [v for v in feature_names if len(v.split(' '))==1]
bigrams = [v for v in feature_names if len(v.split(' '))==2]
print('# {}'.format(Category))
print(' .Most correlated unigrams:\n .{}'.format('\n .'.join(unigrams[-N:])))
print(' .Most correlated bigrams:\n .{}'.format('\n .'.join(bigrams[-N:])))
fnames_test = glob.glob('newsdata/downloaded/*.json')[-20:]
test_data = []
for fname in fnames_test:
with open(fname, encoding='UTF-8-sig') as f:
content = json.load(f)
test_data.append(content['text'])
fnames_test
test_features = tfidf.transform(test_data)
Y_pred = model.predict(test_features); Y_pred
Y_pred_proba = model.predict_proba(test_features).max(axis=1); Y_pred_proba
[id_to_category[cat_id] for cat_id in Y_pred]
model.cat = id_to_category
model.tfidf = tfidf
model_name = 'categorizer.model'
joblib.dump(model, model_name)
_model = joblib.load(model_name)
probas = _model.predict_proba(test_features).max(axis=1); probas
[_model.cat[cat_id] for cat_id in _model.predict(test_features)]
_model = joblib.load(model_name)
df_text = pd.DataFrame(test_data, columns=['text'])
_features = _model.tfidf.transform(df_text.text)
_proba = _model.predict_proba(_features).max(axis=1)
_pred = _model.predict(_features)
_cats = [_model.cat[cat_id] for cat_id in _pred]
df_text['category'] = _cats
df_text['category_proba'] = _proba; df_text
df_text = pd.DataFrame(test_data, columns=['text'])
_features = _model.tfidf.transform(df_text.text)
_proba = _model.predict_proba(_features).max(axis=1)
_pred = _model.predict(_features)
_cats = [_model.cat[cat_id] for cat_id in _pred]
df_text['category'] = _cats
df_text['category_proba'] = _proba; df_text
df_text.text.iloc[2]
###Output
_____no_output_____ |
Sequences, Time Series and Prediction/S+P_Week_4_Exercise_Question.ipynb | ###Markdown
###Code
!pip install tensorflow-gpu==2.0.0
import tensorflow as tf
print(tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
!wget --no-check-certificate \
https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv \
-O /tmp/daily-min-temperatures.csv
import csv
time_step = []
temps = []
with open('/tmp/daily-min-temperatures.csv') as csvfile:
# YOUR CODE HERE. READ TEMPERATURES INTO TEMPS
# HAVE TIME STEPS BE A SIMPLE ARRAY OF 1, 2, 3, 4 etc
reader = csv.reader(csvfile, delimiter=',')
next(reader) # skip first row with column names
index = 0
for row in reader:
temps.append(float(row[1]))
time_step.append(int(index))
index = index + 1
series = np.array(temps)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
split_time = 2500
time_train = time[:split_time] # YOUR CODE HERE
x_train = series[:split_time]# YOUR CODE HERE
time_valid = time[split_time:]# YOUR CODE HERE
x_valid = series[split_time:]# YOUR CODE HERE
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
# YOUR CODE HERE
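  # add a trailing feature dimension (needed by the Conv1D layer), slice the
  # series into overlapping windows of window_size+1 values, shuffle them, and
  # split each window into (inputs, targets shifted forward by one step)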
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size+1, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size+1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[1:]))
return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
# YOUR CODE HERE
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size, shift=1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size))
ds = ds.batch(32).prefetch(1)
forecast = model.predict(ds)
return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 64
batch_size = 256
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(train_set)
print(x_train.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 60])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=64, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=200) # YOUR CODE HERE
# EXPECTED OUTPUT SHOULD SEE AN MAE OF <2 WITHIN ABOUT 30 EPOCHS
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
# EXPECTED OUTPUT. PLOT SHOULD SHOW PROJECTIONS FOLLOWING ORIGINAL DATA CLOSELY
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
# EXPECTED OUTPUT MAE < 2 -- I GOT 1.789626
print(rnn_forecast)
# EXPECTED OUTPUT -- ARRAY OF VALUES IN THE LOW TEENS
###Output
_____no_output_____ |
docs/tutorial5_updates.ipynb | ###Markdown
Tutorial 5: Data and package updates The datasets that cptac distributes are still being actively worked on by the teams that generated them. Additionally, we periodically make improvements to the cptac package itself. Thus, we regularly release new versions of the data and the package. This tutorial will go over how to access both those data and package updates. Note: In this tutorial, we intentionally get cptac to generate the various errors and warnings it gives when your data or package is out of date. We do this on purpose, so you can see what it looks like; the tutorial is not broken. Updating the package Each time you import cptac into a Python environment, it automatically checks whether you have the most recent release of the package. If you don't, it will print a warning like this:
###Code
import cptac
###Output
Warning: Your version of cptac (0.6.2) is out-of-date. Latest is 0.6.3. Please run 'pip install --upgrade cptac' to update it. (/home/caleb/anaconda3/envs/cptac-dev/lib/python3.7/site-packages/ipykernel_launcher.py, line 1)
###Markdown
As the warning directs, simply run `pip install --upgrade cptac` to get the latest version of the package. This will ensure that you have all the latest functionality of the package, and that you're able to access the latest versions of all the datasets. Watching the repository for new releases Each time there's a new version of the package, we release the new version on PyPI, and also post a release page on GitHub. You can use GitHub's "Watch" feature to get an email sent to you every time we do this. Simply log in to GitHub, browse to the main page for our repository, click on the "Watch" button in the upper right corner of the page, and select the "Releases only" option from the drop-down box, as shown below. You will then get an email every time we release another version of the package. Accessing data updates Periodically, there will be data updates released for different datasets. cptac automatically checks for this whenever you load a dataset, and if you don't manually specify a version when loading a dataset, it will raise an exception if your latest installed version of the data doesn't match the latest data version that's released. The error message will give you instructions for downloading the new data version. Note: The error information below is rather long. This is because Jupyter Notebooks automatically prints the entire stack trace that accompanies an error. The informative error message is at the bottom. If you were using cptac in the command line or in a script, only the informative error message at the bottom would be printed.
###Code
gb = cptac.Gbm()
###Output
###Markdown
To download the new data version, run the cptac.download function as the error message directs. cptac will notify you that it is downloading new data.
###Code
cptac.download(dataset="gbm", version="latest")
###Output
Checking that index is up-to-date...
###Markdown
You can then load the dataset, and cptac will automatically load the latest data version.
###Code
gb = cptac.Gbm()
gb.version()
###Output
###Markdown
Accessing old data versions after updates After you have updated a dataset, you can still access old versions of the data. This is helpful, for example, if you want to compare your analyses between data versions. To load an older version of the data, simply pass the desired version number to the version parameter when loading the dataset:
###Code
gb = cptac.Gbm(version="1.0")
gb.version()
###Output
Loading dataframes...
|
pper_notebooks/Cumulative_Days.ipynb | ###Markdown
Cumulative day analysis
###Code
import matplotlib.pyplot as plt
import numpy as np
import xarray as xr
%matplotlib inline
cities = {'Bismarck, ND': (-100.773703, 46.801942),
'Minneapolis, MN': (-93.2650, 44.9778),
'Albany, NY': (-73.7562, 42.6526),
'Omaha, NE': (-95.9345, 41.2565),
'Columbus, OH': (-82.9988, 39.9612),
'Denver, CO':(-104.9903, 39.7392),
'St. Louis, MO': (-90.1994, 38.6270),
'Charlotte, NC': (-80.8431, 35.2271),
'Oklahoma City, OK':(-97.5164, 35.4676),
'Tuscaloosa, AL': (-87.5692, 33.2098),
'San Antonio, TX': (-98.4936, 29.4241),
'Orlando, FL': (-81.3792, 28.5383),
}
out_dir = "../figures/cumulative_days/"
###Output
_____no_output_____
###Markdown
Read in the lat/lon data from the NAM 212 grid.
###Code
coords = xr.open_dataset('../data/nam212.nc')
lats = coords.gridlat_212.values
lons = coords.gridlon_212.values
###Output
_____no_output_____
###Markdown
Run through the city lats and lons, find the closest grid to that point, and save it in a dictionary
###Code
from scipy.spatial import KDTree
import numpy as np
city_lookup = {}
pts = np.stack([lons.ravel(), lats.ravel()], axis=1)
flons, flats = pts[:, 0], pts[:, 1]
test = np.zeros(shape=(lats.shape))
tree = KDTree(list(zip(flons, flats)))
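# for each city, find the nearest NAM-212 grid point (nearest in lon/lat space)
# and save its 2-D (y, x) array index for later lookups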
for key, value in cities.items():
loc = tree.query(value)
idx = np.unravel_index(loc[1], shape=lats.shape)
test[idx] = 1
city_lookup[key] = idx
print(idx, lons[idx], lats[idx], value)
city_lookup
###Output
(82, 92) -100.91847 46.8055 (-100.773703, 46.801942)
(76, 108) -93.06891 44.845203 (-93.265, 44.9778)
(73, 149) -73.676025 42.597034 (-73.7562, 42.6526)
(66, 102) -95.933105 41.39053 (-95.9345, 41.2565)
(63, 130) -83.031204 39.93604 (-82.9988, 39.9612)
(62, 82) -105.10153 39.699127 (-104.9903, 39.7392)
(58, 115) -90.01058 38.50195 (-90.1994, 38.627)
(50, 136) -80.90761 35.120552 (-80.8431, 35.2271)
(49, 98) -97.64347 35.33841 (-97.5164, 35.4676)
(44, 121) -87.63795 33.391075 (-87.5692, 33.2098)
(33, 96) -98.35003 29.529873 (-98.4936, 29.4241)
(32, 137) -81.27024 28.59184 (-81.3792, 28.5383)
###Markdown
Plot 5% Tornado data for the cities
###Code
colors = {'Bismarck, ND': "#e41a1c",
'Minneapolis, MN': "#377eb8",
'Albany, NY': "#4daf4a",
'Omaha, NE': "#984ea3",
'Columbus, OH': "#ff7f00",
'Denver, CO': "#000000",
'St. Louis, MO': "#e41a1c",
'Charlotte, NC': "#377eb8",
'Oklahoma City, OK': "#4daf4a",
'Tuscaloosa, AL': "#984ea3",
'San Antonio, TX': "#ff7f00",
'Orlando, FL': "#000000"}
lstyle = {'Bismarck, ND': "-",
'Minneapolis, MN': "-",
'Albany, NY': "-",
'Omaha, NE': "-",
'Columbus, OH': "-",
'Denver, CO': "-",
'St. Louis, MO': "--",
'Charlotte, NC': "--",
'Oklahoma City, OK': "--",
'Tuscaloosa, AL': "--",
'San Antonio, TX': "--",
'Orlando, FL': "--"}
def get_cumulative_count_city(ax, dset, name, var):
res = np.zeros(shape=(40, 365), dtype=int)
for year in range(1979, 2019):
dsub = dset.sel(time=slice(str(year) + '-01-01', str(year) + '-12-31'))
y, x = city_lookup[name]
if var != None:
vals = dsub[var].sel(y=y, x=x)
else:
vals = dsub.sel(y=y, x=x)
vals = vals.groupby('time.dayofyear').sum()
day_values = np.zeros(shape=(365), dtype=int)
if calendar.isleap(year):
values = vals.values
values[58] += values[59]
day_values[:59] = values[:59]
day_values[59:] = values[60:]
else:
day_values = vals.values
res[year-1979,:] = np.cumsum(day_values)
ax.plot([date.fromordinal(x) for x in list(range(736330, 736695))],
np.mean(res, axis=0), ls=lstyle[name], color=colors[name], lw=2, label=name)
return ax
import matplotlib.pyplot as plt
import xarray as xr
import calendar
from datetime import date
import numpy as np
import matplotlib.dates as mdates
monthsFmt = mdates.DateFormatter('%b')
months = mdates.MonthLocator()
%matplotlib inline
plt.rcParams['figure.figsize'] = 15, 10
plt.rcParams['ytick.labelsize'] = 20
plt.rcParams['xtick.labelsize'] = 20
dset = xr.open_dataset("../data/tor_day_footprints_79-18_sid-212_grid.nc")
ax = plt.subplot(1, 1, 1)
for key, val in cities.items():
print(key)
ax = get_cumulative_count_city(ax, dset, key, 'pp_05')
ax.set_xlabel("Day of Year", fontsize=20)
ax.set_ylabel("Mean Annual Cumulative Event Days", fontsize=20)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
ax.xaxis.set_minor_locator(mdates.DayLocator(interval=7))
#Use 2017 as a placeholder, we care only about day of year
ax.axvspan('2017-01-01', '2017-03-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-06-01', '2017-09-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-12-01', '2017-12-31', facecolor='0.5', alpha=0.1)
ax.set_xlim('2017-01-01', '2017-12-31')
plt.legend(prop={'size':18}, loc=2, bbox_to_anchor=(0, .95))
ax.set_ylim(0, 9)
plt.grid()
#ax.annotate("b)", xy=('2017-01-05',9.1), fontsize=35,
# bbox=dict(facecolor='w', edgecolor='k', pad=6.0), zorder=15)
ax.set_title('5% Tornado', fontsize=20)
plt.savefig(out_dir + "cumulative_tor_days_212.png", bbox_inches='tight', dpi=300)
###Output
Bismarck, ND
###Markdown
Plot Hail cumulative days
###Code
dset = xr.open_dataset("../data/hail_day_footprints_79-18_sid-212_grid.nc")
ax = plt.subplot(1, 1, 1)
for key, val in colors.items():
ax = get_cumulative_count_city(ax, dset, key, 'pp_15')
ax.set_xlabel("Day of Year", fontsize=20)
ax.set_ylabel("Mean Annual Cumulative Event Days", fontsize=20)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
ax.xaxis.set_minor_locator(mdates.DayLocator(interval=7))
ax.axvspan('2017-01-01', '2017-03-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-06-01', '2017-09-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-12-01', '2017-12-31', facecolor='0.5', alpha=0.1)
ax.set_xlim('2017-01-01', '2017-12-31')
plt.legend(prop={'size':18}, loc=2, bbox_to_anchor=(0, .95))
ax.set_ylim(0, 27)
plt.grid()
#ax.annotate("c)", xy=('2017-01-05',27), fontsize=35,
# bbox=dict(facecolor='w', edgecolor='k', pad=6.0), zorder=15)
ax.set_title('15% Hail', fontsize=20)
plt.savefig(out_dir + "cumulative_hail_days_212.png", bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Plot Wind cumulative days
###Code
dset = xr.open_dataset("../data/wind_day_footprints_79-18_sid-212_grid.nc")
ax = plt.subplot(1, 1, 1)
for key, val in colors.items():
ax = get_cumulative_count_city(ax, dset, key, 'pp_15')
ax.set_xlabel("Day of Year", fontsize=20)
ax.set_ylabel("Mean Annual Cumulative Event Days", fontsize=20)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
ax.xaxis.set_minor_locator(mdates.DayLocator(interval=7))
ax.axvspan('2017-01-01', '2017-03-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-06-01', '2017-09-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-12-01', '2017-12-31', facecolor='0.5', alpha=0.1)
ax.set_xlim('2017-01-01', '2017-12-31')
plt.legend(prop={'size':18}, loc=2, bbox_to_anchor=(0, .95))
ax.set_ylim(0, 23)
plt.grid()
#ax.annotate("d)", xy=('2017-01-05',23), fontsize=35,
# bbox=dict(facecolor='w', edgecolor='k', pad=6.0), zorder=15)
ax.set_title('15% Wind', fontsize=20)
plt.savefig(out_dir + "cumulative_wind_days_212.png", bbox_inches='tight', dpi=300)
###Output
_____no_output_____
###Markdown
Calculating "slight risk" occurrence
###Code
from copy import deepcopy
tor_dset = xr.open_dataset('../data/tor_day_footprints_79-18_sid-212_grid.nc')
hail_dset = xr.open_dataset('../data/hail_day_footprints_79-18_sid-212_grid.nc')
wind_dset = xr.open_dataset('../data/wind_day_footprints_79-18_sid-212_grid.nc')
slgt_tor = deepcopy(tor_dset['pp_05'])
slgt_hail = deepcopy(hail_dset['pp_15'])
slgt_wind = deepcopy(wind_dset['pp_15'])
#If tor, hail, OR wind is equal to 1, set that value to 1 in slight
slight = 1*((slgt_tor + slgt_hail + slgt_wind)>0)
ax = plt.subplot(1, 1, 1)
for key, val in colors.items():
ax = get_cumulative_count_city(ax, slight, key, None)
ax.set_xlabel("Day of Year", fontsize=20)
ax.set_ylabel("Mean Annual Cumulative Event Days", fontsize=20)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(monthsFmt)
ax.xaxis.set_minor_locator(mdates.DayLocator(interval=7))
ax.axvspan('2017-01-01', '2017-03-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-06-01', '2017-09-01', facecolor='0.5', alpha=0.1)
ax.axvspan('2017-12-01', '2017-12-31', facecolor='0.5', alpha=0.1)
ax.set_xlim('2017-01-01', '2017-12-31')
plt.legend(prop={'size':18}, loc=2, bbox_to_anchor=(0, .95))
ax.set_ylim(0, 34)
plt.grid()
#ax.annotate("a)", xy=('2017-01-05',34), fontsize=35,
# bbox=dict(facecolor='w', edgecolor='k', pad=6.0), zorder=15)
ax.set_title('5% Tornado or 15% Wind or 15% Hail', fontsize=20)
plt.savefig(out_dir + "cumulative_slight_days_212.png", bbox_inches='tight', dpi=300)
###Output
_____no_output_____ |
notebooks/experimental/MPC PubSub Bob - Prototype.ipynb | ###Markdown
Instructions This notebook is a prototype initial implementation of an MPC Tensor over IPFS's pubsub sockets. Run the Alice notebook first to start a server (it should be in the same folder as this one). Before running, make sure you have: - installed IPFS (https://ipfs.io/docs/install/) - run the command `ipfs daemon --enable-pubsub-experiment` - run `python3 setup.py install` from the root directory of the OpenMined/Grid project (this project). Then you're ready to run this notebook!
###Code
from grid import ipfsapi
import base64
import random
import torch
import keras
import json
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
import numpy as np
from grid.pubsub.base import PubSub
BASE = 10
KAPPA = 9 # ~29 bits
PRECISION_INTEGRAL = 2
PRECISION_FRACTIONAL = 7
PRECISION = PRECISION_INTEGRAL + PRECISION_FRACTIONAL
BOUND = BASE**PRECISION
# Q field
Q = 6497992661811505123# < 64 bits
Q_MAXDEGREE = 2
assert Q > BASE**(PRECISION * Q_MAXDEGREE) # supported multiplication degree (without truncation)
assert Q > 2*BOUND * BASE**KAPPA # supported kappa when in positive range
# P field
P = 1802216888453791673313287943102424579859887305661122324585863735744776691801009887 # < 270 bits
P_MAXDEGREE = 9
assert P > Q
assert P > BASE**(PRECISION * P_MAXDEGREE)
class MPCTensor(object):
def __init__(self,grid,json_str=None,value=None,public=None,private=None,share=None,field=Q,id=None,channel=None):
if(json_str is None):
if(value is not None or private is not None):
self.is_owner = True
else:
self.is_owner = False
self._share = share
self.field = field
self.value = value
self.grid = grid
self.precision_fractional=PRECISION_FRACTIONAL
if(id is None):
id = str(random.randint(0,1000000))
self.channel = channel
self.id = str(id)
else:
self.deserialize(json_str)
self.channel = channel
def serialize(self):
d = {}
if(self.value is not None):
d['v'] = self.value.tolist()
if(self._share is not None):
d['_share'] = self._share.tolist()
d['id'] = self.id
d['f'] = self.field
d['p'] = self.precision_fractional
d['o'] = self.is_owner
return json.dumps(d)
def __str__(self):
return self.serialize()
def deserialize(self,json_encoding):
d = json.loads(json_encoding)
keys = d.keys()
if('v' in keys):
self.value = np.array(d['v'],dtype='object')
else:
self.value = None
if('_share' in keys):
self._share = np.array(d['_share'],dtype='object')
else:
self._share = None
self.id = d['id']
self.field = d['f']
self.precision_fractional = d['p']
self.is_owner = d['o']
def value2encoded_(self):
upscaled = (self.value * BASE**self.precision_fractional).astype('object')
field_elements = upscaled % self.field
self.encoded_value = field_elements
return self.encoded_value
def encoded2value_(self):
mask = (self.encoded_value <= self.field/2).astype('object')
true_value = self.encoded_value
false_value = self.encoded_value - self.field
upscaled = (mask * true_value) + ((1 - mask) * false_value)
rational = upscaled / BASE**self.precision_fractional
return rational
def encoded2shares_(self):
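        # additive secret sharing: draw a random 'public' share, then choose the
        # 'private' share so that (public + private) % field equals the encoded value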
public = (np.random.rand(*self.value.shape) * self.field).astype('object')
private = ((self.encoded_value - public) % self.field).astype('object')
self._share = private
self.share_is_private = True
return (public,private)
def shares2encoded_(self,shares):
self.encoded_value = (shares[0] + shares[1]) % Q
return self.encoded_value
def value2shares(self):
self.value2encoded_()
return self.encoded2shares_()
def shares2value(self,shares):
self.shares2encoded_(shares)
self.value = self.encoded2value_()
return self.value
def __add__(self,y,publish=True,z_id=None):
if(z_id is None):
z_id = np.random.randint(0,10000000)
if(publish):
command = {}
command['cmd'] = 'add_elem'
command['x'] = self.id
command['y'] = y.id
command['z'] = z_id
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
new_share = (self._share + y._share) % (self.field)
return MPCTensor(grid,share=new_share,id=z_id,channel=self.channel)
def __mul__(self,y,publish=True,z_id=None):
y = int(y)
if(y >= 1):
if(z_id is None):
z_id = np.random.randint(0,10000000)
if(publish):
command = {}
command['cmd'] = 'mult_scalar'
command['x'] = self.id
command['y'] = y
command['z'] = z_id
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
new_share = ((self._share * y).astype('object') % self.field).astype('object')
return MPCTensor(grid,share=new_share,id=z_id,channel=self.channel,field=self.field)
else:
print("Cannot divide yet")
    def __sub__(self,y):
        # subtract the local shares element-wise; this is a local-only operation
        # (unlike __add__, no command is published to the other party)
        new_share = (self._share - y._share) % self.field
        return MPCTensor(grid,share=new_share,channel=self.channel,field=self.field)
def reconstruct(self):
def send_request():
command = {}
command['cmd'] = "send_tensor"
command['id'] = str(self.id)
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
def receive_tensor(message):
command = json.loads(message['data'])
if(command['cmd'] == 'receive_tensor'):
tensor = MPCTensor(grid,json_str=command['data'],channel=self.channel)
if(int(tensor.id) == int(self.id)):
return tensor
y = grid.listen_to_channel(channel=self.channel,handle_message=receive_tensor,init_function=send_request,ignore_from_self=False)
self.shares2value([y._share,self._share])
return self
def share(self,alice):
self.channel = alice.channel
public,private = self.value2shares()
public_tensor = MPCTensor(self.grid,share=public,id=self.id)
self._share = private
command = {}
command['cmd'] = 'receive_tensor_share'
command['data'] = str(public_tensor)
grid.api.pubsub_pub(topic=alice.channel,payload=json.dumps(command))
return self
class MPCGrid(object):
def __init__(self,grid,channel):
self._tensors = {}
self.grid = grid
self.channel = channel
def process_message(self,msg):
command = json.loads(msg['data'])
if('cmd' in command.keys()):
if(command['cmd'] == "receive_tensor_share"):
tensor = MPCTensor(self.grid,json_str=command["data"],channel=self.channel)
if(tensor.id not in self._tensors.keys()):
self._tensors[tensor.id] = tensor
print("Received Tensor:" + str(tensor.id))
else:
print("Ignoring Tensor: " + str(tensor.id) + " because I seem to already have a tensor with the same name." )
if(command['cmd'] == "send_tensor_share"):
tensor_to_share = self._tensors[command['id']]
tensor_to_share.share(self)
elif(command['cmd'] == 'add_elem'):
print("Adding " + str(command['x']) + " + " + str(command['y']) + "-> " + str(command['z']))
z = self._tensors[command['x']].__add__(self._tensors[command['y']],False,z_id=command['z'])
self._tensors[z.id] = z
elif(command['cmd'] == 'mult_scalar'):
print("Multiplying " + str(command['x']) + " * " + str(command['y']) + "-> " + str(command['z']))
z = self._tensors[command['x']].__mul__(float(command['y']),False,z_id=command['z'])
self._tensors[z.id] = z
elif(command['cmd'] == "send_tensor"):
tensor_to_send = str(self._tensors[command['id']])
command = {}
command['cmd'] = 'receive_tensor'
command['data'] = tensor_to_send
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
elif(command['cmd'] == "what_tensors_are_available"):
command = {}
command['cmd'] = "available_tensors"
available_tensors = list()
for k,v in self._tensors.items():
if(v.value is not None):
available_tensors.append([k,v.value.shape])
elif(v._share is not None):
available_tensors.append([k,v._share.shape])
command['available_tensors'] = available_tensors
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
def work(self):
self.grid.listen_to_channel(channel=self.channel,handle_message=self.process_message,ignore_from_self=False)
def available_tensors(self):
def send_request():
command = {}
command['cmd'] = "what_tensors_are_available"
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
def receive_tensor(message):
command = json.loads(message['data'])
if(command['cmd'] == 'available_tensors'):
return command['available_tensors']
available_tensors = grid.listen_to_channel(channel=self.channel,handle_message=receive_tensor,init_function=send_request,ignore_from_self=False)
return available_tensors
def get_tensor_share(self,id):
def send_request():
command = {}
command['cmd'] = "send_tensor_share"
command['id'] = str(id)
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
def receive_tensor(message):
command = json.loads(message['data'])
if(command['cmd'] == 'receive_tensor_share'):
tensor = MPCTensor(grid,json_str=command['data'],channel=self.channel)
if(str(tensor.id) == str(id)):
return tensor
return grid.listen_to_channel(channel=self.channel,handle_message=receive_tensor,init_function=send_request,ignore_from_self=False)
def get_tensor(self,id):
def send_request():
command = {}
command['cmd'] = "send_tensor"
command['id'] = str(id)
grid.api.pubsub_pub(topic=self.channel,payload=json.dumps(command))
def receive_tensor(message):
command = json.loads(message['data'])
if(command['cmd'] == 'receive_tensor'):
tensor = MPCTensor(grid,json_str=command['data'],channel=self.channel)
if(str(tensor.id) == str(id)):
return tensor
return grid.listen_to_channel(channel=self.channel,handle_message=receive_tensor,init_function=send_request,ignore_from_self=False)
def tensors(self):
return self.available_tensors()
def __repr__(self):
tens = self.tensors()
if(len(tens) < 10):
s = "MPC Grid with Tensors:\n"
for t in tens:
s += "\t" + str(t) +"\n"
return s
return "< MPCGrid tensors:" + str(len(tens)) + " >"
def __getitem__(self,id):
return self.get_tensor_share(id)
grid = PubSub()
grid.id
alice = MPCGrid(grid,channel='bob <-> alice')
x = MPCTensor(grid,value=np.random.rand(3,3)).share(alice)
x.value
y = x * 3
z = y * 2
y.reconstruct()
y.value
z.reconstruct()
z.value
alice
a = MPCTensor(grid,value=np.random.rand(4,2)).share(alice)
b = alice['xor_input'] + a
b.reconstruct()
b.value
a.value
x = MPCTensor(grid,value=np.random.rand(3,3))
y = MPCTensor(grid,value=np.random.rand(3,3))
x.share(alice)
y.share(alice)
z = x + y
z.reconstruct().value
x.value + y.value
###Output
_____no_output_____ |
docs/source/nb_examples/DotProduct.ipynb | ###Markdown
Dot Product
###Code
%pylab inline
from gps_helper.prn import PRN
from sk_dsp_comm import sigsys as ss
from sk_dsp_comm import digitalcom as dc
from caf_verilog.dot_product import DotProduct
###Output
_____no_output_____
###Markdown
Test Signals
###Code
prn = PRN(15)
prn2 = PRN(20)
fs = 625e3
Ns = fs / 125e3
prn_seq = prn.prn_seq()
prn_seq2 = prn2.prn_seq()
prn_seq,b = ss.NRZ_bits2(array(prn_seq), Ns)
prn_seq2,b2 = ss.NRZ_bits2(array(prn_seq2), Ns)
###Output
_____no_output_____
###Markdown
Dot Product Implementation
###Code
dp = DotProduct(prn_seq[:10], prn_seq[:10])
dp.gen_tb()
###Output
_____no_output_____ |
nlp/.ipynb_checkpoints/iris-elves-checkpoint.ipynb | ###Markdown
Goal: predict iris from test set. 1. load data 2. determine feature importance 3. fit 4. train 5. predict. [tutorial](http://scikit-learn.org/stable/tutorial/statistical_inference/supervised_learning.html) [machine-learning-in-python-step-by-step](https://machinelearningmastery.com/machine-learning-in-python-step-by-step/)
###Code
import numpy as np
from sklearn import datasets
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
%matplotlib inline
iris = datasets.load_iris()
def printRows():
count = 0;
for n in iris.data:
count += 1
if (count < 10):
print(n)
printRows()
names_df = pd.read_csv('../albon/old_norwegiann_names.csv')
# names_df.head()
# len(names_df)
iris_df = pd.DataFrame(data= np.c_[iris['data'], iris['target']],
columns= iris['feature_names'] + ['target'])
iris_df['species'] = pd.Categorical.from_codes(iris.target, iris.target_names)
iris_df.head()
elves = []
count = 0
elf_columns = ['Name', 'Gender', 'Species', 'Height', 'Weight', 'Strength', 'Stamina']
def get_weight(len):
return len * 120
for index, row in iris_df.iterrows():
if count < len(names_df):
name = names_df.at[count, 'Name']
gender = names_df.at[count, 'Gender']
elf = (name, gender, row['species'], row['petal length (cm)'], get_weight(row['petal length (cm)']), row['sepal length (cm)'], row['sepal width (cm)'])
elves.append(elf)
count += 1
elves_df = pd.DataFrame(elves, columns = elf_columns)
import csv
def write_csv():
f = open('elves.csv', 'w')
writer = csv.writer(f)
writer.writerow(elf_columns)
for row in elves:
name = row[0]
gender = row[1]
species = row[2]
height = row[3]
weight = row[4]
strength = row[5]
stamina = row[6]
writer.writerow([name, gender, species, height, weight, strength, stamina])
f.close()
write_csv()
# look up the meaning by the name
name = elves_df.at[20,"Name"]
elves_meaning_df = pd.merge(elves_df, names_df, how='left',
left_on='Name', right_on='Name')
# meaning = elves_meaning_df.at[0,"Meaning"]
# print (name, " : ", meaning)
def get_meaning(name):
return (names_df.loc[names_df['Name'] == name]).values[0][1]
meaning = get_meaning(name)
meaning
elves_df.head(20)
# Import necessary libraries
import seaborn as sns
import matplotlib.pyplot as plt
# Set context to `"paper"`
sns.set_context("paper", font_scale=2, rc={"font.size":8,"axes.labelsize":8})
# Load iris data
iris = sns.load_dataset("iris")
# Construct iris plot
sns.swarmplot(x="Species", y="Height", data=elves_df)
# Show plot
plt.show()
elves_df.head()
elves_df.describe()
elves_df.shape
elves_df.groupby('Species').size()
# box and whisker plots
elves_df.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
plt.show()
# box and whisker plots
elves_df.plot(kind='box', subplots=True, layout=(2,2), sharex=False, sharey=False)
plt.show()
# histograms
elves_df.hist()
plt.show()
# scatter plot matrix
from pandas.plotting import scatter_matrix
scatter_matrix(elves_df)
plt.show()
# array([[' Emil', 'Boy', 'setosa', 5.1, 160.0, 5.1, 3.5],
from sklearn import model_selection
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn import datasets, linear_model
from sklearn.preprocessing import LabelEncoder
import random
import numpy as np
# we don't want all the targets to be 0 so for now just randonmly select 2 or 1
# if all 0, then the prediction is simply 0
iris_dataset = datasets.load_iris()
iris_limited = []
count = 0
targets = iris_dataset.target.tolist()
for target in targets:
if count < 128:
iris_limited.append(target)
count += 1
print(len(iris_limited))
# https://stackoverflow.com/questions/37292872/how-can-i-one-hot-encode-in-python
def dummyEncode(df):
columnsToEncode = list(df.select_dtypes(include=['category','object']))
le = LabelEncoder()
for feature in columnsToEncode:
try:
df[feature] = le.fit_transform(df[feature])
except:
print('Error encoding '+feature)
return df
# Split-out validation dataset
array = elves_df.values
Y = iris_limited
# elves_df.drop('Name', axis=1, inplace=True)
encoded_elves_df = dummyEncode(elves_df)
X = encoded_elves_df
validation_size = 0.20
seed = 7
X_train, X_validation, Y_train, Y_validation = train_test_split(X, Y, test_size=validation_size)
# Test options and evaluation metric
scoring = 'accuracy'
# encoded_elves_df.head(5)
# print (Y_train)
# Spot Check Algorithms
models = []
models.append(('LR', LogisticRegression()))
models.append(('LDA', LinearDiscriminantAnalysis()))
models.append(('KNN', KNeighborsClassifier()))
models.append(('CART', DecisionTreeClassifier()))
models.append(('NB', GaussianNB()))
models.append(('SVM', SVC()))
# evaluate each model in turn
results = []
names = []
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state=seed)
cv_results = model_selection.cross_val_score(model, X_train, Y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
# Compare Algorithms
fig = plt.figure()
fig.suptitle('Algorithm Comparison')
ax = fig.add_subplot(111)
plt.boxplot(results)
ax.set_xticklabels(names)
plt.show()
# Make predictions on validation dataset
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
predictions = knn.predict(X_validation)
print(accuracy_score(Y_validation, predictions))
print(confusion_matrix(Y_validation, predictions))
print(classification_report(Y_validation, predictions))
###Output
0.9615384615384616
[[ 9 0 0]
[ 0 10 1]
[ 0 0 6]]
precision recall f1-score support
0 1.00 1.00 1.00 9
1 1.00 0.91 0.95 11
2 0.86 1.00 0.92 6
micro avg 0.96 0.96 0.96 26
macro avg 0.95 0.97 0.96 26
weighted avg 0.97 0.96 0.96 26
|
workshop/lessons/06_new_systems/06 - New Systems - filled.ipynb | ###Markdown
Exploring New Alloy Systems with Pymatgen Author: Rachel Woods-Robinson Version: July 29, 2020  Outline 1. Select a test-case system * 1.1 Exercise: `Structure` and `MPRester` refresher * 1.2 Lesson: add oxidation states to a `Structure` * 1.3 Bonus: plot `BandStructure` 2. Select an alloy partner * 2.1 Lesson: find possible dopants * 2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS * 2.3 Lesson: explore phase diagrams 3. Transform to make a new CuxZn1-xS alloy * 3.1 Lesson: structure transformation * 3.2 Exercise: try your own transformation on CuZnS2 4. Calculate new properties * 4.1 Lesson: volume prediction and XRD plot * 4.2 Exercise: try this on your CuZnS2 structure 5. Test your skills * 5.1 Exercise: compare relaxed DFT structures to estimates * 5.2 Lesson: add computed entries to phase diagram * 5.3 Next steps 1. Select a test-case system ***In this notebook we will focus on cubic zinc-blende ZnS, a wide band gap (transparent) semiconductor. In my PhD research I study p-type transparent semiconductors, so I will pose the question: how can we use ZnS as a starting point to create a p-type transparent semiconductor, and how can pymatgen help with this?*** Import the `MPRester` client:
###Code
from pymatgen import MPRester
###Output
_____no_output_____
###Markdown
The Materials ID (mp-id) of zinc-blende ZnS is mp-10695, see https://materialsproject.org/materials/mp-10695/.
###Code
ZnS_mpid = "mp-10695"
###Output
_____no_output_____
###Markdown
1.1 Exercise: `Structure` and `MPRester` refresher Get the structure
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_structure = mpr.get_structure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# ZnS_structure = loadfn("assets/ZnS_structure.json")
###Output
_____no_output_____
###Markdown
Get space group information
###Code
ZnS_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
If you want to, try it out on our web app [here](https://materialsproject.org/apps/xtaltoolkit/%7B%22input%22%3A0%2C%22materialIDs%22%3A%22mp-10695%22%7D).- Click "Draw atoms outside unit cell bonded to atoms within unit cell"- Play around with it!  1.2 Lesson: add oxidation states to a `Structure` Pymatgen has a simple transformation to estimate the likely oxidation state of each specie in stoichiometric compounds using a bond-valence analysis approach. This information is needed to compare ionic radii and assess substitutional dopant probability. You can also enter the oxidation states manually if you'd prefer.
###Code
from pymatgen.transformations.standard_transformations import AutoOxiStateDecorationTransformation
###Output
_____no_output_____
###Markdown
Initialize this transformation:
###Code
oxi_transformation = AutoOxiStateDecorationTransformation()
ZnS_structure_oxi = oxi_transformation.apply_transformation(ZnS_structure)
print(ZnS_structure_oxi)
###Output
Full Formula (Zn1 S1)
Reduced Formula: ZnS
abc : 3.853923 3.853923 3.853923
angles: 60.000000 60.000000 60.000000
Sites (2)
# SP a b c magmom
--- ---- ---- ---- ---- --------
0 Zn2+ 0 0 0 0
1 S2- 0.25 0.25 0.25 -0
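###Markdown
If you already know the oxidation states, you can also set them by hand instead of using the automatic transformation. The cell below is a minimal sketch (the Zn2+/S2- assignment is our assumption, matching the automatic result above) using `Structure`'s built-in `add_oxidation_state_by_element` method.
###Code
# manually decorate a copy of the structure with oxidation states
ZnS_structure_manual_oxi = ZnS_structure.copy()
ZnS_structure_manual_oxi.add_oxidation_state_by_element({"Zn": 2, "S": -2})
print(ZnS_structure_manual_oxi)
###Output
_____no_output_____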
###Markdown
1.3 Bonus: plot `BandStructure`
###Code
from pymatgen.electronic_structure.plotter import BSPlotter
###Output
_____no_output_____
###Markdown
This code retrieves a `BandStructureSymmLine` object which contains all the information about a line-mode band structure.
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_bs = mpr.get_bandstructure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# ZnS_bs = loadfn("assets/ZnS_bs.json")
###Output
_____no_output_____
###Markdown
This band structure can be plotted using `BSPlotter`:
###Code
ZnS_bsp = BSPlotter(ZnS_bs)
ZnS_bsp.show() # takes a second
###Output
_____no_output_____
###Markdown
Band gap correction
###Code
ZnS_bs.get_band_gap()
###Output
_____no_output_____
###Markdown
ZnS has an experimental gap of approximately 3.5 eV, but the GGA calculated gap is far too low! We can apply a "scissor" to this band structure to correct for this. Scissor corrections are only appropriate if they're clearly acknowledged! They're used here because we expect the shape of the bands to be correct from our GGA calculation, and we know experimentally the gap is 3.5 eV. We do not use any scissor corrections on the Materials Project website or in the database.
###Code
ZnS_bs_scissor = ZnS_bs.apply_scissor(new_band_gap=3.5)
ZnS_bsp_scissor = BSPlotter(ZnS_bs_scissor)
ZnS_bsp_scissor.show()
###Output
_____no_output_____
###Markdown
2. Select an alloy partner 2.1 Lesson: find possible dopants ***Scientific question: Which p-type dopants are most likely to sit at substitutional sites in ZnS?*** Pymatgen has a machine-learned method for estimating the probability that one ion will substitute for another ([Hautier et al. 2011](https://doi.org/10.1021/ic102031h)), and reports the results ranked in order of probability. Note the input structure has to be "decorated" with oxidation states for this method to work.
###Code
from pymatgen.analysis.structure_prediction.dopant_predictor import get_dopants_from_substitution_probabilities
substitutional_dopants = get_dopants_from_substitution_probabilities(
ZnS_structure_oxi, num_dopants=10)
###Output
_____no_output_____
###Markdown
Here are some options to dope ZnS p-type:
###Code
p_dopants = substitutional_dopants['p_type']
###Output
_____no_output_____
###Markdown
We can see this returns a list of dictionaries:
###Code
print(p_dopants)
###Output
[{'probability': 0.03517771488410044, 'dopant_species': Specie Na+, 'original_species': Specie Zn2+}, {'probability': 0.029318032846742993, 'dopant_species': Specie Cu+, 'original_species': Specie Zn2+}, {'probability': 0.018723180961333987, 'dopant_species': Specie N3-, 'original_species': Specie S2-}, {'probability': 0.01642070079106222, 'dopant_species': Specie K+, 'original_species': Specie Zn2+}, {'probability': 0.015108126565956573, 'dopant_species': Specie Li+, 'original_species': Specie Zn2+}, {'probability': 0.0052407403799116705, 'dopant_species': Specie Ag+, 'original_species': Specie Zn2+}, {'probability': 0.005068671654693549, 'dopant_species': Specie O2-, 'original_species': Specie Zn2+}, {'probability': 0.002675916081854698, 'dopant_species': Specie Au+, 'original_species': Specie Zn2+}, {'probability': 0.0026755200755797705, 'dopant_species': Specie Rb+, 'original_species': Specie Zn2+}, {'probability': 0.0026755200755797705, 'dopant_species': Specie Tl+, 'original_species': Specie Zn2+}]
###Markdown
To make this easier to read we can use the `pandas` package:
###Code
import pandas as pd
pd.DataFrame(p_dopants)
###Output
_____no_output_____
###Markdown
2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS ***Scientific question: is a p-type zinc-blende AxZn1-xS alloy possible?*** Let's see if zinc-blende binary end-members (A-S) exist for these candidate alloys, and how far off the hull they sit. Find dopants First, find a list of possible cation dopant elements:
###Code
# I've pre-written this code block for convenience
# all it does is take the possible dopants list given previously, takes the cations, and makes a list of their elements
possible_cation_dopants = []
for x in p_dopants:
specie = x["dopant_species"]
if specie.oxi_state > 0:
possible_cation_dopants.append(str(specie.element))
print(possible_cation_dopants)
###Output
['Na', 'Cu', 'K', 'Li', 'Ag', 'Au', 'Rb', 'Tl']
###Markdown
Query for end-point structure Next, let's query the `MPRester` to make a table of all of the binary compounds with a space group `"F-43m"` that contain sulfur and one of these `possible_cation_dopants`. Note that the query criteria are listed on the [mapidoc](https://github.com/materialsproject/mapidoc/tree/master/materials).
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
query = mpr.query(
{
# the query criteria
"elements": {
"$all": ["S"], "$in": possible_cation_dopants
},
"nelements": 2,
"spacegroup.symbol": "F-43m"
},
# the properties we want to return
[
"task_id",
"e_above_hull",
"pretty_formula",
"theoretical",
"spacegroup.symbol"
]
)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# query = loadfn("assets/alloy_partner_query.json")
pd.DataFrame(query)
###Output
_____no_output_____
###Markdown
Which cation should we pick? Cu! Ag8S is a theoretical intermetallic, and its energy is ridiculously high. CuS is a theoretical compound and is not "on the hull," but it's close at only 0.01 eV/atom, meaning it is only slightly metastable. OK, so let's pick Cu+ to use as a p-type dopant, which I've explored experimentally in the past (see [Woods-Robinson et al. 2019](https://doi.org/10.1016/j.matt.2019.06.019)). Retrieve end-point structure To proceed, we have to retrieve the `Structure` for CuS:
###Code
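# take the mp-id of CuS from the query results above
# (this assumes CuS is the first row; if your table is ordered differently,
#  pick the row whose pretty_formula is "CuS")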
CuS_mpid = query[0]["task_id"]
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
CuS_structure = mpr.get_structure_by_material_id(CuS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# CuS_structure = loadfn("assets/CuS_structure.json")
###Output
_____no_output_____
###Markdown
Yep! We’re not done, but this is a good starting point for dopants to investigate with further defect calculations. This can be accomplished using workflows from packages like [PyCDT (Broberg et al. 2018)](https://doi.org/10.1016/j.cpc.2018.01.004) which integrate with `pymatgen`'s defect capabilities. 2.3 Lesson: explore phase diagrams ***Scientific question: what does Cu-Zn-S phase space look like?*** There are many built-in tools to explore phase diagrams in `pymatgen`. To build a phase diagram, you must define a set of `ComputedEntries` with compositions, formation energies, corrections, and other calculation details.
###Code
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDPlotter, CompoundPhaseDiagram, GrandPotentialPhaseDiagram
###Output
_____no_output_____
###Markdown
We can import entries in this system using the `MPRester`. This gives a list of all of the `ComputedEntries` on the database:
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
entries = mpr.get_entries_in_chemsys(['Cu', 'Zn', 'S'])
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# entries = loadfn("assets/Cu-Zn-S_entries.json")
phase_diagram = PhaseDiagram(entries)
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
plotter = PDPlotter(phase_diagram, show_unstable=True, markersize=20) # we increase the marker size here to make it easier to see the stable points
plotter.show()
###Output
_____no_output_____
###Markdown
Contour phase diagram
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram Let's zoom in on the tie-line between ZnS and CuS, which is where we are interested in alloying.
###Code
from pymatgen import Composition
cpd = CompoundPhaseDiagram(entries, [Composition("ZnS"), Composition("CuS")], normalize_terminal_compositions=False)
compound_plotter = PDPlotter(cpd, show_unstable=100, markersize=20)
compound_plotter.show()
###Output
_____no_output_____
###Markdown
Mapping out chemical potential of cations This may be a useful tool to think about tuning chemical potential for synthesis.
###Code
from pymatgen import Element
plotter.get_chempot_range_map_plot([Element("Cu"), Element("Zn")]).show()
###Output
_____no_output_____
###Markdown
There are a lot of different types of phase diagrams available (see the [`pymatgen.analysis.phase_diagram` module](https://pymatgen.org/pymatgen.analysis.phase_diagram.html)).Our key takeaway here is that in MP, the Cu-Zn-S ternary space is EMPTY!! So let's fill it in... 3. Transform to make a new CuxZn1-xS alloy 3.1 Lesson: structure transformation Substitute your dopant to create a disordered structure Now, so let's substitute 1/4 of the Zn2+ with Cu+ ions (note: we will be ignoring charge compensation here, but this is important to take into account in real calculations!). That is, let's set substitutional fraction `x = 1/4` in CuxZn1-xS. Doing so using `Structure.replace_species()` will create a ***disordered structure object***.
###Code
x = 1/4
disordered_structure = ZnS_structure_oxi.copy()
disordered_structure.replace_species({"Zn2+": {"Cu+": x, "Zn2+": 1 - x}})
print(disordered_structure)
###Output
Full Formula (Zn0.75 Cu0.25 S1)
Reduced Formula: Zn0.75Cu0.25S1
abc : 3.853923 3.853923 3.853923
angles: 60.000000 60.000000 60.000000
Sites (2)
# SP a b c magmom
--- --------------------- ---- ---- ---- --------
0 Zn2+:0.750, Cu+:0.250 0 0 0 0
1 S2- 0.25 0.25 0.25 -0
###Markdown
We can print the integer formula of this composition:
###Code
disordered_structure.composition
###Output
_____no_output_____
###Markdown
Let's rename this structure with its chemical formula to avoid confusion later on:
###Code
CuZn3S4_disordered = disordered_structure
# if you want to download this file and load it elsewhere
CuZn3S4_disordered.to("cif", "assets/CuZn3S4_disordered.cif")
###Output
_____no_output_____
###Markdown
Here's a screenshot of the CuZn3S4 disordered structure, where each cation site has partial occupancy of a Zn and Cu atom.  Transform structure Though disorder may indeed be more representative of a real crystal structure, we need to convert this to an ordered structure to perform DFT calculations. This is because DFT can only perform simulations on whole atoms, not fractional atoms! Pymatgen supports a variety of structural "transformations" (a list of supported transformations is available [here](https://pymatgen.org/pymatgen.transformations.html)). Here are three methods from the `pymatgen.transformations.advanced_transformations` module to take a disordered structure, and order it: 1. `OrderDisorderedStructureTransformation`: a highly simplified method to create an ordered supercell ranked by Ewald sums. 2. `EnumerateStructureTransformation`: a method to order a disordered structure that requires [the `enumlib` code](https://github.com/msg-byu/enumlib) to also be installed. 3. `SQSTransformation`: a method that requires the [`ATAT` code (Van de Walle et al. 2013)](https://doi.org/10.1016/j.calphad.2013.06.006) to be installed that creates a special quasirandom structure (SQS) from a structure with partial occupancies. For this demo, we'll be focusing on the simplest transformation: `OrderDisorderedStructureTransformation`
###Code
from pymatgen.transformations.advanced_transformations import OrderDisorderedStructureTransformation
odst = OrderDisorderedStructureTransformation()
odst.apply_transformation(CuZn3S4_disordered)
###Output
_____no_output_____
###Markdown
We have to be careful though!! If we just apply this transformation, it doesn't fail, but it returns a structure where all the Cu+ is gone! `OrderDisorderedStructureTransformation` will round up or down if the cell is not large enough to account for `x`. Thus, we need to first make a supercell and then apply the transformation. Make a supercell With this transformation, we have to first create a disordered ***supercell*** to transform into. A supercell is just a structure that is scaled by a matrix so that it repeats several times. Here, the supercell must be large enough such that the composition in question can be achieved. Let's scale the structure by 8x. I like to use the `numpy` package to construct scaling matrices here (a 4x supercell would be sufficient for `x = 1/4`, but this leaves room to try e.g. `x = 1/8`):
###Code
import numpy as np
scaling_matrix = np.array([
[2, 0, 0],
[0, 2, 0],
[0, 0, 2]
])
###Output
_____no_output_____
###Markdown
We can see that this would scale the cell's volume by 8, but to verify:
###Code
scaling_volume = np.linalg.det(scaling_matrix)
###Output
_____no_output_____
###Markdown
For convenience, you can also simply use `scaling_matrix = 2` if you're scaling the same in all directions, or `scaling_matrix = [2, 2, 2]`. These are the same in practice.
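A quick optional check (my own assertion, not from the original walkthrough) that these notations really are equivalent:
###Code
import numpy as np
# both notations should produce supercells with identical lattices
s_from_int = CuZn3S4_disordered * 2
s_from_list = CuZn3S4_disordered * [2, 2, 2]
assert np.allclose(s_from_int.lattice.matrix, s_from_list.lattice.matrix)
###Output
_____no_output_____
###Markdown
Now build the supercell and re-apply the ordering transformation: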
###Code
CuZn3S4_disordered_supercell = CuZn3S4_disordered * scaling_matrix
print(CuZn3S4_disordered_supercell)
CuZn3S4_ordered_structures = odst.apply_transformation(CuZn3S4_disordered_supercell, return_ranked_list = 10)
print(CuZn3S4_ordered_structures)
###Output
[{'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Cu+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.47434244813974, 'energy_above_minimum': 3.552713678800501e-15, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}]
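###Markdown
That is a lot of output; as an optional shortcut (my own addition), we can pull out just the ranking key from each returned dict:
###Code
# list only the Ewald "energy" values used to rank the ordered candidates
print([round(entry["energy"], 3) for entry in CuZn3S4_ordered_structures])
###Output
_____no_output_____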
###Markdown
This is a list of ten ordered structures ranked by ***Ewald sum*** (dict key `"energy"`). Note that this does NOT correlate with the lowest energy structure! Let's just use the first entry for our example:
###Code
CuZn3S4_ordered_structure = CuZn3S4_ordered_structures[0]["structure"]
print(CuZn3S4_ordered_structure)
###Output
Full Formula (Zn6 Cu2 S8)
Reduced Formula: Zn3CuS4
abc : 7.707846 7.707846 7.707846
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0 0 0
1 Zn2+ 0 0.5 0 0
2 Zn2+ 0 0.5 0.5 0
3 Zn2+ 0.5 0 0 0
4 Zn2+ 0.5 0 0.5 0
5 Zn2+ 0.5 0.5 0.5 0
6 Cu+ 0 0 0.5 0
7 Cu+ 0.5 0.5 0 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
If you want to download this file:
###Code
CuZn3S4_ordered_structure.to("cif", "assets/CuZn3S4_ordered_structure.cif")
# (note: "-0" is actually just 0, as this is a non-magnetic configuration)
###Output
_____no_output_____
###Markdown
BOOM! Now we have an alloy structure!! To view this structure you can upload your "CuZn3S4_ordered_structure.cif" file on [Crystal Toolkit](https://materialsproject.org/apps/xtaltoolkit).  3.2 Exercise: try your own transformation on CuZnS2 Set a new composition, `x = 1/2` (simpler fractions are easier in DFT calculations because supercells can be smaller!). This will yield a structure with composition CuZnS2.
###Code
x_CuZnS2 = 1/2
CuZnS2_disordered = ZnS_structure_oxi.copy()
CuZnS2_disordered.replace_species(
{
"Zn2+": {
"Cu+": x_CuZnS2,
"Zn2+": 1 - x_CuZnS2
}
}
)
###Output
_____no_output_____
###Markdown
Reminder: for more complex fractions (e.g. `x = 1/16`), supercells need to be scaled accordingly!
###Code
scaling_matrix = np.array([2, 2, 2])
CuZnS2_disordered_supercell = CuZnS2_disordered * scaling_matrix
CuZnS2_ordered_structures = odst.apply_transformation(CuZnS2_disordered_supercell,
return_ranked_list = 10)
###Output
_____no_output_____
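###Markdown
As an aside, here is my own rule-of-thumb sketch (not a pymatgen utility) for how big the supercell needs to be: the denominator of the substitution fraction sets the minimum number of cation sites the supercell must contain before `OrderDisorderedStructureTransformation` can place whole atoms.
###Code
from fractions import Fraction
# rule of thumb (my own sketch): x = p/q in lowest terms needs a supercell with a
# multiple of q cation sites so that both Cu and Zn counts come out as whole numbers
x_example = Fraction(1, 16)
n_cation_sites_in_cell = 1  # the zinc-blende primitive cell has one cation site
min_scaling = x_example.denominator // n_cation_sites_in_cell
print(f"x = {x_example}: need at least a {min_scaling}x supercell")
###Output
_____no_output_____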
###Markdown
Pick one:
###Code
CuZnS2_ordered_structure = CuZnS2_ordered_structures[0]["structure"]
print(CuZnS2_ordered_structure)
###Output
Full Formula (Zn4 Cu4 S8)
Reduced Formula: ZnCuS2
abc : 7.707846 7.707846 7.707846
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0.5 0 0
1 Zn2+ 0 0.5 0.5 0
2 Zn2+ 0.5 0 0 0
3 Zn2+ 0.5 0 0.5 0
4 Cu+ 0 0 0 0
5 Cu+ 0 0 0.5 0
6 Cu+ 0.5 0.5 0 0
7 Cu+ 0.5 0.5 0.5 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
Check that this is the composition you expect:
###Code
CuZnS2_ordered_structure.composition.reduced_formula
###Output
_____no_output_____
###Markdown
And check the space group:
###Code
CuZnS2_ordered_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
Is it the same as ZnS? Because of the Cu substitution, this structure has a different space group than ZnS! 4. Calculate new properties 4.1 Lesson: volume prediction and XRD plot So far we just have a really rough guess of an alloy structure, and the lattice parameters are still equal to those of ZnS. We can estimate the new volume $V_{x-estimate}$ after the substitution using Vegard's Law (assuming zero bowing). $V_{x-estimate} = V_{scaling} \times [ V_{CuS}(x) + V_{ZnS}(1-x) ] $ $V_{CuZn_3S_4-estimate} = [2\times2\times2] \times [ V_{CuS}(0.25) + V_{ZnS}(0.75) ] $
###Code
scaling_matrix
x
scaling_volume = scaling_matrix.prod()
CuZn3S4_estimated_volume = scaling_volume * ((ZnS_structure.volume) * (1 - x) +
(CuS_structure.volume) * x)
print(CuZn3S4_ordered_structure.volume)
print(CuZn3S4_estimated_volume)
CuZn3S4_structure_estimate = CuZn3S4_ordered_structure.copy()
CuZn3S4_structure_estimate.scale_lattice(CuZn3S4_estimated_volume)
print(CuZn3S4_structure_estimate)
###Output
Full Formula (Zn6 Cu2 S8)
Reduced Formula: Zn3CuS4
abc : 7.621518 7.621518 7.621518
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0 0 0
1 Zn2+ 0 0.5 0 0
2 Zn2+ 0 0.5 0.5 0
3 Zn2+ 0.5 0 0 0
4 Zn2+ 0.5 0 0.5 0
5 Zn2+ 0.5 0.5 0.5 0
6 Cu+ 0 0 0.5 0
7 Cu+ 0.5 0.5 0 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
This is better but still wrong, and does not take into account any structural distortions. Note that there are some other methods on pymatgen to guess structure volume (see `pymatgen.analysis.structure_prediction.volume_predictor`), but in my experience Vegard's law is usually just as helpful. Your next step would be to relax this new structure using DFT or another method (see below). Calculate XRD, compare to original structure Now we can compare this structure to our original ZnS and CuS structure to, for example, see how the ***X-ray diffraction (XRD)*** patterns are expected to shift as `x` increases in CuxZn1-xS:
###Code
from pymatgen.analysis.diffraction.xrd import XRDCalculator
###Output
_____no_output_____
###Markdown
Initialize the `XRDCalculator` with the conventional Cu-K$\alpha$ wavelength (note: Cu here has nothing to do with the Cu we're adding to the structure):
###Code
xrd = XRDCalculator()
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
two_theta_range=[25, 65],
annotate_peaks=False, # to keep the plot cleaner
size_kwargs={'w': 10, 'h': 9}, # these options are optional to make the plot look nicer
)
###Output
_____no_output_____
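###Markdown
A quick aside on why the peaks should move (a Bragg's-law argument added here for clarity): the Vegard's-law estimate shrinks the cell on Cu substitution (abc ≈ 7.62 Å vs. 7.71 Å above), so every interplanar spacing $d_{hkl}$ shrinks, and from Bragg's law $\lambda = 2 d_{hkl} \sin\theta$ a smaller $d_{hkl}$ requires a larger $\sin\theta$, i.e. peaks at higher $2\theta$.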
###Markdown
You can see how the $2\theta$ peaks shift slightly to the right with addition of Cu! 4.2 Exercise: try this on your CuZnS2 structure Guess the structure volume using Vegard's Law, and correct for this: $V_{x-estimate} = V_{scaling } \times [ V_{CuS}(?) + V_{ZnS}(?) ] $
###Code
x_CuZnS2
scaling_volume
CuZnS2_structure_estimate = CuZnS2_ordered_structure.copy()
CuZnS2_structure_estimate.scale_lattice(scaling_volume *
((ZnS_structure.volume) * (1 - x_CuZnS2) +
(CuS_structure.volume) * x_CuZnS2))
###Output
_____no_output_____
###Markdown
Print the new structure
###Code
print(CuZnS2_structure_estimate)
###Output
Full Formula (Zn4 Cu4 S8)
Reduced Formula: ZnCuS2
abc : 7.533188 7.533188 7.533188
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0.5 0 0
1 Zn2+ 0 0.5 0.5 0
2 Zn2+ 0.5 0 0 0
3 Zn2+ 0.5 0 0.5 0
4 Cu+ 0 0 0 0
5 Cu+ 0 0 0.5 0
6 Cu+ 0.5 0.5 0 0
7 Cu+ 0.5 0.5 0.5 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
Add this structure to the series of XRD plots to compare XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuZnS2_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12}
)
###Output
_____no_output_____
###Markdown
5. Test your skills This is the wee beginning of making an alloy. Here are some follow-up steps: I constructed similar alloys to those that we just explored, at `x = 1/4` and `x = 1/2`, and relaxed them with DFT. We'll explore my results here: 5.1 Exercise: compare relaxed DFT structures to estimates
###Code
from pymatgen import Structure
###Output
_____no_output_____
###Markdown
These are my output .cif files from one of our DFT workflows. See `fireworks` and `atomate` packages for details [here](https://atomate.org/atomate.vasp.fireworks.html).
###Code
CuZn3S4_relaxed = Structure.from_file("assets/Zn3CuS4_Amm2.cif")
CuZnS2_relaxed = Structure.from_file("assets/ZnCuS2_R3m.cif")
###Output
_____no_output_____
###Markdown
How do these space groups compare to our estimates?
###Code
CuZn3S4_structure_estimate.get_space_group_info()
CuZn3S4_relaxed.get_space_group_info()
CuZnS2_structure_estimate.get_space_group_info()
CuZnS2_relaxed.get_space_group_info()
###Output
_____no_output_____
###Markdown
Are they higher or lower in symmetry? After relaxation, both structures are in a different space group (lower symmetry) than the alloys we just made. This is likely due to structural distortions. Add in DFT structure to XRD Replace the alloy structures in the previous XRD exercise with the two relaxed alloy structures, again comparing XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_relaxed,
CuZnS2_relaxed,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12},
)
###Output
_____no_output_____
###Markdown
Peak splittings are now present in the diffraction patterns, and the shift to higher $2\theta$ is not as significant. 5.2 Lesson: add computed entries to phase diagram ***Scientific question: are these new phases stable?*** To assess the stability of these new phases, let's look at JSON files containing `ComputedEntry` data:
###Code
from monty.serialization import loadfn
Zn3CuS4_Amm2_entry = loadfn("assets/Zn3CuS4_Amm2_entry.json")
ZnCuS2_R3m_entry = loadfn("assets/ZnCuS2_R3m_entry.json")
###Output
_____no_output_____
###Markdown
These entries were created by relaxing the above structures using one of our DFT workflows. An "entry" is mainly just a composition and an energy, so it can be created manually, without performing a calculation, or even from experimental data.
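For example, here is a minimal sketch of building an entry by hand (the composition matches our alloy, but the energy below is a made-up, purely illustrative number):
###Code
from pymatgen.entries.computed_entries import ComputedEntry
# hypothetical entry: a composition string plus an illustrative total energy in eV
manual_entry = ComputedEntry("ZnCuS2", -60.0)
print(manual_entry)
###Output
_____no_output_____
###Markdown
Printing the entry we loaded from disk shows the same kind of information, plus MP's energy corrections: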
###Code
print(ZnCuS2_R3m_entry)
###Output
None ComputedEntry - Zn1 Cu1 S2 (ZnCuS2)
Energy (Uncorrected) = -61.8816 eV (-15.4704 eV/atom)
Correction = -1.3269 eV (-0.3317 eV/atom)
Energy (Final) = -63.2085 eV (-15.8021 eV/atom)
Energy Adjustments:
MP Gas Correction : 0.0000 eV (0.0000 eV/atom)
MP Anion Correction : -1.3269 eV (-0.3317 eV/atom)
MP Advanced Correction : 0.0000 eV (0.0000 eV/atom)
Parameters:
run_type = GGA
is_hubbard = False
Data:
###Markdown
We can add these two entries to `entries`, our set of `ComputedEntry` data from MP in the Cu-Zn-S phase space:
###Code
new_entries = entries + [Zn3CuS4_Amm2_entry, ZnCuS2_R3m_entry]
new_phase_diagram = PhaseDiagram(new_entries)
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
new_plotter = PDPlotter(new_phase_diagram, show_unstable=10, markersize=20)
x = new_plotter.get_plot(new_phase_diagram, label_unstable=False)
###Output
_____no_output_____
###Markdown
We see our two new phases show up here! How does the energy landscape change? Contour phase diagram
###Code
new_fig = new_plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Compare to the phase diagram before new phases were added:
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram
###Code
from pymatgen import Composition
new_cpd = CompoundPhaseDiagram(new_entries, [Composition("ZnS"), Composition("CuS")])
new_compound_plotter = PDPlotter(new_cpd, show_unstable=10, markersize=20)
new_compound_plotter.show()
new_phase_diagram.get_e_above_hull(ZnCuS2_R3m_entry)
new_phase_diagram.get_e_above_hull(Zn3CuS4_Amm2_entry)
###Output
_____no_output_____
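###Markdown
Only the last bare expression in a cell is displayed, so to see both hull distances at once, a small convenience loop (my own addition) does it:
###Code
# label each hull distance explicitly
for entry in [Zn3CuS4_Amm2_entry, ZnCuS2_R3m_entry]:
    print(entry.composition.reduced_formula, new_phase_diagram.get_e_above_hull(entry), "eV/atom")
###Output
_____no_output_____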
###Markdown
Exploring New Alloy Systems with Pymatgen Author: Rachel Woods-Robinson Version: July 29, 2020 Outline 1. Select a test-case system * 1.1 Exercise: `Structure` and `MPRester` refresher * 1.2 Lesson: add oxidation states to a `Structure` * 1.3 Bonus: plot `BandStructure` 2. Select an alloy partner * 2.1 Lesson: find possible dopants * 2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS * 2.3 Lesson: explore phase diagrams3. Transform to make a new CuxZn1-xS alloy * 3.1 Lesson: structure transformation * 3.2 Exercise: try your own transformation on CuZnS24. Calculate new properties * 4.1 Lesson: volume prediction and XRD plot * 4.2 Exercise: try this on your CuZnS2 structure5. Test your skills * 5.1 Exercise: compare relaxed DFT structures to estimates * 5.2 Lesson: add computed entries to phase diagram * 5.3 Next steps 1. Select a test-case system ***In this notebook we will focus on cubic zinc-blende ZnS, a wide band gap (transparent) semiconductor. In my PhD research I study p-type transparent semiconductors, so I will pose the question: how can we use ZnS as a starting point to create a p-type transparent semiconductor, and how can pymatgen help with this?*** Import the `MPRester` client:
###Code
from pymatgen import MPRester
###Output
_____no_output_____
###Markdown
The Materials ID (mp-id) of zinc-blende ZnS is mp-10695, see https://materialsproject.org/materials/mp-10695/.
###Code
ZnS_mpid = "mp-10695"
###Output
_____no_output_____
###Markdown
1.1 Exercise: `Structure` and `MPRester` refresher Get the structure
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_structure = mpr.get_structure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# ZnS_structure = loadfn("assets/ZnS_structure.json")
###Output
_____no_output_____
###Markdown
Get space group information
###Code
ZnS_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
If you want to, try it out on our web app [here](https://materialsproject.org/apps/xtaltoolkit/%7B%22input%22%3A0%2C%22materialIDs%22%3A%22mp-10695%22%7D). - Click "Draw atoms outside unit cell bonded to atoms within unit cell" - Play around with it! 1.2 Lesson: add oxidation states to a `Structure` Pymatgen has a simple transformation to estimate the likely oxidation state of each specie in stoichiometric compounds using a bond-valence analysis approach. This information is needed to compare ionic radii and assess substitutional dopant probability. You can also enter the oxidation states manually if you'd prefer.
###Code
from pymatgen.transformations.standard_transformations import AutoOxiStateDecorationTransformation
###Output
_____no_output_____
###Markdown
Initialize this transformation:
###Code
oxi_transformation = AutoOxiStateDecorationTransformation()
ZnS_structure_oxi = oxi_transformation.apply_transformation(ZnS_structure)
print(ZnS_structure_oxi)
###Output
Full Formula (Zn1 S1)
Reduced Formula: ZnS
abc : 3.853923 3.853923 3.853923
angles: 60.000000 60.000000 60.000000
Sites (2)
# SP a b c magmom
--- ---- ---- ---- ---- --------
0 Zn2+ 0 0 0 0
1 S2- 0.25 0.25 0.25 -0
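###Markdown
As noted above, you can also assign the oxidation states manually instead of using the transformation; here is a minimal sketch of that route (it gives the same decoration for this simple stoichiometric case):
###Code
# manual alternative: decorate a copy of the structure with chosen oxidation states
ZnS_structure_manual_oxi = ZnS_structure.copy()
ZnS_structure_manual_oxi.add_oxidation_state_by_element({"Zn": 2, "S": -2})
print(ZnS_structure_manual_oxi)
###Output
_____no_output_____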
###Markdown
1.3 Bonus: plot `BandStructure`
###Code
from pymatgen.electronic_structure.plotter import BSPlotter
###Output
_____no_output_____
###Markdown
This code retrieves a `BandStructureSymmLine` object which contains all the information about a line-mode band structure.
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_bs = mpr.get_bandstructure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# ZnS_bs = loadfn("assets/ZnS_bs.json")
###Output
_____no_output_____
###Markdown
This band structure can be plotted using `BSPlotter`:
###Code
ZnS_bsp = BSPlotter(ZnS_bs)
ZnS_bsp.show() # takes a second
###Output
_____no_output_____
###Markdown
Band gap correction
###Code
ZnS_bs.get_band_gap()
###Output
_____no_output_____
###Markdown
ZnS has an experimental gap of approximately 3.5 eV, but the GGA-calculated gap is far too low! We can apply a "scissor" to this band structure to correct for this. Scissor corrections are only appropriate if they're clearly acknowledged! They're used here because we expect the shape of the bands to be correct from our GGA calculation, and we know experimentally the gap is 3.5 eV. We do not use any scissor corrections on the Materials Project website or in the database.
###Code
ZnS_bs_scissor = ZnS_bs.apply_scissor(new_band_gap=3.5)
ZnS_bsp_scissor = BSPlotter(ZnS_bs_scissor)
ZnS_bsp_scissor.show()
###Output
_____no_output_____
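###Markdown
As an optional check (my own addition), the scissored band structure should now report the target gap:
###Code
ZnS_bs_scissor.get_band_gap()
###Output
_____no_output_____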
###Markdown
2. Select an alloy partner 2.1 Lesson: find possible dopants ***Scientific question: Which p-type dopants are most likely to sit at substitutional sites in ZnS?*** Pymatgen has a machine-learned method for estimating the probability that one ion will substitute for another ([Hautier et al. 2011](https://doi.org/10.1021/ic102031h)), and reports the results ranked in order of probability. Note the input structure has to be "decorated" with oxidation states for this method to work.
###Code
from pymatgen.analysis.structure_prediction.dopant_predictor import get_dopants_from_substitution_probabilities
substitutional_dopants = get_dopants_from_substitution_probabilities(
ZnS_structure_oxi, num_dopants=10)
###Output
_____no_output_____
###Markdown
Here are some options to dope ZnS p-type:
###Code
p_dopants = substitutional_dopants['p_type']
###Output
_____no_output_____
###Markdown
We can see this returns a list of dictionaries:
###Code
print(p_dopants)
###Output
[{'probability': 0.03517771488410044, 'dopant_species': Specie Na+, 'original_species': Specie Zn2+}, {'probability': 0.029318032846742993, 'dopant_species': Specie Cu+, 'original_species': Specie Zn2+}, {'probability': 0.018723180961333987, 'dopant_species': Specie N3-, 'original_species': Specie S2-}, {'probability': 0.01642070079106222, 'dopant_species': Specie K+, 'original_species': Specie Zn2+}, {'probability': 0.015108126565956573, 'dopant_species': Specie Li+, 'original_species': Specie Zn2+}, {'probability': 0.0052407403799116705, 'dopant_species': Specie Ag+, 'original_species': Specie Zn2+}, {'probability': 0.005068671654693549, 'dopant_species': Specie O2-, 'original_species': Specie Zn2+}, {'probability': 0.002675916081854698, 'dopant_species': Specie Au+, 'original_species': Specie Zn2+}, {'probability': 0.0026755200755797705, 'dopant_species': Specie Rb+, 'original_species': Specie Zn2+}, {'probability': 0.0026755200755797705, 'dopant_species': Specie Tl+, 'original_species': Specie Zn2+}]
###Markdown
To make this easier to read we can use the `pandas` package:
###Code
import pandas as pd
pd.DataFrame(p_dopants)
###Output
_____no_output_____
###Markdown
2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS ***Scientific question: is a p-type zinc-blende AxZn1-xS alloy possible?*** Let's see if zinc-blende binaries exist as end-points for these candidate ternaries, and how far off the hull they sit. Find dopants First, find a list of possible cation dopant elements:
###Code
# I've pre-written this code block for convenience
# all it does is take the possible dopants list given previously, takes the cations, and makes a list of their elements
possible_cation_dopants = []
for x in p_dopants:
specie = x["dopant_species"]
if specie.oxi_state > 0:
possible_cation_dopants.append(str(specie.element))
print(possible_cation_dopants)
###Output
['Na', 'Cu', 'K', 'Li', 'Ag', 'Au', 'Rb', 'Tl']
###Markdown
Query for end-point structure Next, let's query the `MPRester` to make a table of all of the binary compounds with a space group `"F-43m"` that contain sulfur and one of these `possible_cation_dopants`. Note that the query criteria are listed on the [mapidoc](https://github.com/materialsproject/mapidoc/tree/master/materials).
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
query = mpr.query(
{
# the query criteria
"elements": {
"$all": ["S"], "$in": possible_cation_dopants
},
"nelements": 2,
"spacegroup.symbol": "F-43m"
},
# the properties we want to return
[
"task_id",
"e_above_hull",
"pretty_formula",
"theoretical",
"spacegroup.symbol"
]
)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# query = loadfn("assets/alloy_partner_query.json")
pd.DataFrame(query)
###Output
_____no_output_____
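###Markdown
To make the comparison easier, we can (optionally) sort the table by hull distance before deciding:
###Code
# sort the candidate end-points by how far they sit above the convex hull
pd.DataFrame(query).sort_values("e_above_hull")
###Output
_____no_output_____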
###Markdown
Which cation should we pick? Cu! Ag8S is a theoretical intermetallic, and its energy is ridiculously high. CuS is also a theoretical compound and is not "on the hull," but it's close at only 0.01 eV/atom, meaning it is only slightly metastable. Ok, so let's pick Cu+ to use as a p-type dopant, which I've explored experimentally in the past (see [Woods-Robinson et al. 2019](https://doi.org/10.1016/j.matt.2019.06.019)). Retrieve end-point structure To proceed, we have to retrieve the `Structure` for CuS:
###Code
CuS_mpid = query[0]["task_id"]
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
CuS_structure = mpr.get_structure_by_material_id(CuS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# CuS_structure = loadfn("assets/CuS_structure.json")
###Output
_____no_output_____
###Markdown
Yep! We’re not done, but this is a good starting point for dopants to investigate with further defect calculations. This can be accomplished using workflows from packages like [PyCDT (Broberg et al. 2018)](https://doi.org/10.1016/j.cpc.2018.01.004) which integrate with `pymatgen`'s defect capabilities. 2.3 Lesson: explore phase diagrams ***Scientific question: what does Cu-Zn-S phase space look like?*** There are many built-in tools to explore phase diagrams in `pymatgen`. To build a phase diagram, you must define a set of `ComputedEntries` with compositions, formation energies, corrections, and other calculation details.
###Code
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDPlotter, CompoundPhaseDiagram, GrandPotentialPhaseDiagram
###Output
_____no_output_____
###Markdown
We can import entries in this system using the `MPRester`. This gives a list of all of the `ComputedEntries` on the database:
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
entries = mpr.get_entries_in_chemsys(['Cu', 'Zn', 'S'])
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# entries = loadfn("assets/Cu-Zn-S_entries.json")
phase_diagram = PhaseDiagram(entries)
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
plotter = PDPlotter(phase_diagram, show_unstable=True, markersize=20) # we increase the marker size here to make it easier to see the stable points
plotter.show()
###Output
_____no_output_____
###Markdown
Contour phase diagram
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram Let's zoom in on the tie-line between ZnS and CuS, which is where we are interested in alloying.
###Code
from pymatgen import Composition
cpd = CompoundPhaseDiagram(entries, [Composition("ZnS"), Composition("CuS")], normalize_terminal_compositions=False)
compound_plotter = PDPlotter(cpd, show_unstable=100, markersize=20)
compound_plotter.show()
###Output
_____no_output_____
###Markdown
Mapping out chemical potential of cations This may be a useful tool to think about tuning chemical potential for synthesis.
###Code
from pymatgen import Element
plotter.get_chempot_range_map_plot([Element("Cu"), Element("Zn")]).show()
###Output
_____no_output_____
###Markdown
There are a lot of different types of phase diagrams available (see the [`pymatgen.analysis.phase_diagram` module](https://pymatgen.org/pymatgen.analysis.phase_diagram.html)). Our key takeaway here is that in MP, the Cu-Zn-S ternary space is EMPTY!! So let's fill it in... 3. Transform to make a new CuxZn1-xS alloy 3.1 Lesson: structure transformation Substitute your dopant to create a disordered structure Now let's substitute 1/4 of the Zn2+ with Cu+ ions (note: we will be ignoring charge compensation here, but this is important to take into account in real calculations!). That is, let's set substitutional fraction `x = 1/4` in CuxZn1-xS. Doing so using `Structure.replace_species()` will create a ***disordered structure object***.
###Code
x = 1/4
disordered_structure = ZnS_structure_oxi.copy()
disordered_structure.replace_species({"Zn2+": {"Cu+": x, "Zn2+": 1 - x}})
print(disordered_structure)
###Output
Full Formula (Zn0.75 Cu0.25 S1)
Reduced Formula: Zn0.75Cu0.25S1
abc : 3.853923 3.853923 3.853923
angles: 60.000000 60.000000 60.000000
Sites (2)
# SP a b c magmom
--- --------------------- ---- ---- ---- --------
0 Zn2+:0.750, Cu+:0.250 0 0 0 0
1 S2- 0.25 0.25 0.25 -0
###Markdown
We can print the integer formula of this composition:
###Code
disordered_structure.composition
###Output
_____no_output_____
###Markdown
Let's rename this structure with its chemical formula to avoid confusion later on:
###Code
CuZn3S4_disordered = disordered_structure
# if you want to download this file and load it elsewhere
CuZn3S4_disordered.to("cif", "assets/CuZn3S4_disordered.cif")
###Output
_____no_output_____
###Markdown
Here's a screenshot of the CuZn3S4 disordered structure, where each cation site has partial occupancy of a Zn and Cu atom. Transform structure Though disorder may indeed be more representative of a real crystal structure, we need to convert this to an ordered structure to perform DFT calculations. This is because DFT can only perform simulations on whole atoms, not fractional atoms! Pymatgen supports a variety of structural "transformations" (a list of supported transformations is available [here](https://pymatgen.org/pymatgen.transformations.html)). Here are three methods from the `pymatgen.transformations.advanced_transformations` module to take a disordered structure, and order it: 1. `OrderDisorderedStructureTransformation`: a highly simplified method to create an ordered supercell ranked by Ewald sums. 2. `EnumerateStructureTransformation`: a method to order a disordered structure that requires [the `enumlib` code](https://github.com/msg-byu/enumlib) to also be installed. 3. `SQSTransformation`: a method that requires the [`ATAT` code (Van de Walle et al. 2013)](https://doi.org/10.1016/j.calphad.2013.06.006) to be installed that creates a special quasirandom structure (SQS) from a structure with partial occupancies. For this demo, we'll be focusing on the simplest transformation: `OrderDisorderedStructureTransformation`
###Code
from pymatgen.transformations.advanced_transformations import OrderDisorderedStructureTransformation
odst = OrderDisorderedStructureTransformation()
odst.apply_transformation(CuZn3S4_disordered)
###Output
_____no_output_____
###Markdown
We have to be careful though!! If we just apply this transformation, it doesn't fail, but it returns a structure where all the Cu+ is gone! `OrderDisorderedStructureTransformation` will round up or down if the cell is not large enough to account for `x`. Thus, we need to first make a supercell and then apply the transformation. Make a supercell With this transformation, we have to first create a disordered ***supercell*** to transform into. A supercell is just a structure that is scaled by a matrix so that it repeats several times. Here, the supercell must be large enough such that the composition in question can be achieved. Let's scale the structure by 8x. I like to use the `numpy` package to construct scaling matrices here (a 4x supercell would be sufficient for `x = 1/4`, but this leaves room to try e.g. `x = 1/8`):
###Code
import numpy as np
scaling_matrix = np.array([
[2, 0, 0],
[0, 2, 0],
[0, 0, 2]
])
###Output
_____no_output_____
###Markdown
We can see that this would scale the cell's volume by 8, but to verify:
###Code
scaling_volume = np.linalg.det(scaling_matrix)
###Output
_____no_output_____
###Markdown
For convenience, you can also simply use `scaling_matrix = 2` if you're scaling the same in all directions, or `scaling_matrix = [2, 2, 2]`. These are the same in practice.
###Code
CuZn3S4_disordered_supercell = CuZn3S4_disordered * scaling_matrix
print(CuZn3S4_disordered_supercell)
CuZn3S4_ordered_structures = odst.apply_transformation(CuZn3S4_disordered_supercell, return_ranked_list = 10)
print(CuZn3S4_ordered_structures)
###Output
[{'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Cu+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.4743424481398, 'energy_above_minimum': 0.0, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -285.47434244813974, 'energy_above_minimum': 3.552713678800501e-15, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Cu+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}, {'energy': -284.80431475499915, 'energy_above_minimum': 0.041876730821289954, 'structure': Structure Summary
Lattice
abc : 7.707845752595208 7.707845752595208 7.707845752595208
angles : 59.99999999999999 59.99999999999999 59.99999999999999
volume : 323.8053704338693
A : 0.0 5.45027 5.45027
B : 5.45027 0.0 5.45027
C : 5.45027 5.45027 0.0
PeriodicSite: Zn2+ (0.0000, 0.0000, 0.0000) [0.0000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 2.7251, 2.7251) [0.0000, 0.5000, 0.5000]
PeriodicSite: Zn2+ (0.0000, 2.7251, 2.7251) [0.5000, 0.0000, 0.0000]
PeriodicSite: Zn2+ (2.7251, 5.4503, 2.7251) [0.5000, 0.0000, 0.5000]
PeriodicSite: Zn2+ (2.7251, 2.7251, 5.4503) [0.5000, 0.5000, 0.0000]
PeriodicSite: Zn2+ (5.4503, 5.4503, 5.4503) [0.5000, 0.5000, 0.5000]
PeriodicSite: Cu+ (2.7251, 2.7251, 0.0000) [0.0000, 0.0000, 0.5000]
PeriodicSite: Cu+ (2.7251, 0.0000, 2.7251) [0.0000, 0.5000, 0.0000]
PeriodicSite: S2- (1.3626, 1.3626, 1.3626) [0.1250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 4.0877, 1.3626) [0.1250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 1.3626, 4.0877) [0.1250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 4.0877, 4.0877) [0.1250, 0.6250, 0.6250]
PeriodicSite: S2- (1.3626, 4.0877, 4.0877) [0.6250, 0.1250, 0.1250]
PeriodicSite: S2- (4.0877, 6.8128, 4.0877) [0.6250, 0.1250, 0.6250]
PeriodicSite: S2- (4.0877, 4.0877, 6.8128) [0.6250, 0.6250, 0.1250]
PeriodicSite: S2- (6.8128, 6.8128, 6.8128) [0.6250, 0.6250, 0.6250]}]
###Markdown
This is a list of ten ordered structures ranked by ***Ewald sum*** (dict key `"energy"`). Note that this does NOT correlate with the lowest energy structure! Let's just use the first entry for our example:
###Code
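# Optional sanity check (adds no output): collect the Ewald sums (the "energy" key of
# each dict returned above) for all ten candidate orderings, in case you want to compare them.
ewald_sums = [d["energy"] for d in CuZn3S4_ordered_structures]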
CuZn3S4_ordered_structure = CuZn3S4_ordered_structures[0]["structure"]
print(CuZn3S4_ordered_structure)
###Output
Full Formula (Zn6 Cu2 S8)
Reduced Formula: Zn3CuS4
abc : 7.707846 7.707846 7.707846
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0 0 0
1 Zn2+ 0 0.5 0 0
2 Zn2+ 0 0.5 0.5 0
3 Zn2+ 0.5 0 0 0
4 Zn2+ 0.5 0 0.5 0
5 Zn2+ 0.5 0.5 0.5 0
6 Cu+ 0 0 0.5 0
7 Cu+ 0.5 0.5 0 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
If you want to download this file:
###Code
CuZn3S4_ordered_structure.to("cif", "assets/CuZn3S4_ordered_structure.cif")
# (note: "-0" is actually just 0, as this is a non-magnetic configuration)
###Output
_____no_output_____
###Markdown
BOOM! Now we have an alloy structure!! To view this structure you can upload your "CuZn3S4_ordered_structure.cif" file on [Crystal Toolkit](https://materialsproject.org/apps/xtaltoolkit). 3.2 Exercise: try your own transformation on CuZnS2 Set a new composition, `x = 1/2` (simpler fractions are easier in DFT calculations because supercells can be smaller!). This will yield a structure with composition CuZnS2.
###Code
x_CuZnS2 = 1/2
CuZnS2_disordered = ZnS_structure_oxi.copy()
CuZnS2_disordered.replace_species(
{
"Zn2+": {
"Cu+": x_CuZnS2,
"Zn2+": 1 - x_CuZnS2
}
}
)
###Output
_____no_output_____
###Markdown
Reminder: for more complex fractions (e.g. `x = 1/16`), supercells need to be scaled accordingly!
###Code
scaling_matrix = np.array([2, 2, 2])
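# Illustration only (hypothetical x, not used below): x = 1/16 would need at least
# 16 cation sites, i.e. a 4x2x2 supercell of this one-cation-per-cell structure.
scaling_matrix_x16 = np.array([4, 2, 2])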
CuZnS2_disordered_supercell = CuZnS2_disordered * scaling_matrix
CuZnS2_ordered_structures = odst.apply_transformation(CuZnS2_disordered_supercell,
return_ranked_list = 10)
###Output
_____no_output_____
###Markdown
Pick one:
###Code
CuZnS2_ordered_structure = CuZnS2_ordered_structures[0]["structure"]
print(CuZnS2_ordered_structure)
###Output
Full Formula (Zn4 Cu4 S8)
Reduced Formula: ZnCuS2
abc : 7.707846 7.707846 7.707846
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0.5 0 0
1 Zn2+ 0 0.5 0.5 0
2 Zn2+ 0.5 0 0 0
3 Zn2+ 0.5 0 0.5 0
4 Cu+ 0 0 0 0
5 Cu+ 0 0 0.5 0
6 Cu+ 0.5 0.5 0 0
7 Cu+ 0.5 0.5 0.5 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
Check that this is the composition you expect:
###Code
CuZnS2_ordered_structure.composition.reduced_formula
###Output
_____no_output_____
###Markdown
And check the space group:
###Code
CuZnS2_ordered_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
Is it the same as ZnS? Because of the Cu substitution, this structure has a different space group than ZnS! 4. Calculate new properties 4.1 Lesson: volume prediction and XRD plot So far we just have a really rough guess of an alloy structure, and the lattice parameters are still equal to those of ZnS. We can estimate the new volume $V_{x-guess}$ after the substitution using Vegard's Law (assuming zero bowing). $V_{x-estimate} = V_{scaling } \times [ V_{CuS}(x) + V_{ZnS}(1-x) ] $ $V_{CuZn_3S_4-estimate} = [2\times2\times2] \times [ V_{CuS}(0.25) + V_{ZnS}(0.75) ] $
###Code
scaling_matrix
x
scaling_volume = scaling_matrix.prod()
CuZn3S4_estimated_volume = scaling_volume * ((ZnS_structure.volume) * (1 - x) +
(CuS_structure.volume) * x)
print(CuZn3S4_ordered_structure.volume)
print(CuZn3S4_estimated_volume)
CuZn3S4_structure_estimate = CuZn3S4_ordered_structure.copy()
CuZn3S4_structure_estimate.scale_lattice(CuZn3S4_estimated_volume)
print(CuZn3S4_structure_estimate)
###Output
Full Formula (Zn6 Cu2 S8)
Reduced Formula: Zn3CuS4
abc : 7.621518 7.621518 7.621518
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0 0 0
1 Zn2+ 0 0.5 0 0
2 Zn2+ 0 0.5 0.5 0
3 Zn2+ 0.5 0 0 0
4 Zn2+ 0.5 0 0.5 0
5 Zn2+ 0.5 0.5 0.5 0
6 Cu+ 0 0 0.5 0
7 Cu+ 0.5 0.5 0 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
This is better but still wrong, and does not take into account any structural distortions. Note that there are some other methods on pymatgen to guess structure volume (see `pymatgen.analysis.structure_prediction.volume_predictor`), but in my experience Vegard's law is usually just as helpful. Your next step would be to relax this new structure using DFT or another method (see below). Calculate XRD, compare to original structure Now we can compare this structure to our original ZnS and CuS structure to, for example, see how the ***X-ray diffraction (XRD)*** patterns are expected to shift as `x` increases in CuxZn1-xS:
###Code
from pymatgen.analysis.diffraction.xrd import XRDCalculator
###Output
_____no_output_____
###Markdown
Initialize the `XRDCalculator` with the conventional Cu-K$\alpha$ wavelength (note: Cu here has nothing to do with the Cu we're adding to the structure):
###Code
xrd = XRDCalculator()
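# (optional) you can also pull the numerical peak data rather than plotting it,
# e.g. the two-theta positions and intensities for the parent compound:
ZnS_pattern = xrd.get_pattern(ZnS_structure)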
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
two_theta_range=[25, 65],
annotate_peaks=False, # to keep the plot cleaner
size_kwargs={'w': 10, 'h': 9}, # these options are optional to make the plot look nicer
)
###Output
_____no_output_____
###Markdown
You can see how the $2\theta$ peaks shift slightly to the right with addition of Cu! 4.2 Exercise: try this on your CuZnS2 structure Guess the structure volume using Vegard's Law, and correct for this: $V_{x-estimate} = V_{scaling } \times [ V_{CuS}(?) + V_{ZnS}(?) ] $
###Code
x_CuZnS2
scaling_volume
CuZnS2_structure_estimate = CuZnS2_ordered_structure.copy()
CuZnS2_structure_estimate.scale_lattice(scaling_volume *
((ZnS_structure.volume) * (1 - x_CuZnS2) +
(CuS_structure.volume) * x_CuZnS2))
###Output
_____no_output_____
###Markdown
Print the new structure
###Code
print(CuZnS2_structure_estimate)
###Output
Full Formula (Zn4 Cu4 S8)
Reduced Formula: ZnCuS2
abc : 7.533188 7.533188 7.533188
angles: 60.000000 60.000000 60.000000
Sites (16)
# SP a b c magmom
--- ---- ----- ----- ----- --------
0 Zn2+ 0 0.5 0 0
1 Zn2+ 0 0.5 0.5 0
2 Zn2+ 0.5 0 0 0
3 Zn2+ 0.5 0 0.5 0
4 Cu+ 0 0 0 0
5 Cu+ 0 0 0.5 0
6 Cu+ 0.5 0.5 0 0
7 Cu+ 0.5 0.5 0.5 0
8 S2- 0.125 0.125 0.125 -0
9 S2- 0.125 0.125 0.625 -0
10 S2- 0.125 0.625 0.125 -0
11 S2- 0.125 0.625 0.625 -0
12 S2- 0.625 0.125 0.125 -0
13 S2- 0.625 0.125 0.625 -0
14 S2- 0.625 0.625 0.125 -0
15 S2- 0.625 0.625 0.625 -0
###Markdown
Add this structure to the series of XRD plots to compare XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuZnS2_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12}
)
###Output
_____no_output_____
###Markdown
5. Test your skills This is the wee beginning of making an alloy. Here are some follow-up steps: I constructed similar alloys to those that we just explored, at `x = 1/4` and `x = 1/2`, and relaxed them with DFT. We'll explore my results here: 5.1 Exercise: compare relaxed DFT structures to estimates
###Code
from pymatgen import Structure
###Output
_____no_output_____
###Markdown
These are my output .cif files from one of our DFT workflows. See `fireworks` and `atomate` packages for details [here](https://atomate.org/atomate.vasp.fireworks.html).
###Code
CuZn3S4_relaxed = Structure.from_file("assets/Zn3CuS4_Amm2.cif")
CuZnS2_relaxed = Structure.from_file("assets/ZnCuS2_R3m.cif")
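# (optional, a hedged aside) another way to compare the relaxed and estimated structures
# is pymatgen's StructureMatcher, which reports whether two structures match:
from pymatgen.analysis.structure_matcher import StructureMatcher
sm = StructureMatcher()
CuZnS2_matches = sm.fit(CuZnS2_relaxed, CuZnS2_structure_estimate)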
###Output
_____no_output_____
###Markdown
How do these space groups compare to our estimates?
###Code
CuZn3S4_structure_estimate.get_space_group_info()
CuZn3S4_relaxed.get_space_group_info()
CuZnS2_structure_estimate.get_space_group_info()
CuZnS2_relaxed.get_space_group_info()
###Output
_____no_output_____
###Markdown
Are they higher or lower in symmetry? After relaxation, both structures are in a different space group (lower symmetry) than the alloys we just made. This is likely due to structural distortions. Add in DFT structure to XRD Replace the alloy structures in the previous XRD exercise with the two relaxed alloy structures, again comparing XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_relaxed,
CuZnS2_relaxed,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12},
)
###Output
_____no_output_____
###Markdown
Peak splittings are now present in the diffraction patterns, and the shift to higher $2\theta$ is not as significant. 5.2 Lesson: add computed entries to phase diagram ***Scientific question: are these new phases stable?*** To assess the stability of these new phases, let's look at JSON files containing `ComputedEntry` data:
###Code
from monty.serialization import loadfn
Zn3CuS4_Amm2_entry = loadfn("assets/Zn3CuS4_Amm2_entry.json")
ZnCuS2_R3m_entry = loadfn("assets/ZnCuS2_R3m_entry.json")
###Output
_____no_output_____
###Markdown
These entries were created by relaxing the above structures using one of our DFT workflows. An "entry" is mainly just a composition and an energy, so it can be created manually, without performing a calculation, or even from experimental data.
###Code
print(ZnCuS2_R3m_entry)
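# A minimal sketch (illustration only; the energy value is just this entry's uncorrected
# total energy) of how an entry could be built by hand instead of loaded from a calculation:
from pymatgen.entries.computed_entries import ComputedEntry
manual_entry = ComputedEntry("ZnCuS2", -61.88)  # composition + total energy (eV)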
###Output
None ComputedEntry - Zn1 Cu1 S2 (ZnCuS2)
Energy (Uncorrected) = -61.8816 eV (-15.4704 eV/atom)
Correction = -1.3269 eV (-0.3317 eV/atom)
Energy (Final) = -63.2085 eV (-15.8021 eV/atom)
Energy Adjustments:
MP Gas Correction : 0.0000 eV (0.0000 eV/atom)
MP Anion Correction : -1.3269 eV (-0.3317 eV/atom)
MP Advanced Correction : 0.0000 eV (0.0000 eV/atom)
Parameters:
run_type = GGA
is_hubbard = False
Data:
###Markdown
We can add these two entries to `entries`, our set of `ComputedEntry` data from MP in the Cu-Zn-S phase space:
###Code
new_entries = entries + [Zn3CuS4_Amm2_entry, ZnCuS2_R3m_entry]
new_phase_diagram = PhaseDiagram(new_entries)
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
new_plotter = PDPlotter(new_phase_diagram, show_unstable=10, markersize=20)
x = new_plotter.get_plot(new_phase_diagram, label_unstable=False)
###Output
_____no_output_____
###Markdown
We see our two new phases show up here! How does the energy landscape change? Contour phase diagram
###Code
new_fig = new_plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Compare to the phase diagram before new phases were added:
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram
###Code
from pymatgen import Composition
new_cpd = CompoundPhaseDiagram(new_entries, [Composition("ZnS"), Composition("CuS")])
new_compound_plotter = PDPlotter(new_cpd, show_unstable=10, markersize=20)
new_compound_plotter.show()
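# (note) get_e_above_hull returns the energy above the convex hull in eV/atom:
# 0 means the phase sits on the hull, and small positive values indicate metastability.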
new_phase_diagram.get_e_above_hull(ZnCuS2_R3m_entry)
new_phase_diagram.get_e_above_hull(Zn3CuS4_Amm2_entry)
###Output
_____no_output_____
###Markdown
Exploring New Alloy Systems with Pymatgen Author: Rachel Woods-Robinson Version: July 29, 2020  Outline 1. Select a test-case system * 1.1 Exercise: `Structure` and `MPRester` refresher * 1.2 Lesson: add oxidation states to a `Structure` * 1.3 Bonus: plot `BandStructure` 2. Select an alloy partner * 2.1 Lesson: find possible dopants * 2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS * 2.3 Lesson: explore phase diagrams 3. Transform to make a new CuxZn1-xS alloy * 3.1 Lesson: structure transformation * 3.2 Exercise: try your own transformation on CuZnS2 4. Calculate new properties * 4.1 Lesson: volume prediction and XRD plot * 4.2 Exercise: try this on your CuZnS2 structure 5. Test your skills * 5.1 Exercise: compare relaxed DFT structures to estimates * 5.2 Lesson: add computed entries to phase diagram * 5.3 Next steps 1. Select a test-case system ***In this notebook we will focus on cubic zinc-blende ZnS, a wide band gap (transparent) semiconductor. In my PhD research I study p-type transparent semiconductors, so I will pose the question: how can we use ZnS as a starting point to create a p-type transparent semiconductor, and how can pymatgen help with this?*** Import the `MPRester` client:
###Code
from pymatgen.ext.matproj import MPRester
###Output
_____no_output_____
###Markdown
The Materials ID (mp-id) of zinc-blende ZnS is mp-10695, see https://materialsproject.org/materials/mp-10695/.
###Code
ZnS_mpid = "mp-10695"
###Output
_____no_output_____
###Markdown
1.1 Exercise: `Structure` and `MPRester` refresher Get the structure
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_structure = mpr.get_structure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# ZnS_structure = loadfn("assets/ZnS_structure.json")
###Output
_____no_output_____
###Markdown
Get space group information
###Code
ZnS_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
If you want to, try it out on our web app [here](https://materialsproject.org/apps/xtaltoolkit/%7B%22input%22%3A0%2C%22materialIDs%22%3A%22mp-10695%22%7D).- Click "Draw atoms outside unit cell bonded to atoms within unit cell"- Play around with it!  1.2 Lesson: add oxidation states to a `Structure` Pymatgen has a simple transformation to estimate the likely oxidation state of each specie in stoichiometric compounds using a bond-valence analysis approach. This information is needed to compare ionic radii and assess substitutional dopant probability. You can also enter the oxidation states manually if you'd prefer.
###Code
from pymatgen.transformations.standard_transformations import AutoOxiStateDecorationTransformation
###Output
_____no_output_____
###Markdown
Initialize this transformation:
###Code
oxi_transformation = AutoOxiStateDecorationTransformation()
ZnS_structure_oxi = oxi_transformation.apply_transformation(ZnS_structure)
print(ZnS_structure_oxi)
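# The manual route mentioned above (a sketch, if you'd rather not rely on the
# bond-valence guess): decorate a copy of the structure with oxidation states yourself.
ZnS_structure_manual_oxi = ZnS_structure.copy()
ZnS_structure_manual_oxi.add_oxidation_state_by_element({"Zn": 2, "S": -2})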
###Output
_____no_output_____
###Markdown
1.3 Bonus: plot `BandStructure`
###Code
from pymatgen.electronic_structure.plotter import BSPlotter
###Output
_____no_output_____
###Markdown
This code retrieves a `BandStructureSymmLine` object which contains all the information about a line-mode band structure.
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
ZnS_bs = mpr.get_bandstructure_by_material_id(ZnS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# ZnS_bs = loadfn("assets/ZnS_bs.json")
###Output
_____no_output_____
###Markdown
This band structure can be plotted using `BSPlotter`:
###Code
ZnS_bsp = BSPlotter(ZnS_bs)
ZnS_bsp.show() # takes a second
###Output
_____no_output_____
###Markdown
Band gap correction
###Code
ZnS_bs.get_band_gap()
###Output
_____no_output_____
###Markdown
ZnS has an experimental gap of approximately 3.5 eV, but the GGA calculated gap is far too low! We can apply a "scissor" to this band structure to correct for this.Scissor corrections are only appropriate if they're clearly acknowledged! They're used here because we expect the shape of the bands to be correct from our GGA calculation, and we know experimentally the gap is 3.5 eV. We do not use any scissor corrections on the Materials Project website or in the database.
###Code
ZnS_bs_scissor = ZnS_bs.apply_scissor(new_band_gap=3.5)
ZnS_bsp_scissor = BSPlotter(ZnS_bs_scissor)
ZnS_bsp_scissor.show()
###Output
_____no_output_____
###Markdown
2. Select an alloy partner 2.1 Lesson: find possible dopants ***Scientific question: Which p-type dopants are most likely to sit at substitutional sites in ZnS?*** Pymatgen has a machine-learned method for estimating the probability that one ion will substitute for another ([Hautier et al. 2011](https://doi.org/10.1021/ic102031h)), and reports the results ranked in order of probability. Note the input structure has to be "decorated" with oxidation states for this method to work.
###Code
from pymatgen.analysis.structure_prediction.dopant_predictor import get_dopants_from_substitution_probabilities
substitutional_dopants = get_dopants_from_substitution_probabilities(
ZnS_structure_oxi, num_dopants=10)
###Output
_____no_output_____
###Markdown
Here are some options to dope ZnS p-type:
###Code
p_dopants = substitutional_dopants['p_type']
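# (optional) the same dictionary should also hold donor candidates under the
# 'n_type' key, if you ever want to look at n-type doping instead:
n_dopants = substitutional_dopants['n_type']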
###Output
_____no_output_____
###Markdown
We can see this returns a list of dictionaries:
###Code
print(p_dopants)
###Output
_____no_output_____
###Markdown
To make this easier to read we can use the `pandas` package:
###Code
import pandas as pd
pd.DataFrame(p_dopants)
###Output
_____no_output_____
###Markdown
2.2 Exercise: find the best alloy partner (A = ?) for AxZn1-xS ***Scientific question: is a p-type zinc-blende AxZn1-xS alloy possible?*** Let's see if zinc-blende binaries exist for these ternaries, and how far off the hull they sit. Find dopants First, find a list of possible cation dopant elements:
###Code
# I've pre-written this code block for convenience
# all it does is take the possible dopants list given previously, takes the cations, and makes a list of their elements
possible_cation_dopants = []
for x in p_dopants:
specie = x["dopant_species"]
if specie.oxi_state > 0:
possible_cation_dopants.append(str(specie.element))
print(possible_cation_dopants)
###Output
_____no_output_____
###Markdown
Query for end-point structure Next, let's query the `MPRester` to make a table of all of the binary compounds with a space group `"F-43m"` that contain sulfur and one of these `possible_cation_dopants`. Note that the query criteria are listed on the [mapidoc](https://github.com/materialsproject/mapidoc/tree/master/materials).
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
query = mpr.query(
{
# the query criteria
"elements": {
"$all": ["S"], "$in": possible_cation_dopants
},
"nelements": 2,
"spacegroup.symbol": "F-43m"
},
# the properties we want to return
[
"task_id",
"e_above_hull",
"pretty_formula",
"theoretical",
"spacegroup.symbol"
]
)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# query = loadfn("assets/alloy_partner_query.json")
pd.DataFrame(query)
###Output
_____no_output_____
###Markdown
Which cation should we pick? Cu! Ag8S is a theoretical intermetallic, and its energy is ridiculously high. CuS is a theoretical compound and is not "on the hull," but it's close at only 0.01 eV/atom, meaning it is only slightly metastable. OK, so let's pick Cu+ to use as a p-type dopant, which I've explored experimentally in the past (see [Woods-Robinson et al. 2019](https://doi.org/10.1016/j.matt.2019.06.019)). Retrieve end-point structure To proceed, we have to retrieve the `Structure` for CuS:
###Code
CuS_mpid = query[0]["task_id"]
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
CuS_structure = mpr.get_structure_by_material_id(CuS_mpid)
#### if you're having problems with your internet or API key
# from monty.serialization import loadfn
# CuS_structure = loadfn("assets/CuS_structure.json")
###Output
_____no_output_____
###Markdown
Yep! We’re not done, but this is a good starting point for dopants to investigate with further defect calculations. This can be accomplished using workflows from packages like [PyCDT (Broberg et al. 2018)](https://doi.org/10.1016/j.cpc.2018.01.004) which integrate with `pymatgen`'s defect capabilities. 2.3 Lesson: explore phase diagrams ***Scientific question: what does Cu-Zn-S phase space look like?*** There are many built-in tools to explore phase diagrams in `pymatgen`. To build a phase diagram, you must define a set of `ComputedEntries` with compositions, formation energies, corrections, and other calculation details.
###Code
from pymatgen.analysis.phase_diagram import PhaseDiagram, PDPlotter, CompoundPhaseDiagram, GrandPotentialPhaseDiagram
###Output
_____no_output_____
###Markdown
We can import entries in this system using the `MPRester`. This gives a list of all of the `ComputedEntries` on the database:
###Code
with MPRester() as mpr: # YOUR API KEY GOES IN THIS FUNCTION! copy from materialsproject.org/dashboard
entries = mpr.get_entries_in_chemsys(['Cu', 'Zn', 'S'])
#### if you're having problems with your internet or API key:
# from monty.serialization import loadfn
# entries = loadfn("assets/Cu-Zn-S_entries.json")
phase_diagram = PhaseDiagram(entries)
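# (optional) quick sanity check: which phases does MP currently place on the convex
# hull in this chemical system?
stable_formulas = sorted({e.composition.reduced_formula for e in phase_diagram.stable_entries})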
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
plotter = PDPlotter(phase_diagram, show_unstable=True, markersize=20) # we increase the marker size here to make it easier to see the stable points
plotter.show()
###Output
_____no_output_____
###Markdown
Contour phase diagram
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram Let's zoom in on the tie-line between ZnS and CuS, which is where we are interested in alloying.
###Code
from pymatgen.core.composition import Composition
cpd = CompoundPhaseDiagram(entries, [Composition("ZnS"), Composition("CuS")], normalize_terminal_compositions=False)
compound_plotter = PDPlotter(cpd, show_unstable=100, markersize=20)
compound_plotter.show()
###Output
_____no_output_____
###Markdown
Mapping out chemical potential of cations This may be a useful tool to think about tuning chemical potential for synthesis.
###Code
from pymatgen.core import Element
plotter.get_chempot_range_map_plot([Element("Cu"), Element("Zn")]).show()
###Output
_____no_output_____
###Markdown
There are a lot of different types of phase diagrams available (see the [`pymatgen.analysis.phase_diagram` module](https://pymatgen.org/pymatgen.analysis.phase_diagram.html)).Our key takeaway here is that in MP, the Cu-Zn-S ternary space is EMPTY!! So let's fill it in... 3. Transform to make a new CuxZn1-xS alloy 3.1 Lesson: structure transformation Substitute your dopant to create a disordered structure Now, so let's substitute 1/4 of the Zn2+ with Cu+ ions (note: we will be ignoring charge compensation here, but this is important to take into account in real calculations!). That is, let's set substitutional fraction `x = 1/4` in CuxZn1-xS. Doing so using `Structure.replace_species()` will create a ***disordered structure object***.
###Code
x = 1/4
disordered_structure = ZnS_structure_oxi.copy()
disordered_structure.replace_species({"Zn2+": {"Cu+": x, "Zn2+": 1 - x}})
print(disordered_structure)
###Output
_____no_output_____
###Markdown
We can print the integer formula of this composition:
###Code
disordered_structure.composition
###Output
_____no_output_____
###Markdown
Let's rename this structure with its chemical formula to avoid confusion later on:
###Code
CuZn3S4_disordered = disordered_structure
# if you want to download this file and load it elsewhere
CuZn3S4_disordered.to("cif", "assets/CuZn3S4_disordered.cif")
###Output
_____no_output_____
###Markdown
Here's a screenshot of the CuZn3S4 disordered structure, where each cation site has partial occupancy of a Zn and Cu atom.  Transform structure Though disorder may indeed be more representative of a real crystal structure, we need to convert this to an ordered structure to perform DFT calculations. This is because DFT can only perform simulations on whole atoms, not fractional atoms! Pymatgen supports a variety of structural "transformations" (a list of supported transformations is available [here](https://pymatgen.org/pymatgen.transformations.html)). Here are three methods from the `pymatgen.transformations.advanced_transformations` module to take a disordered structure and order it: 1. `OrderDisorderedStructureTransformation`: a highly simplified method to create an ordered supercell ranked by Ewald sums. 2. `EnumerateStructureTransformation`: a method to order a disordered structure that requires [the `enumlib` code](https://github.com/msg-byu/enumlib) to also be installed. 3. `SQSTransformation`: a method that requires the [`ATAT` code (Van de Walle et al. 2013)](https://doi.org/10.1016/j.calphad.2013.06.006) to be installed and creates a special quasirandom structure (SQS) from a structure with partial occupancies. For this demo, we'll be focusing on the simplest transformation: `OrderDisorderedStructureTransformation`
###Code
from pymatgen.transformations.advanced_transformations import OrderDisorderedStructureTransformation
odst = OrderDisorderedStructureTransformation()
odst.apply_transformation(CuZn3S4_disordered)
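# (Not run here) the EnumerateStructureTransformation route listed above would look
# roughly like this -- a sketch only, and it assumes enumlib is installed:
#   from pymatgen.transformations.advanced_transformations import EnumerateStructureTransformation
#   est = EnumerateStructureTransformation()
#   est.apply_transformation(CuZn3S4_disordered, return_ranked_list=10)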
###Output
_____no_output_____
###Markdown
We have to be careful though!! If we just apply this transformation, it doesn't fail, but it returns a structure where all the Cu+ is gone! `OrderDisorderedStructureTransformation` will round up or down if the cell is not large enough to account for `x`. Thus, we need to first make a supercell and then apply the transformation. Make a supercell With this transformation, we have to first create a disordered ***supercell*** to transform into. A supercell is just a structure that is scaled by a matrix so that it repeats several times. Here, the supercell must be large enough such that the composition in question can be achieved. Let's scale the structure by 8x. I like to use the `numpy` package to construct scaling matrices here (a 4x supercell would be sufficient for `x = 1/4`, but this leaves room to try e.g. `x = 1/8`):
###Code
import numpy as np
scaling_matrix = np.array([
[2, 0, 0],
[0, 2, 0],
[0, 0, 2]
])
###Output
_____no_output_____
###Markdown
We can see that this would scale the cell's volume by 8, but to verify:
###Code
scaling_volume = np.linalg.det(scaling_matrix)
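# (the determinant of this diagonal 2x2x2 scaling matrix is 8.0, i.e. an 8x volume)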
###Output
_____no_output_____
###Markdown
For convenience, you can also simply use `scaling_matrix = 2` if you're scaling the same in all directions, or `scaling_matrix = [2, 2, 2]`. These are the same in practice.
###Code
CuZn3S4_disordered_supercell = CuZn3S4_disordered * scaling_matrix
print(CuZn3S4_disordered_supercell)
CuZn3S4_ordered_structures = odst.apply_transformation(CuZn3S4_disordered_supercell, return_ranked_list = 10)
print(CuZn3S4_ordered_structures)
###Output
_____no_output_____
###Markdown
This is a list of ten ordered structures ranked by ***Ewald sum*** (dict key `"energy"`). Note that this does NOT correlate with the lowest energy structure! Let's just use the first entry for our example:
###Code
CuZn3S4_ordered_structure = CuZn3S4_ordered_structures[0]["structure"]
print(CuZn3S4_ordered_structure)
###Output
_____no_output_____
###Markdown
If you want to download this file:
###Code
CuZn3S4_ordered_structure.to("cif", "assets/CuZn3S4_ordered_structure.cif")
# (note: "-0" is actually just 0, as this is a non-magnetic configuration)
###Output
_____no_output_____
###Markdown
BOOM! Now we have an alloy structure!! To view this structure you can upload your "CuZn3S4_ordered_structure.cif" file on [Crystal Toolkit](https://materialsproject.org/apps/xtaltoolkit).  3.2 Exercise: try your own transformation on CuZnS2 Set a new composition, `x = 1/2` (simpler fractions are easier in DFT calculations because supercells can be smaller!). This will yield a structure with composition CuZnS2.
###Code
x_CuZnS2 = 1/2
CuZnS2_disordered = ZnS_structure_oxi.copy()
CuZnS2_disordered.replace_species(
{
"Zn2+": {
"Cu+": x_CuZnS2,
"Zn2+": 1 - x_CuZnS2
}
}
)
###Output
_____no_output_____
###Markdown
Reminder: for more complex fractions (e.g. `x = 1/16`), supercells need to be scaled accordingly!
###Code
scaling_matrix = np.array([2, 2, 2])
CuZnS2_disordered_supercell = CuZnS2_disordered * scaling_matrix
CuZnS2_ordered_structures = odst.apply_transformation(CuZnS2_disordered_supercell,
return_ranked_list = 10)
###Output
_____no_output_____
###Markdown
Pick one:
###Code
CuZnS2_ordered_structure = CuZnS2_ordered_structures[0]["structure"]
print(CuZnS2_ordered_structure)
###Output
_____no_output_____
###Markdown
Check that this is the composition you expect:
###Code
CuZnS2_ordered_structure.composition.reduced_formula
###Output
_____no_output_____
###Markdown
And check the space group:
###Code
CuZnS2_ordered_structure.get_space_group_info()
###Output
_____no_output_____
###Markdown
Is it the same as ZnS? Because of the Cu substitution, this structure has a different space group than ZnS! 4. Calculate new properties 4.1 Lesson: volume prediction and XRD plot So far we just have a really rough guess of an alloy structure, and the lattice parameters are still equal to those of ZnS. We can estimate the new volume $V_{x-guess}$ after the substitution using Vegard's Law (assuming zero bowing). $V_{x-estimate} = V_{scaling } \times [ V_{CuS}(x) + V_{ZnS}(1-x) ] $ $V_{CuZn_3S_4-estimate} = [2\times2\times2] \times [ V_{CuS}(0.25) + V_{ZnS}(0.75) ] $
###Code
scaling_matrix
x
scaling_volume = scaling_matrix.prod()
CuZn3S4_estimated_volume = scaling_volume * ((ZnS_structure.volume) * (1 - x) +
(CuS_structure.volume) * x)
print(CuZn3S4_ordered_structure.volume)
print(CuZn3S4_estimated_volume)
CuZn3S4_structure_estimate = CuZn3S4_ordered_structure.copy()
CuZn3S4_structure_estimate.scale_lattice(CuZn3S4_estimated_volume)
print(CuZn3S4_structure_estimate)
###Output
_____no_output_____
###Markdown
This is better but still wrong, and does not take into account any structural distortions. Note that there are some other methods on pymatgen to guess structure volume (see `pymatgen.analysis.structure_prediction.volume_predictor`), but in my experience Vegard's law is usually just as helpful. Your next step would be to relax this new structure using DFT or another method (see below). Calculate XRD, compare to original structure Now we can compare this structure to our original ZnS and CuS structure to, for example, see how the ***X-ray diffraction (XRD)*** patterns are expected to shift as `x` increases in CuxZn1-xS:
###Code
from pymatgen.analysis.diffraction.xrd import XRDCalculator
###Output
_____no_output_____
###Markdown
Initialize the `XRDCalculator` with the conventional Cu-K$\alpha$ wavelength (note: Cu here has nothing to do with the Cu we're adding to the structure):
###Code
xrd = XRDCalculator()
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
two_theta_range=[25, 65],
annotate_peaks=False, # to keep the plot cleaner
size_kwargs={'w': 10, 'h': 9}, # these options are optional to make the plot look nicer
)
###Output
_____no_output_____
###Markdown
You can see how the $2\theta$ peaks shift slightly to the right with addition of Cu! 4.2 Exercise: try this on your CuZnS2 structure Guess the structure volume using Vegard's Law, and correct for this: $V_{x-estimate} = V_{scaling } \times [ V_{CuS}(?) + V_{ZnS}(?) ] $
###Code
x_CuZnS2
scaling_volume
CuZnS2_structure_estimate = CuZnS2_ordered_structure.copy()
CuZnS2_structure_estimate.scale_lattice(scaling_volume *
((ZnS_structure.volume) * (1 - x_CuZnS2) +
(CuS_structure.volume) * x_CuZnS2))
###Output
_____no_output_____
###Markdown
Print the new structure
###Code
print(CuZnS2_structure_estimate)
###Output
_____no_output_____
###Markdown
Add this structure to the series of XRD plots to compare XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_structure_estimate,
CuZnS2_structure_estimate,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12}
)
###Output
_____no_output_____
###Markdown
5. Test your skills This is the wee beginning of making an alloy. Here are some follow-up steps: I constructed similar alloys to those that we just explored, at `x = 1/4` and `x = 1/2`, and relaxed them with DFT. We'll explore my results here: 5.1 Exercise: compare relaxed DFT structures to estimates
###Code
from pymatgen.core.structure import Structure
###Output
_____no_output_____
###Markdown
These are my output .cif files from one of our DFT workflows. See `fireworks` and `atomate` packages for details [here](https://atomate.org/atomate.vasp.fireworks.html).
###Code
CuZn3S4_relaxed = Structure.from_file("assets/Zn3CuS4_Amm2.cif")
CuZnS2_relaxed = Structure.from_file("assets/ZnCuS2_R3m.cif")
###Output
_____no_output_____
###Markdown
How do these space groups compare to our estimates?
###Code
CuZn3S4_structure_estimate.get_space_group_info()
CuZn3S4_relaxed.get_space_group_info()
CuZnS2_structure_estimate.get_space_group_info()
CuZnS2_relaxed.get_space_group_info()
###Output
_____no_output_____
###Markdown
Are they higher or lower in symmetry? After relaxation, both structures are in a different space group (lower symmetry) than the alloys we just made. This is likely due to structural distortions. Add in DFT structure to XRD Replace the alloy structures in the previous XRD exercise with the two relaxed alloy structures, again comparing XRD for `x = 0, 0.25, 0.5, 1`:
###Code
structures = [
ZnS_structure,
CuZn3S4_relaxed,
CuZnS2_relaxed,
CuS_structure
]
xrd_plots = xrd.plot_structures(
structures,
annotate_peaks=False,
two_theta_range=[25,65],
size_kwargs={'w': 10, 'h': 12},
)
###Output
_____no_output_____
###Markdown
Peak splittings are now present in the diffraction patterns, and the shift to higher $2\theta$ is not as significant. 5.2 Lesson: add computed entries to phase diagram ***Scientific question: are these new phases stable?*** To assess the stability of these new phases, let's look at JSON files containing `ComputedEntry` data:
###Code
from monty.serialization import loadfn
Zn3CuS4_Amm2_entry = loadfn("assets/Zn3CuS4_Amm2_entry.json")
ZnCuS2_R3m_entry = loadfn("assets/ZnCuS2_R3m_entry.json")
###Output
_____no_output_____
###Markdown
These entries were created by relaxing the above structures using one of our DFT workflows. An "entry" is mainly just a composition and an energy, so it can be created manually, without performing a calculation, or even from experimental data.
###Code
print(ZnCuS2_R3m_entry)
###Output
_____no_output_____
###Markdown
We can add these two entries to `entries`, our set of `ComputedEntry` data from MP in the Cu-Zn-S phase space:
###Code
new_entries = entries + [Zn3CuS4_Amm2_entry, ZnCuS2_R3m_entry]
new_phase_diagram = PhaseDiagram(new_entries)
###Output
_____no_output_____
###Markdown
Conventional phase diagram
###Code
new_plotter = PDPlotter(new_phase_diagram, show_unstable=10, markersize=20)
x = new_plotter.get_plot(new_phase_diagram, label_unstable=False)
###Output
_____no_output_____
###Markdown
We see our two new phases show up here! How does the energy landscape change? Contour phase diagram
###Code
new_fig = new_plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Compare to the phase diagram before new phases were added:
###Code
fig = plotter.get_contour_pd_plot()
###Output
_____no_output_____
###Markdown
Binary phase diagram
###Code
from pymatgen.core.composition import Composition
new_cpd = CompoundPhaseDiagram(new_entries, [Composition("ZnS"), Composition("CuS")])
new_compound_plotter = PDPlotter(new_cpd, show_unstable=10, markersize=20)
new_compound_plotter.show()
new_phase_diagram.get_e_above_hull(ZnCuS2_R3m_entry)
new_phase_diagram.get_e_above_hull(Zn3CuS4_Amm2_entry)
###Output
_____no_output_____ |
_posts/python/statistical/histogram/histograms.ipynb | ###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version CheckRun `pip install plotly --upgrade` to update your Plotly version
###Code
import plotly
plotly.__version__
###Output
_____no_output_____
###Markdown
Basic Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x)]
py.iplot(data, filename='basic histogram')
###Output
_____no_output_____
###Markdown
Normalized Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x,
histnorm='probability')]
py.iplot(data, filename='normalized histogram')
###Output
_____no_output_____
###Markdown
Horizontal Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
y = np.random.randn(500)
data = [go.Histogram(y=y)]
py.iplot(data, filename='horizontal histogram')
###Output
_____no_output_____
###Markdown
Overlaid Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(barmode='overlay')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='overlaid histogram')
###Output
_____no_output_____
###Markdown
Stacked Histograms
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)
trace0 = go.Histogram(
x=x0
)
trace1 = go.Histogram(
x=x1
)
data = [trace0, trace1]
layout = go.Layout(barmode='stack')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='stacked histogram')
###Output
_____no_output_____
###Markdown
Styled Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
histnorm='percent',
name='control',
xbins=dict(
start=-4.0,
end=3.0,
size=0.5
),
marker=dict(
color='#FFD7E9',
),
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
name='experimental',
xbins=dict(
start=-3.0,
end=4,
size=0.5
),
marker=dict(
color='#EB89B5'
),
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
bargap=0.2,
bargroupgap=0.1
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='styled histogram')
###Output
_____no_output_____
###Markdown
Cumulative Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x,
cumulative=dict(enabled=True))]
py.iplot(data, filename='cumulative histogram')
###Output
_____no_output_____
###Markdown
Specify Binning Function
###Code
import plotly.plotly as py
import plotly.graph_objs as go
x = ["Apples","Apples","Apples","Oranges", "Bananas"]
y = ["5","10","3","10","5"]
data = [
go.Histogram(
histfunc = "count",
y = y,
x = x,
name = "count"
),
go.Histogram(
histfunc = "sum",
y = y,
x = x,
name = "sum"
)
]
py.iplot(data, filename='binning function')
###Output
_____no_output_____
###Markdown
Custom Binning For custom binning along x-axis, use the attribute [`nbinsx`](https://plot.ly/python/reference/histogram-nbinsx). Please note that the autobin algorithm will choose a 'nice' round bin size that may result in somewhat fewer than `nbinsx` total bins. Alternatively, you can set the exact values for [`xbins`](https://plot.ly/python/reference/histogram-xbins) along with `autobinx = False`.
###Code
from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go
x = ['1970-01-01', '1970-01-01', '1970-02-01', '1970-04-01', '1970-01-02', '1972-01-31', '1970-02-13', '1971-04-19']
trace0 = go.Histogram(
x=x,
nbinsx = 4,
)
trace1 = go.Histogram(
x=x,
nbinsx = 8,
)
trace2 = go.Histogram(
x=x,
nbinsx = 10,
)
trace3 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M18'),
autobinx = False
)
trace4 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M4'),
autobinx = False
)
trace5 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M2'),
autobinx = False
)
fig = tools.make_subplots(rows=3, cols=2)
fig.append_trace(trace0, 1, 1)
fig.append_trace(trace1, 1, 2)
fig.append_trace(trace2, 2, 1)
fig.append_trace(trace3, 2, 2)
fig.append_trace(trace4, 3, 1)
fig.append_trace(trace5, 3, 2)
py.iplot(fig, filename='custom binning')
###Output
This is the format of your plot grid:
[ (1,1) x1,y1 ] [ (1,2) x2,y2 ]
[ (2,1) x3,y3 ] [ (2,2) x4,y4 ]
[ (3,1) x5,y5 ] [ (3,2) x6,y6 ]
###Markdown
Reference See https://plot.ly/python/reference/histogram for more information and chart attribute options!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'histograms.ipynb', 'python/histograms/', 'Python Histograms | plotly',
'How to make Histograms in Python with Plotly.',
title = 'Python Histograms | plotly',
name = 'Histograms',
has_thumbnail='true', thumbnail='thumbnail/histogram.jpg',
language='python', page_type='example_index',
display_as='statistical', order=4, redirect_from='/python/histogram-tutorial/',
ipynb= '~notebook_demo/22')
###Output
_____no_output_____
###Markdown
New to Plotly?Plotly's Python library is free and open source! [Get started](https://plot.ly/python/getting-started/) by downloading the client and [reading the primer](https://plot.ly/python/getting-started/).You can set up Plotly to work in [online](https://plot.ly/python/getting-started/initialization-for-online-plotting) or [offline](https://plot.ly/python/getting-started/initialization-for-offline-plotting) mode, or in [jupyter notebooks](https://plot.ly/python/getting-started/start-plotting-online).We also have a quick-reference [cheatsheet](https://images.plot.ly/plotly-documentation/images/python_cheat_sheet.pdf) (new!) to help you get started! Version CheckRun `pip install plotly --upgrade` to update your Plotly version
###Code
import plotly
plotly.__version__
###Output
_____no_output_____
###Markdown
Basic Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x)]
py.iplot(data, filename='basic histogram')
###Output
_____no_output_____
###Markdown
Normalized Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x,
histnorm='probability')]
py.iplot(data, filename='normalized histogram')
###Output
_____no_output_____
###Markdown
Horizontal Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
y = np.random.randn(500)
data = [go.Histogram(y=y)]
py.iplot(data, filename='horizontal histogram')
###Output
_____no_output_____
###Markdown
Overlaid Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(barmode='overlay')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='overlaid histogram')
###Output
_____no_output_____
###Markdown
Stacked Histograms
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)
trace0 = go.Histogram(
x=x0
)
trace1 = go.Histogram(
x=x1
)
data = [trace0, trace1]
layout = go.Layout(barmode='stack')
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='stacked histogram')
###Output
_____no_output_____
###Markdown
Styled Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x0 = np.random.randn(500)
x1 = np.random.randn(500)+1
trace1 = go.Histogram(
x=x0,
histnorm='percent',
name='control',
xbins=dict(
start=-4.0,
end=3.0,
size=0.5
),
marker=dict(
color='#FFD7E9',
),
opacity=0.75
)
trace2 = go.Histogram(
x=x1,
name='experimental',
xbins=dict(
start=-3.0,
end=4,
size=0.5
),
marker=dict(
color='#EB89B5'
),
opacity=0.75
)
data = [trace1, trace2]
layout = go.Layout(
title='Sampled Results',
xaxis=dict(
title='Value'
),
yaxis=dict(
title='Count'
),
bargap=0.2,
bargroupgap=0.1
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='styled histogram')
###Output
_____no_output_____
###Markdown
Cumulative Histogram
###Code
import plotly.plotly as py
import plotly.graph_objs as go
import numpy as np
x = np.random.randn(500)
data = [go.Histogram(x=x,
cumulative=dict(enabled=True))]
py.iplot(data, filename='cumulative histogram')
###Output
_____no_output_____
###Markdown
Specify Binning Function
###Code
import plotly.plotly as py
import plotly.graph_objs as go
x = ["Apples","Apples","Apples","Oranges", "Bananas"]
y = ["5","10","3","10","5"]
data = [
go.Histogram(
histfunc = "count",
y = y,
x = x,
name = "count"
),
go.Histogram(
histfunc = "sum",
y = y,
x = x,
name = "sum"
)
]
py.iplot(data, filename='binning function')
###Output
_____no_output_____
###Markdown
Custom Binning For custom binning along x-axis, use the attribute [`nbinsx`](https://plot.ly/python/reference/histogram-nbinsx). Please note that the autobin algorithm will choose a 'nice' round bin size that may result in somewhat fewer than `nbinsx` total bins. Alternatively, you can set the exact values for [`xbins`](https://plot.ly/python/reference/histogram-xbins) along with `autobinx = False`.
###Code
from plotly import tools
import plotly.plotly as py
import plotly.graph_objs as go
x = ['1970-01-01', '1970-01-01', '1970-02-01', '1970-04-01', '1970-01-02', '1972-01-31', '1970-02-13', '1971-04-19']
trace0 = go.Histogram(
x=x,
nbinsx = 4,
)
trace1 = go.Histogram(
x=x,
nbinsx = 8,
)
trace2 = go.Histogram(
x=x,
nbinsx = 10,
)
trace3 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M18'),
autobinx = False
)
trace4 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M4'),
autobinx = False
)
trace5 = go.Histogram(
x=x,
xbins=dict(
start='1969-11-15',
end='1972-03-31',
size= 'M2'),
autobinx = False
)
fig = tools.make_subplots(rows=3, cols=2)
fig.append_trace(trace0, 1, 1)
fig.append_trace(trace1, 1, 2)
fig.append_trace(trace2, 2, 1)
fig.append_trace(trace3, 2, 2)
fig.append_trace(trace4, 3, 1)
fig.append_trace(trace5, 3, 2)
py.iplot(fig, filename='custom binning')
###Output
This is the format of your plot grid:
[ (1,1) x1,y1 ] [ (1,2) x2,y2 ]
[ (2,1) x3,y3 ] [ (2,2) x4,y4 ]
[ (3,1) x5,y5 ] [ (3,2) x6,y6 ]
###Markdown
Dash Example
###Code
from IPython.display import IFrame
IFrame(src= "https://dash-simple-apps.plotly.host/dash-histogramplot/", width="100%", height="650px", frameBorder="0")
###Output
_____no_output_____
###Markdown
Find the dash app source code [here](https://github.com/plotly/simple-example-chart-apps/tree/master/histogram) Reference See https://plot.ly/python/reference/histogram for more information and chart attribute options!
###Code
from IPython.display import display, HTML
display(HTML('<link href="//fonts.googleapis.com/css?family=Open+Sans:600,400,300,200|Inconsolata|Ubuntu+Mono:400,700" rel="stylesheet" type="text/css" />'))
display(HTML('<link rel="stylesheet" type="text/css" href="http://help.plot.ly/documentation/all_static/css/ipython-notebook-custom.css">'))
!pip install git+https://github.com/plotly/publisher.git --upgrade
import publisher
publisher.publish(
'histograms.ipynb', 'python/histograms/', 'Python Histograms | plotly',
'How to make Histograms in Python with Plotly.',
title = 'Python Histograms | plotly',
name = 'Histograms',
has_thumbnail='true', thumbnail='thumbnail/histogram.jpg',
language='python', page_type='example_index',
display_as='statistical', order=4, redirect_from='/python/histogram-tutorial/',
ipynb= '~notebook_demo/22')
###Output
_____no_output_____ |
doc/source/cookbook/tipsy_and_yt.ipynb | ###Markdown
Loading Files Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load``` convenience function. yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately.
###Code
import yt
###Output
_____no_output_____
###Markdown
We will be looking at a fairly low resolution dataset. In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on. The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8. Just passing the argument `n_ref=8` to load does this for us. >This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
###Code
ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)
###Output
_____no_output_____
###Markdown
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
###Code
ds.field_list
###Output
_____no_output_____
###Markdown
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
###Code
ds.derived_field_list
###Output
_____no_output_____
###Markdown
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
dd = ds.all_data()
xcoord = dd['Gas','Coordinates'][:,0].v
ycoord = dd['Gas','Coordinates'][:,1].v
logT = np.log10(dd['Gas','Temperature'])
plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)
plt.xlim(-20,20)
plt.ylim(-20,20)
cb = plt.colorbar()
cb.set_label('$\log_{10}$ Temperature')
plt.gcf().set_size_inches(15,10)
###Output
_____no_output_____
###Markdown
Making Smoothed Images yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
###Code
yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
###Code
yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Loading Files Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load``` convenience function. yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately.
###Code
import yt
###Output
_____no_output_____
###Markdown
We will be looking at a fairly low resolution dataset. In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on. The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8. Just passing the argument `n_ref=8` to load does this for us. >This dataset is available for download at http://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
###Code
ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)
###Output
_____no_output_____
###Markdown
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
###Code
ds.field_list
###Output
_____no_output_____
###Markdown
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
###Code
ds.derived_field_list
###Output
_____no_output_____
###Markdown
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
dd = ds.all_data()
xcoord = dd['Gas','Coordinates'][:,0].v
ycoord = dd['Gas','Coordinates'][:,1].v
logT = np.log10(dd['Gas','Temperature'])
plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)
plt.xlim(-20,20)
plt.ylim(-20,20)
cb = plt.colorbar()
cb.set_label(r'$\log_{10}$ Temperature')
plt.gcf().set_size_inches(15,10)
###Output
_____no_output_____
###Markdown
Making Smoothed Images yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
###Code
yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
###Code
yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Loading Files Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load_sample``` convenience function. yt will autodetect that you want a tipsy snapshot and download it from the yt hub.
###Code
import yt
###Output
_____no_output_____
###Markdown
We will be looking at a fairly low resolution dataset. >This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
###Code
ds = yt.load_sample('TipsyGalaxy')
###Output
_____no_output_____
###Markdown
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
###Code
ds.field_list
###Output
_____no_output_____
###Markdown
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
###Code
ds.derived_field_list
###Output
_____no_output_____
###Markdown
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ad = ds.all_data()
xcoord = ad['Gas', 'Coordinates'][:,0].v
ycoord = ad['Gas', 'Coordinates'][:,1].v
logT = np.log10(ad['Gas', 'Temperature'])
plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)
plt.xlim(-20,20)
plt.ylim(-20,20)
cb = plt.colorbar()
cb.set_label(r'$\log_{10}$ Temperature')
plt.gcf().set_size_inches(15,10)
###Output
_____no_output_____
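###Markdown
Instead of grabbing every particle with `ds.all_data()`, we can restrict attention to a geometric sub-region. A hedged sketch using a sphere centred on the point of maximum density (assuming `ds.sphere` and 'max' centering behave for particle data as they do elsewhere in yt; `sp` is an illustrative name):
###Code
sp = ds.sphere('max', (10, 'kpc'))          # 10 kpc sphere around the densest point
print(sp['Gas', 'Temperature'].mean())      # mean gas temperature inside the sphere
###Output
_____no_output_____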
###Markdown
Making Smoothed Images yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
###Code
yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
###Code
yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Loading Files Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load``` convenience function. yt will autodetect that you have a tipsy snapshot, and automatically set itself up appropriately.
###Code
import yt
###Output
_____no_output_____
###Markdown
We will be looking at a fairly low resolution dataset. In the next cell, the `ds` object has an attribute called `n_ref` that tells the oct-tree how many particles to refine on. The default is 64, but we'll get prettier plots (at the expense of a deeper tree) with 8. Just passing the argument `n_ref=8` to load does this for us. >This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
###Code
ds = yt.load('TipsyGalaxy/galaxy.00300', n_ref=8)
###Output
_____no_output_____
###Markdown
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
###Code
ds.field_list
###Output
_____no_output_____
###Markdown
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
###Code
ds.derived_field_list
###Output
_____no_output_____
###Markdown
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
dd = ds.all_data()
xcoord = dd['Gas','Coordinates'][:,0].v
ycoord = dd['Gas','Coordinates'][:,1].v
logT = np.log10(dd['Gas','Temperature'])
plt.scatter(xcoord, ycoord, c=logT, s=2*logT, marker='o', edgecolor='none', vmin=2, vmax=6)
plt.xlim(-20,20)
plt.ylim(-20,20)
cb = plt.colorbar()
cb.set_label(r'$\log_{10}$ Temperature')
plt.gcf().set_size_inches(15,10)
###Output
_____no_output_____
###Markdown
Making Smoothed Images yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
###Code
yt.SlicePlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
yt.ProjectionPlot(ds, 'z', ('gas','density'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
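###Markdown
These plot objects also accept annotation callbacks. As a sketch (assuming the standard `annotate_timestamp` and `annotate_scale` callbacks work for particle datasets; the filename is illustrative):
###Code
prj = yt.ProjectionPlot(ds, 'z', ('gas', 'density'), width=(40, 'kpc'), center='m')
prj.annotate_timestamp()    # stamp the current simulation time on the figure
prj.annotate_scale()        # add a physical scale bar
prj.save('galaxy_density_projection.png')
###Output
_____no_output_____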
###Markdown
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
###Code
yt.SlicePlot(ds, 'z', ('gas', 'Fe_fraction'), width=(40, 'kpc'), center='m')
###Output
_____no_output_____
###Markdown
Loading Files Alright, let's start with some basics. Before we do anything, we will need to load a snapshot. You can do this using the ```load_sample``` convenience function. yt will autodetect that you want a tipsy snapshot and download it from the yt hub.
###Code
import yt
###Output
_____no_output_____
###Markdown
We will be looking at a fairly low resolution dataset. >This dataset is available for download at https://yt-project.org/data/TipsyGalaxy.tar.gz (10 MB).
###Code
ds = yt.load_sample("TipsyGalaxy")
###Output
_____no_output_____
###Markdown
We now have a `TipsyDataset` object called `ds`. Let's see what fields it has.
###Code
ds.field_list
###Output
_____no_output_____
###Markdown
yt also defines so-called "derived" fields. These fields are functions of the on-disk fields that live in the `field_list`. There is a `derived_field_list` attribute attached to the `Dataset` object - let's take a look at the derived fields in this dataset:
###Code
ds.derived_field_list
###Output
_____no_output_____
###Markdown
All of the fields in the `field_list` are arrays containing the values for the associated particles. These haven't been smoothed or gridded in any way. We can grab the array-data for these particles using `ds.all_data()`. For example, let's take a look at a temperature-colored scatterplot of the gas particles in this output.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
ad = ds.all_data()
xcoord = ad["Gas", "Coordinates"][:, 0].v
ycoord = ad["Gas", "Coordinates"][:, 1].v
logT = np.log10(ad["Gas", "Temperature"])
plt.scatter(
xcoord, ycoord, c=logT, s=2 * logT, marker="o", edgecolor="none", vmin=2, vmax=6
)
plt.xlim(-20, 20)
plt.ylim(-20, 20)
cb = plt.colorbar()
cb.set_label(r"$\log_{10}$ Temperature")
plt.gcf().set_size_inches(15, 10)
###Output
_____no_output_____
###Markdown
Making Smoothed Images yt will automatically generate smoothed versions of these fields that you can use to plot. Let's make a temperature slice and a density projection.
###Code
yt.SlicePlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m")
yt.ProjectionPlot(ds, "z", ("gas", "density"), width=(40, "kpc"), center="m")
###Output
_____no_output_____
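###Markdown
Beyond slices and projections, the smoothed fields can feed other yt plot types. A hedged sketch of a density-temperature phase plot (assuming `yt.PhasePlot` accepts this particle dataset the way it does grid data; the filename is illustrative):
###Code
ad = ds.all_data()
phase = yt.PhasePlot(ad, ("gas", "density"), ("gas", "temperature"), ("gas", "mass"))
phase.save("galaxy_phase_plot.png")
###Output
_____no_output_____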
###Markdown
Not only are the values in the tipsy snapshot read and automatically smoothed, the auxiliary files that have physical significance are also smoothed. Let's look at a slice of Iron mass fraction.
###Code
yt.SlicePlot(ds, "z", ("gas", "Fe_fraction"), width=(40, "kpc"), center="m")
###Output
_____no_output_____ |
lessons/python/ep1a-introduction.ipynb | ###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60``` From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5``` In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0``` And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)``` We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)``` Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)``` The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)``` To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)``` Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)``` Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)``` Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. These data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
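###Markdown
That `(60, 40)` is the array's shape. Two closely related attributes are worth knowing as well; this is a brief sketch using standard NumPy attributes (not part of the lesson data itself):
###Code
print(data.ndim)   # number of dimensions: 2 (patients x days)
print(data.size)   # total number of elements: 60 * 40 = 2400
###Output
_____no_output_____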
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
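###Markdown
NumPy can also summarise an entire array for us. A quick sketch using standard NumPy methods (these aren't part of the exercises below, just a preview of what array operations make possible):
###Code
print('mean inflammation over all patients and days:', data.mean())
print('maximum inflammation recorded:', data.max())
###Output
_____no_output_____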
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4], element[4:],element[:])
print(element[-1], element[-2])
###Output
oxyg en oxygen
n e
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
data[3:3, :]
###Output
_____no_output_____
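###Markdown
We can confirm that such thin slices really are empty by asking for their shapes (a short sketch using the same `data` array):
###Code
print(data[3:3, 4:4].shape)   # (0, 0): no rows and no columns selected
print(data[3:3, :].shape)     # (0, 40): no rows, but still 40 columns wide
###Output
_____no_output_____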
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
print(weight_kg)
print(weight_kg_text)
###Output
60.0
weight in kilograms:
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
weight_lb = 2.2 * weight_kg
print(weight_lb)
###Output
220.00000000000003
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. These data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
data.dtype
###Output
_____no_output_____
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data: ', data[0, 0])
print('middle value in data: ', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is: ')
print(small)
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original: ')
print(data[:3, 36:])
print('doubledata: ')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata: ')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
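###Markdown
Arithmetic and slicing combine naturally: an operation on a slice only touches the selected elements and returns a new array with the same shape as that slice. A brief sketch using the arrays defined above (`first_patient_week` is just an illustrative name):
###Code
first_patient_week = data[0, 0:7]    # first patient, first seven days
print(first_patient_week)
print(first_patient_week * 2.0)      # element-by-element doubling of just that slice
###Output
_____no_output_____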
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
###Output
_____no_output_____
###Markdown
Solution:
###Code
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print('up to :4', element[:4])
###Output
up to :4 oxyg
###Markdown
Solution:
###Code
print('4: and everything after:', element[4:])
print('everything:', element[:])
###Output
everything: oxygen
###Markdown
Given those answers, explain what `element[1:-1]` does.
###Code
print('after 1 to 2nd last', element[1:-1])
###Output
after 1 to 2nd last xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
data[3:3, 4:4]
print('what is data 3:3, 4:4', data[3:3, 4:4])
print('what is data 3:3, :', data[3:3, :])
###Output
what is data 3:3, : []
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0``` And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)``` We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)``` Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)``` The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)``` To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)``` Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)``` Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)``` Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. These data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data:', data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
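As an aside before slicing (a small sketch added here, not part of the original lesson): indices can also be negative, in which case they count backwards from the end of an axis.
```
# data[-1, -1] is the last patient's measurement on the last day,
# i.e. the bottom-right value of the array printed earlier
print('last value in data:', data[-1, -1])
```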
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small=data[:3,36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata=data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata:')
print(doubledata[:3,36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata=doubledata+data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3,36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age=122
mass=mass*2.0
age=age-20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second='Grace', 'Hopper'
third, fourth = second, first
print(third,fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution:
###Code
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[3:6])
###Output
oxyg
en
oxygen
n
gen
###Markdown
Given those answers, explain what `element[1:-1]` does.
###Code
# element[1:-1] selects the characters from index 1 up to, but not
# including, the last one - for 'oxygen' that is 'xyge'
###Output
_____no_output_____
###Markdown
Solution:
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
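A hint for this exercise (a sketch added here, not part of the original lesson): looking at the `shape` of each slice shows how many rows and columns it contains.
```
# an empty slice still remembers how many columns it spans
print(data[3:3, 4:4].shape)   # (0, 0): no rows and no columns
print(data[3:3, :].shape)     # (0, 40): no rows, but all 40 columns
```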
###Code
print(data[3:3,4:4])
print(data[3:3, :])
###Output
[]
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
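To make the naming rules above concrete, here is a small illustrative sketch (the names `weight` and `Weight` are only examples, not part of the lesson's data):
```
weight = 60      # a valid name
Weight = 65      # a different variable, because names are case sensitive
print(weight, Weight)   # 60 65
# 0weight = 70   # invalid: a name cannot start with a digit (SyntaxError)
```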
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`.
###Code
weight_lb = 2.2 * weight_kg
print(weight_lb)
###Output
220.00000000000003
###Markdown
LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
data.dtype
###Output
_____no_output_____
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data:', data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
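One way to check this rule (a sketch added here, not part of the original lesson) is to look at the shape of the slice: the difference of the bounds gives the number of rows and columns.
```
subset = data[0:4, 0:10]   # `subset` is an illustrative name
print(subset.shape)        # (4, 10): 4 patients, 10 days
```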
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3,36:]
print('small is', small)
###Output
small is [[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original')
print(data[:3,36:])
print('doubledata')
print(doubledata[:3,36:])
###Output
original
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata')
print(tripledata[:3,36:])
###Output
tripledata
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third,fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters', element[0:3])
print('last three characters', element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[-2])
###Output
e
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(data[3:3, 4:4])
print(data[3:3, :])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg +5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
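To see these three types in action, here is a small sketch (not part of the original lesson) using the built-in `type` function:
```
print(type(60))                       # <class 'int'>
print(type(60.0))                     # <class 'float'>
print(type('weight in kilograms:'))   # <class 'str'>
```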
###Code
weight_kg=60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print('weight in kilograms:', weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kg is now:', weight_kg)
###Output
weight in kg is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# there are 2.2 pounds per kg
weight_kg_text = 'weight in kg'
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kg 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kg is now:', weight_kg, 'and weight in pounds is still', weight_lb)
###Output
weight in kg is now: 100.0 and weight in pounds is still 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(type(data))
print(data.dtype)
###Output
<class 'numpy.ndarray'>
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data', data[30,20])
###Output
first value in data: 0.0
middle value in data 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
_____no_output_____
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
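As an extra illustration (a sketch added here, not part of the original lesson): leaving out both bounds with `:` selects everything along that axis, so the slice below takes every patient's value on the first day.
```
first_day = data[:, 0]      # `first_day` is an illustrative name
print(first_day.shape)      # (60,): one value per patient
```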
###Code
small = data[:3,36:]
print('small is')
print(small)
###Output
small is
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata is')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata is
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('triple data is')
print(tripledata[:3,36:])
###Output
triple data is
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
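Before running the cell below, note (an aside, not part of the original lesson) that the right-hand side of a multiple assignment is evaluated completely before anything is assigned, which is why two variables can be swapped without a temporary variable:
```
a, b = 'Grace', 'Hopper'   # `a` and `b` are illustrative names
a, b = b, a
print(a, b)                # Hopper Grace
```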
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6], element[1:-1])
###Output
first three characters: oxy
last three characters: gen xyge
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
###Output
_____no_output_____
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data', data[30,20])
###Output
first value in data: 0.0
middle value in data 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
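Because the arithmetic is element-by-element, `doubledata + data` gives exactly the same array as multiplying `data` by 3 directly. A quick sketch to confirm this (added here, not part of the original lesson):
```
# numpy.array_equal returns True when two arrays have the same shape and values
print(numpy.array_equal(tripledata, data * 3.0))   # True
```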
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
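One way to reason about the exercise (a worked sketch added here, not part of the original lesson) is to track both variables after every line:
```
mass = 47.5          # mass is 47.5, age is not yet defined
age = 122            # mass is 47.5, age is 122
mass = mass * 2.0    # mass is 95.0, age is 122
age = age - 20       # mass is 95.0, age is 102
print(mass, age)     # 95.0 102
```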
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
###Output
_____no_output_____
###Markdown
Solution:
###Code
print(mass, age)
###Output
95.0 102
###Markdown
Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
###Output
_____no_output_____
###Markdown
Solution:
###Code
print(third, fourth)
###Output
Hopper Grace
###Markdown
Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
# the first 4 characters
element[:4]
# everything from index 4 to the end (the last 2 characters)
element[4:]
# all characters
element[:]
# the last character
element[-1]
# the second character from the end
element[-2]
# the third character
element[2]
###Output
_____no_output_____
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
# selects from the second character up to, but not including, the last one ('xyge')
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
print(element[3:3])
###Output
###Markdown
Solution:
###Code
data[3:3, 4:4]
print(data[3:3, 4:4])
data[3:3, :]
print(data[3:3, :])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
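As a quick illustration (a sketch; the names below are throwaway examples, not part of the lesson's data), Python's built-in `type` function reports which of these a value is, and capitalisation alone creates a distinct variable:
```
print(type(60))        # <class 'int'>
print(type(60.0))      # <class 'float'>
print(type('sixty'))   # <class 'str'>
weight = 1.0
Weight = 2.0           # a different variable: names are case sensitive
print(weight, Weight)  # 1.0 2.0
```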
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
print(weight_kg_text)  # print shows the string without quotes
weight_kg_text         # a bare expression echoes it with quotes
###Output
weight in kilograms
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
weight_kg_text = 'Weight in Kilograms:'
weight_kg = 60.0
print(weight_kg_text, weight_kg)
###Output
Weight in Kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print(weight_kg_text, 2.2 * weight_kg)
###Output
Weight in Kilograms: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print(weight_kg_text, weight_kg)  # readable output
print('weight in kg now:', weight_kg)
###Output
Weight in Kilograms: 65.0
weight in kg now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
Weight in Kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('Weight in Kilogram now is:', weight_kg, 'and in pounds is still:', weight_lb)
###Output
Weight in Kilogram now is: 100.0 and in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(type(data)) #type of dataset
print(data.dtype) #type of data itself
###Output
<class 'numpy.ndarray'>
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape) #dimension of dataset
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('First value in data:', data[0,0])
print('Middle value in data:', data[30,20])
###Output
First value in data: 0.0
Middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
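A small sketch of the 0 to M-1 rule, assuming `data` is the 60×40 array loaded above:
```
print(data[0, 0])                     # first row, first column (zero steps from the start)
print(data[59, 39])                   # last row, last column: shape minus one on each axis
print(data[59, 39] == data[-1, -1])   # negative indices count back from the end
```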
###Code
print(data[0:4,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('Original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
Original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata: ')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass*2.0
age = age-20
print('Mass is:', mass, 'Age is:', age)
###Output
Mass is: 95.0 Age is: 102
###Markdown
Solution: mass is doubled (multiplication) and age is reduced by 20 (subtraction). Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Hopper Grace Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('First three characters:', element[0:3])
print(element[:4])
print('Last three characters:', element[3:6])
print(element[4:])
element[:]
element[-1]   # the last character
element[-2]   # the second-to-last character
###Output
First three characters: oxy
oxyg
Last three characters: gen
en
###Markdown
Solution: element[:4] is 'oxyg', element[4:] is 'en', element[:] is 'oxygen', element[-1] is 'n' and element[-2] is 'e'. Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
data[3:3,4:4]
data[3:3,:]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
###Output
_____no_output_____
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
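One way to check the "difference of the bounds" rule (a sketch, assuming `data` is the array loaded above) is to look at the shape of a slice:
```
print(data[0, 0:10].shape)     # (10,)   -> 10 - 0 values from one row
print(data[0:4, 0:10].shape)   # (4, 10) -> 4 rows and 10 columns
```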
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
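Since `doubledata` is already `data * 2.0`, the sum should match `data * 3.0` everywhere; a quick check (a sketch using `numpy.allclose`, which compares arrays element-wise within floating-point tolerance):
```
print(numpy.allclose(tripledata, data * 3.0))   # True
```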
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print (mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text,weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:' , weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:' , weight_kg , 'and weight in pounds is still:' , weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
data=numpy.loadtxt(fname='data/inflammation-01.csv' , delimiter =',')
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
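To emphasise that `shape` is an attribute (no parentheses) rather than a function call, a short sketch, assuming `data` is the array loaded above:
```
rows, columns = data.shape            # unpack the two dimensions
print(rows, columns)                  # 60 40
print(data[rows - 1, columns - 1])    # the value in the last row and last column
```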
###Code
print('first value in data:' , data [0 , 0])
print ('middle value in data:' , data[30 , 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4 , 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10 , 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3 , 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original')
print(data[:3 , 36:])
print('doubledata:')
print(doubledata[:3 , 36:])
###Output
original
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata')
print(tripledata[:3 , 36:])
###Output
tripledata
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: mass=95.0 age=102 Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace' , 'Hopper'
third , fourth = second , first
print(third , fourth)
###Output
Hopper Grace
###Markdown
Solution: Hopper Grace Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:' , element[0:3])
print('last three characters:' , element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
###Output
oxygen
###Markdown
Solution: element[:4] = 'oxyg', element[4:] = 'en', element[:] = 'oxygen' Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: `element[1:-1]` cuts off the first character (index 0) and the last character (index -1), leaving 'xyge'.
Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
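Looking at the shapes of these slices answers the question (a sketch, assuming `data` is the 60×40 inflammation array loaded above):
```
print(data[3:3, 4:4].shape)   # (0, 0)  -> an empty array
print(data[3:3, :].shape)     # (0, 40) -> no rows, but still 40 columns
```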
###Code
element[3:3]
data[3:3 , 4:4]
data[3:3 , :]
data[3:3]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
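The sticky-note idea can be seen with two throwaway names (a sketch; `a` and `b` are illustration-only variables, not part of the lesson's data):
```
a = 3.5
b = a          # b gets its own copy of the current value of a
a = 100.0      # re-sticking a new value on a ...
print(a, b)    # 100.0 3.5  ... does not change b
```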
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
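A side note on the two arguments (a sketch, not part of the lesson): the file name can also be passed positionally, but `delimiter` should stay a keyword argument, since in current NumPy versions it is not the second positional parameter of `numpy.loadtxt`:
```
data = numpy.loadtxt('data/inflammation-01.csv', delimiter=',')
```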
###Code
data = numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
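The dtype depends on what the array holds, not on the array object itself; a small sketch with throwaway arrays:
```
print(numpy.array([1, 2, 3]).dtype)        # an integer dtype such as int64
print(numpy.array([1.0, 2.0, 3.0]).dtype)  # float64
```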
###Code
print(data.dtype)
###Output
float64
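The same check works for any NumPy array. As a quick sketch (the `whole_numbers` name here is only for illustration), an array built from Python integers gets an integer dtype instead of `float64`:
```
import numpy
whole_numbers = numpy.array([1, 2, 3])
print(whole_numbers.dtype)  # typically int64 (the exact integer type can vary by platform)
```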
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
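`shape` is only one of the pieces of information attached to the array; as a small sketch (assuming `data` is the array loaded above), `ndim` and `size` report the number of dimensions and the total number of elements:
```
print(data.ndim)  # 2, because data is a two-dimensional array
print(data.size)  # 2400, the total number of elements (60 * 40)
```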
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
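Because the indices start at 0, the last value of this 60-by-40 array sits at index `[59, 39]`; negative indices count back from the end, so `[-1, -1]` reaches the same element. A quick sketch, again using the `data` array loaded above:
```
print('last value in data:', data[59, 39])        # 0.0 for this dataset
print('same value via negative indices:', data[-1, -1])  # also 0.0
```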
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
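A quick way to confirm that the slice length equals the difference between the bounds is to look at each slice's shape (a small sketch, reusing `data` from above):
```
print(data[0:4, 0:10].shape)   # (4, 10): 4 - 0 rows, 10 - 0 columns
print(data[5:10, 0:10].shape)  # (5, 10): 10 - 5 = 5 rows
```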
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
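In other words, `data[:3, 36:]` is just shorthand for writing both bounds out in full; a quick check (a sketch, assuming `data` is the 60-by-40 array loaded above):
```
import numpy
print(numpy.array_equal(data[:3, 36:], data[0:3, 36:40]))  # True: omitted bounds default to 0 and the axis length
```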
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
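Since every element of `doubledata` is twice the corresponding element of `data`, adding the two arrays element-by-element is the same as multiplying `data` by three; a quick check (sketch):
```
import numpy
print(numpy.array_equal(tripledata, data * 3.0))  # True
```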
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print (mass, age)
###Output
95.0 102
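One way to see the values after each statement, rather than only at the end, is to print as you go; a sketch of the same program with extra print calls:
```
mass = 47.5
age = 122
print(mass, age)   # 47.5 122
mass = mass * 2.0
print(mass, age)   # 95.0 122
age = age - 20
print(mass, age)   # 95.0 102
```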
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print('first four characters:', element[:4])
print(element[4:])
print(element[:])
###Output
first three characters: oxy
last three characters: gen
first four characters: oxyg
en
oxygen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]  # a negative index counts from the end, so this keeps everything from the 2nd character up to (but not including) the last one
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
data = numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter = ',')
print(data)
print(data[3:3, 4:4])
print(data[3:3, :])
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
[]
[]
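Both expressions produce empty arrays; looking at their shapes makes the difference between them visible (a quick sketch, reusing `data` from above):
```
print(data[3:3, 4:4].shape)  # (0, 0): no rows and no columns
print(data[3:3, :].shape)    # (0, 40): no rows, but still 40 columns
```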
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
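If we want `weight_lb` to reflect the new weight, we have to redo the calculation ourselves (a small sketch):
```
weight_lb = 2.2 * weight_kg
print('weight in pounds is now:', weight_lb)  # now based on weight_kg = 100.0
```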
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
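The first index picks the row (a patient) and the second picks the column (a day), so taking everything along one axis gives either one patient's record or one day's measurements; a small sketch, assuming `data` is the array loaded above:
```
print(data[0, :].shape)  # (40,): all 40 daily measurements for the first patient
print(data[:, 0].shape)  # (60,): the first day's measurement for all 60 patients
```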
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
element[:6]
element[-2]
###Output
_____no_output_____
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
data[3:3, 4:4]
data[3:3, :]
# data[3:3, :] displays as: array([], shape=(0, 40), dtype=float64)
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
###Output
_____no_output_____
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print("weight in pounds:", 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print("weight in kilograms is now:", weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter = ',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
data
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
type(data)
###Output
_____no_output_____
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
data.dtype
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
data.shape
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
data
data[0, 2]
###Output
_____no_output_____
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
data[0:4, 0:10]
###Output
_____no_output_____
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
data[5:10, 0:10]
###Output
_____no_output_____
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
data[:3, 36:]
###Output
_____no_output_____
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
data
doubledata = data * 2
doubledata
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
data[:3, 36:]
doubledata[:3, 36:]
###Output
_____no_output_____
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
tripledata
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
tripledata[:3, 36:]
###Output
_____no_output_____
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
mass = mass * 2.0
age = 122
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
element[0 : 3]
element[3 : 6]
element[:4]
element[4:]
element[:]
element[-1]
element[-2]
###Output
_____no_output_____
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
data
data[3:3, 4:4]
data[3:3, :]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60``` From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5``` In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(type(data))
print(data.dtype)
###Output
<class 'numpy.ndarray'>
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
# data[0,0]
# print(data[0,0])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: mass = 95.0, age = 102 Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Hopper Grace Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
element[:4]
element[4:]
element[:]
element[-1]
element[-2]
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution: "oxyge"; "en"; "oxygen"; "n"; "e". Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
###Output
_____no_output_____
###Markdown
Solution: It prints out all characters starting from the 2nd up to the one before last (i.e. "xyge"). Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
data[3:3, 4:4]
data[3:3, :]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
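###Markdown
As a quick aside on the naming and typing rules above, the built-in `type` function reports a value's type, and differently-cased names really are different variables. This is only a sketch: `weight` and `Weight` here are throwaway names used for illustration.
###Code
# Sketch only: type() reports each value's type; `weight` and `Weight` are distinct variables
print(type(60), type(60.0), type('sixty'))
weight = 60
Weight = 70
print(weight, Weight)
###Output
_____no_output_____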
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now', weight_kg)
###Output
weight in kilograms is now 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first four characters:', element[:4])
print('last four characters:', element[2:6])
###Output
first four characters: oxyg
last four characters: ygen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does. Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element)
###Output
_____no_output_____
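###Markdown
To check the questions above, here is a minimal sketch (assuming `element` and `data` are still defined from the earlier cells) that prints the thin slices together with their shapes:
###Code
# Sketch only: thin slices are empty along the sliced axis
print(element[3:3])
print(data[3:3, 4:4], data[3:3, 4:4].shape)
print(data[3:3, :], data[3:3, :].shape)
###Output
_____no_output_____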
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg=60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg+5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg=60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:',2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
weight_kg=65.0
###Output
_____no_output_____
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
print('weight in kilograms is now',weight_kg)
###Output
weight in kilograms is now 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb=2.2*weight_kg
print(weight_kg_text, weight_kg,"and in pounds:",weight_lb)
###Output
weight in kilograms 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg=100.00
print('weight in kilograms is now',weight_kg,'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data=numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
print(data[0])
data[0]=data[0]*2.5
data[1:3]=data[1:3]*2.5
print(data)
data[1] = data[1] * 2  # double every value in row 1
print(data)
###Output
_____no_output_____
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:',data[0,0])
print('middle value in data:',data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:40,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]
[0. 1. 1. 3. 3. 1. 3. 5. 2. 4.]
[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]
[0. 1. 0. 0. 4. 3. 3. 5. 5. 4.]
[0. 1. 0. 0. 3. 4. 2. 7. 8. 5.]
[0. 0. 2. 1. 4. 3. 6. 4. 6. 7.]
[0. 0. 0. 0. 1. 3. 1. 6. 6. 5.]
[0. 1. 2. 1. 1. 1. 4. 1. 5. 2.]
[0. 1. 1. 0. 1. 2. 4. 3. 6. 4.]
[0. 0. 0. 0. 2. 3. 6. 5. 7. 4.]
[0. 0. 0. 1. 2. 1. 4. 3. 6. 7.]
[0. 0. 2. 1. 2. 5. 4. 2. 7. 8.]
[0. 1. 2. 0. 1. 4. 3. 2. 2. 7.]
[0. 1. 1. 3. 1. 4. 4. 1. 8. 2.]
[0. 0. 2. 3. 2. 3. 2. 6. 3. 8.]
[0. 0. 0. 3. 4. 5. 1. 7. 7. 8.]
[0. 1. 1. 1. 1. 3. 3. 2. 6. 3.]
[0. 1. 1. 1. 2. 3. 5. 3. 6. 3.]
[0. 0. 2. 1. 3. 3. 2. 7. 4. 4.]
[0. 0. 1. 2. 4. 2. 2. 3. 5. 7.]
[0. 0. 1. 1. 1. 5. 1. 5. 2. 2.]
[0. 0. 2. 2. 3. 4. 6. 3. 7. 6.]
[0. 0. 0. 1. 4. 4. 6. 3. 8. 6.]
[0. 1. 1. 0. 3. 2. 4. 6. 8. 6.]
[0. 0. 2. 3. 3. 4. 5. 3. 6. 7.]
[0. 1. 2. 2. 2. 3. 6. 6. 6. 7.]
[0. 0. 2. 1. 3. 5. 6. 7. 5. 8.]
[0. 0. 1. 2. 4. 1. 5. 5. 2. 3.]
[0. 0. 0. 3. 1. 3. 6. 4. 3. 4.]
[0. 1. 2. 2. 2. 5. 5. 1. 4. 6.]
[0. 1. 1. 2. 3. 1. 5. 1. 2. 2.]
[0. 1. 0. 3. 2. 4. 1. 1. 5. 9.]
[0. 1. 1. 3. 1. 1. 5. 5. 3. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small=data[:3,36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata=data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata')
print(doubledata[:3,36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata=doubledata+data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3,36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass=47.5
age=122
mass=int(mass*2)
age=age-20
print(mass,age)
###Output
95 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first,second='grace','hopper'
print(first,second)
third,fourth=second,first
print(third,fourth)
###Output
grace hopper
hopper grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element='oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
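###Markdown
To answer the remaining questions above, a minimal sketch (assuming `element` is still the string 'oxygen') that prints each of the slices asked about:
###Code
# Sketch only: slices with omitted bounds and negative indices on 'oxygen'
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[-2])
###Output
_____no_output_____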
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print('trial:',element[1:-1])
###Output
trial: xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print('trial2:',element[3:3])
print('trial3:',data[3:3,4:4])
print('trial4:',data[3:3, :])
###Output
trial2:
trial3: []
trial4: []
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`.
###Code
weight_lb = 2.2*weight_kg
print(weight_lb)
###Output
220.00000000000003
###Markdown
LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is')
print(small)
###Output
small is
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print(element[:4])
print(element[4:])
print(element[:])
print(element[1:-1])
###Output
first three characters: oxy
last three characters: gen
oxyg
en
oxygen
xyge
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
print(data[3:3])
###Output
_____no_output_____
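###Markdown
A minimal sketch of the two array slices asked about (assuming `data` is still the inflammation array loaded above), printed together with their shapes:
###Code
# Sketch only: data[3:3, 4:4] and data[3:3, :] are empty along the first axis
print(data[3:3, 4:4], data[3:3, 4:4].shape)
print(data[3:3, :], data[3:3, :].shape)
###Output
_____no_output_____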
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
weight_kg_text = "weight in kilograms:"
print (weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print ("weight in ponds:", 2.2* weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilo is now', weight_kg)
print(weight_kg)
###Output
weight in kilo is now 65.0
65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
data.dtype, data.shape, data.size
###Output
_____no_output_____
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])``` The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
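As a quick check of the *index is how many steps from the start* rule, here is a small illustrative sketch using a plain string (the variable name `word` is our own, not part of the lesson):
```
word = 'oxygen'
print(word[0])   # 'o' - zero steps from the start
print(word[3])   # 'g' - three steps from the start
```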
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])``` and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)``` The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0``` will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])``` If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data``` will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])``` Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122.0
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102.0
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
print('up to :4:', element[:4])
print('4: and after', element[4:])
print('everything', element[:])
print('index -1:', element[-1])
print('index -2:', element[-2])
###Output
first three characters: oxy
last three characters: gen
up to :4: oxyg
4: and after en
everything oxygen
index -1: n
index -2: e
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print('element[1:-1]:', element[1:-1])
###Output
element[1:-1]: xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
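A sketch of one way to check (assuming `data` is still the 60 by 40 inflammation array loaded above):
```
print(data[3:3, 4:4])   # [] - an empty array with shape (0, 0)
print(data[3:3, :])     # [] - an empty array with shape (0, 40): zero rows, all 40 columns
```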
###Code
print(data[3:3])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg_text)
###Output
weight in kilograms:
###Markdown
print(weight_kg)
###Code
We can display multiple things at once using only one print command:
```
print(weight_kg_text, weight_kg)
```
###Output
_____no_output_____
###Markdown
print(weight_kg_text, weight_kg)
###Code
Moreover, we can do arithmetic with variables right inside the print function:
```
print('weight in pounds:', 2.2 * weight_kg)
```
###Output
_____no_output_____
###Markdown
print('weight in pounds:', 2.2 * weight_kg)
###Code
The above command, however, did not change the value of ``weight_kg``:
```
print(weight_kg)
```
###Output
_____no_output_____
###Markdown
print(weight_kg)
###Code
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:
```
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
```
###Output
_____no_output_____
###Markdown
weight_kg = 65.0print('weight in kilograms is now:', weight_kg)
###Code
#### Variables as Sticky Notes
A variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.
This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:
```
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
```
###Output
_____no_output_____
###Markdown
There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds', weight_lb)
###Code
#### Updating a Variable
Variables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):
```
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
```
###Output
_____no_output_____
###Markdown
weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Code
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`.
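If we do want `weight_lb` to reflect the new weight, we have to recompute it ourselves. A minimal sketch (the recomputation is our own addition, not part of the lesson code):
```
weight_kg = 100.0
weight_lb = 2.2 * weight_kg   # recompute so weight_lb follows the new weight_kg
print('weight in pounds is now:', weight_lb)
```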
## Libraries
Words are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed.
### Loading data into Python
In order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python).
In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:
```
import numpy
```
###Output
_____no_output_____
###Markdown
import numpy
###Code
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:
```
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
```
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.
As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.
`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.
Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.
Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:
```
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
```
###Output
_____no_output_____
###Markdown
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Code
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:
```
print(data)
```
###Output
_____no_output_____
###Markdown
print(data)
###Code
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:
```
print(type(data))
```
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements.
#### Data Type
A NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.
```
print(data.dtype)
```
###Output
_____no_output_____
###Markdown
print(data.dtype)
###Code
This tells us that the NumPy array's elements are floating-point numbers.
With the following command, we can see the array's shape:
```
print(data.shape)
```
###Output
_____no_output_____
###Markdown
print(data.shape)
###Code
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.
If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:
```
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
```
###Output
_____no_output_____
###Markdown
print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])
###Code
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might.
#### Zero Indexing
Programming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post).
As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want.
#### In the Corner
What may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data.
#### Slicing data
An index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:
```
print(data[0:4, 0:10])
```
###Output
_____no_output_____
###Markdown
print(data[0:4, 0:10])
###Code
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*.
Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.
Also, we don't have to start slices at `0`:
```
print(data[5:10, 0:10])
```
###Output
_____no_output_____
###Markdown
print(data[5:10, 0:10])
###Code
and we don't have to include the upper or lower bound on the slice.
If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:
```
small = data[:3, 36:]
print('small is:')
print(small)
```
###Output
_____no_output_____
###Markdown
small = data[:3, 36:]print('small is:')print(small)
###Code
The above example selects rows 0 through 2 and columns 36 through to the end of the array.
thus small is:
```
[[ 2. 3. 0. 0.]
[ 1. 1. 0. 1.]
[ 2. 2. 1. 1.]]
```
Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:
```
doubledata = data * 2.0
```
###Output
_____no_output_____
###Markdown
doubledata = data * 2.0
###Code
will create a new array doubledata each element of which is twice the value of the corresponding element in data:
```
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
```
###Output
_____no_output_____
###Markdown
print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])
###Code
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:
```
tripledata = doubledata + data
```
###Output
_____no_output_____
###Markdown
tripledata = doubledata + data
###Code
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.
```
print('tripledata:')
print(tripledata[:3, 36:])
```
###Output
_____no_output_____
###Markdown
print('tripledata:')print(tripledata[:3, 36:])
###Code
## Exercises
### Variables
What values do the variables mass and age have after each statement in the following program?
```
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
```
Test your answers by executing the commands.
###Output
_____no_output_____
###Markdown
mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)
###Code
Solution:
### Sorting Out References
What does the following program print out?
```
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
```
###Output
_____no_output_____
###Markdown
first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)
###Code
Solution:
### Slicing Strings
A section of an array is called a slice. We can take slices of character strings as well:
```
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
```
What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?
What about `element[-1]` and `element[-2]` ?
###Output
_____no_output_____
###Markdown
element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6]) print('first four characters:', element[:4]) print('last two characters', element[4:]) print(element[-1])
###Code
Solution: The value of element[:4] is the first four characters 'oxyg', the value of element[4:] is the last two characters 'en', and element[:] is the whole word 'oxygen'. element[-1] is the last character 'n', and element[-2] is the second-to-last character 'e'.
Given those answers, explain what `element[1:-1]` does.
###Output
_____no_output_____
###Markdown
print(element[1:-1])
###Code
Solution: Print from the second character to the second-to-last character ('xyge').
###Output
_____no_output_____
###Markdown
Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(data[3:3, 4:4])
print(data[3:3, :])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
###Output
_____no_output_____
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print (weight_kg_text, weight_kg)
###Output
weight in kilograms: 60
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print ('weight in pounds:' , 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print (weight_kg)
###Output
60
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print ('weight in kilograms is now:' , weight_kg, 'and weight in pounds is still:' , weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
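Because every element shares that single type, inspecting any one element gives the same answer. A small check of our own (not part of the lesson):
```
print(data[0, 0], type(data[0, 0]))   # 0.0 <class 'numpy.float64'>
```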
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
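The upper-minus-lower rule can be verified directly from the slice's shape; a quick sketch (the name `middle_rows` is our own):
```
middle_rows = data[5:10, 0:10]
print(middle_rows.shape)   # (5, 10): 10 - 5 rows and 10 - 0 columns
```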
###Code
print (data [5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
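Since the array has 40 columns, writing `36:` selects the same columns as `36:40`. A quick check of our own (not part of the lesson):
```
print((data[:3, 36:] == data[0:3, 36:40]).all())   # True - both slices select the same values
```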
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
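That element-by-element claim can be spot-checked directly; a small sketch of our own:
```
print(tripledata[0, 0] == doubledata[0, 0] + data[0, 0])   # True
print(tripledata[0, 0] == 3 * data[0, 0])                  # also True for this element (it is 0.0)
```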
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print (mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
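Multiple assignment evaluates the whole right-hand side first and only then unpacks it left to right, which is why this works as a swap. A minimal sketch with our own names `a` and `b`:
```
a, b = 'Grace', 'Hopper'
a, b = b, a            # the tuple (b, a) is built first, then unpacked into a and b
print(a, b)            # Hopper Grace
```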
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
###Output
_____no_output_____
###Markdown
Solution:
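One way to answer all of the questions above; a sketch to run and confirm:
```
element = 'oxygen'
print(element[:4])    # oxyg
print(element[4:])    # en
print(element[:])     # oxygen
print(element[-1])    # n
print(element[-2])    # e
```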
###Code
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Given those answers, explain what `element[1:-1]` does.
###Code
print('character:',element[1:-1])
print('characters:', element[:4])
print('characters:', element[4:])
print('characters:', element[:])
print('characters:', element[-1])
print('characters:', element[-2])
###Output
character: xyge
characters: oxyg
characters: en
characters: oxygen
characters: n
characters: e
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
data[3:3,4:4]
data[3:3,:]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print ('weight in pounds: ', 2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print (weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kgs is now:', weight_kg)
###Output
weight in kgs is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kg
weight_lb = 2.2*weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:' , weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
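A quick sketch of the `type(data)` check described above, for reference:
```
print(type(data))   # <class 'numpy.ndarray'>
```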
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle low value in data:', data[29, 19])
print('middle high value in data:', data[30, 20])
print('end value in data:', data[59, 39])
###Output
_____no_output_____
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata: ')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
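Tracing the program statement by statement gives the values the exercise expects; a sketch of the intended answer:
```
mass = 47.5        # mass is 47.5
age = 122          # age is 122
mass = mass * 2.0  # mass is now 95.0
age = age - 20     # age is now 102
print(mass, age)   # 95.0 102
```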
###Code
print('mass = 47.5')
print('age = 122')
mass = 47.5
mass = mass *2.0
age = 122
age = age-20+mass
print(age)
###Output
mass = 47.5
age = 122
197.0
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('fourth character:', element[3:4])
###Output
fourth character: g
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element = 'oxygen'
print('element[1:-1]:', element[1:-1])
###Output
element[1:-1]: xyge
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
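As a quick aside (this sketch is not part of the original lesson cells), the three data types and the case-sensitivity rule described above can be checked with the built-in `type` function and a pair of throwaway variables:
```
print(type(60))        # <class 'int'>
print(type(60.0))      # <class 'float'>
print(type('sixty'))   # <class 'str'>
weight = 60
Weight = 70
print(weight, Weight)  # 60 70 - different variables, because names are case sensitive
```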
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`.
###Code
weight_lb = 2.2*weight_kg
print(weight_lb)
###Output
220.00000000000003
###Markdown
LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
data.dtype
###Output
_____no_output_____
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
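# Reminder: indices start at 0, so for this (60, 40) array the valid positions run
# from data[0, 0] in the upper-left corner to data[59, 39]; data[60, 40] would raise an IndexError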
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
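# 10 - 5 = 5 rows (patients 5 to 9) and 10 - 0 = 10 columns (days 0 to 9) are selected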
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array. Thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands. Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)``` Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ? Solution: Given those answers, explain what `element[1:-1]` does. Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
data[3:3, 4:4]
print(data[3:3, 4:4])
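# data[3:3, 4:4] is an empty array with shape (0, 0) - the slices select no rows and no columns
# data[3:3, :] is also empty, with shape (0, 40) - no rows, but all 40 columns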
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
###Output
_____no_output_____
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
print(type(data))
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
<class 'numpy.ndarray'>
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array. Thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
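# 'oxygen'[1:-1] -> 'xyge': everything except the first and the last character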
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
data[3:3, 4:4]
data[3:3, :]
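# element[3:3] -> '' (an empty string)
# data[3:3, 4:4] is an empty array with shape (0, 0); data[3:3, :] is empty with shape (0, 40)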
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
kellys_weight = 40
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
kellys_weight + 4
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
kellys_weight = 50.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
kellys_kg_text = 'weight in kilograms'
print (kellys_weight)
###Output
50.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(kellys_kg_text,kellys_weight)
###Output
weight in kilograms 50.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds', 2.2*kellys_weight)
###Output
_____no_output_____
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(kellys_weight)
###Output
50.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg=65
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
#2.2 pounds per kilograms
weight_lb=2.2*weight_kg
print(kellys_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg=100
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
# numpy was already imported above, so we can load the file once and keep the result in a variable
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
_____no_output_____
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:',data[0,0])
print('middle value in data:',data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small=data[:3,36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array. Thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata=data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata:')
print(doubledata[:3,36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata=doubledata+data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3,36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass=47.5
age=122
mass=mass*2
age=age-20
print(mass,age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first,second='Grace','Hopper'
third,fourth=second,first
print(third,fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element='oxygen'
print('first three characters:',element[0:3])
print('last three characters:',element[3:6])
print('all?',element[:])
print('element[-1]:', element[-1])
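# element[-2] -> 'e' (the second-to-last character)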
###Output
first three characters: oxy
last three characters: gen
all? oxygen
element[-1]: n
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element[1:-1]
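# everything except the first and last character: 'oxygen'[1:-1] -> 'xyge'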
###Output
_____no_output_____
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(data[3:4,:])
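# Note: the exercise asks about data[3:3, :], which is an empty array with shape (0, 40);
# data[3:4, :] (used above) selects one complete row instead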
###Output
[[ 0. 0. 2. 0. 4. 2. 2. 1. 6. 7. 10. 7. 9. 13. 8. 8. 15. 10.
10. 7. 17. 4. 4. 7. 6. 15. 6. 4. 9. 11. 3. 5. 6. 3. 3. 4.
2. 3. 2. 1.]]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
weight_kg = 65.0
print ('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)``` Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print ('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data.dtype)
###Output
float64
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)``` The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print('middle value in data:', data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])``` The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
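Before moving on to the slices below, it can help to see the zero-based and negative-index rules on something smaller than the inflammation file. Here is a minimal, self-contained sketch; the 3×4 array and the name `tiny` are chosen only for illustration and are not part of the lesson's dataset:
```
import numpy

tiny = numpy.arange(12).reshape(3, 4)   # a small 3x4 array: rows 0-2, columns 0-3
print(tiny)
print('first element, tiny[0, 0]:', tiny[0, 0])      # upper left corner
print('last element, tiny[2, 3]:', tiny[2, 3])       # indices run up to M-1 and N-1
print('same element, tiny[-1, -1]:', tiny[-1, -1])   # negative indices count back from the end
```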
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
element = 'oxygen'
print('element[1:-1]:', element[1:-1])
###Output
element[1:-1]: xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(data[3:3,4:4])
print(data[3:3,:])
###Output
[]
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the columns represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg=60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg+5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
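Since names are case sensitive, it is easy to create two variables by accident when you only meant one. A minimal sketch of what that looks like; `weight` and `Weight` are throw-away names used only for this illustration:
```
weight = 60    # lower-case w
Weight = 100   # capital W - a completely separate variable
print(weight, Weight)   # prints: 60 100
```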
###Code
weight_kg=60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text='weight in kilograms'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg=65
print ('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
#There are 2.2 pounds per kilogram
weight_lb=2.2*weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms 65 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg=100.0
print('weight in kilograms is now:', weight_kg,'and weight in pounds is still:',weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
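If you would like to try `numpy.loadtxt` before downloading `data/inflammation-01.csv`, note that `fname` also accepts any file-like object. The sketch below uses `io.StringIO` to fake a tiny comma-separated file in memory; the values in `fake_csv` are made up for this example:
```
import numpy
from io import StringIO

fake_csv = StringIO("0,1,2\n3,4,5")                   # a two-line 'file' held in memory
sample = numpy.loadtxt(fname=fake_csv, delimiter=',')
print(sample)   # [[0. 1. 2.]
                #  [3. 4. 5.]]
```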
###Code
data=numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0,0])
print ('middle value in data:', data[30,20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small=data[:3,36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata=data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata:')
print(doubledata[:3,36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata=doubledata+data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3,36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
print('mass=95.0')
print('age= 102')
###Output
mass=95.0
age= 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element='oxygen'
print('fourth character:',element[3:4])
print('fourth character:',element[3:4])
###Output
fourth character: g
fourth character: g
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print('element[1:-1]:',element[1:-1])
print('element[1:5]:',element[1:5])
###Output
element[1:-1]: xyge
element[1:5]: xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
element[3:3]
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the columns represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg=60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg+5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
###Code
weight_kg=60.0
weight_kg
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text='weight in kg:'
weight_kg_text
weight_kg
print(weight_kg_text,weight_kg)
###Output
weight in kg: 60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)``` Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:',2.2*weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)``` To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg=65.0
print('weight in kg is now:', weight_kg)
###Output
weight in kg is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb=2.2*weight_kg
print(weight_kg_text,weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kg: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg=100.0
print('weight in kg is now :', weight_kg,'and weight in pounds is still:', weight_lb)
###Output
weight in kg is now : 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
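As an aside, you will often see NumPy imported under a short alias in other people's code. This lesson keeps the plain `import numpy`, but a minimal sketch of the aliased form looks like this:
```
import numpy as np   # same library, shorter name
print(np.loadtxt)    # the same loadtxt function, reached through the alias
```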
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
#import csv file
numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
#saving the csv data into a file variable call 'data'
data=numpy.loadtxt(fname='data/inflammation-01.csv',delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
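You can see the same idea without the data file by building small arrays yourself. In this sketch the names `ints` and `floats` are just for illustration, and the exact integer type name can differ between platforms:
```
import numpy

ints = numpy.array([1, 2, 3])          # built from whole numbers
floats = numpy.array([1.0, 2.0, 3.0])  # built from numbers with decimal points
print(ints.dtype)     # e.g. int64 (int32 on some platforms)
print(floats.dtype)   # float64
```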
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
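Shape is only one of several attributes an array carries around with it. A minimal sketch on a small throw-away array (`tiny` and its 2×3 shape are chosen only for illustration):
```
import numpy

tiny = numpy.zeros((2, 3))   # a 2x3 array of zeros
print(tiny.shape)            # (2, 3) - rows and columns
print(tiny.ndim)             # 2      - number of dimensions
print(tiny.size)             # 6      - total number of elements
```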
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
#slicing data by selecting first 4 rows and first 10 columns
print(data[0:4,0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10,0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
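The same default-bound rules work on one-dimensional arrays, where they can be easier to see. A minimal sketch; the `days` array is made up for this illustration:
```
import numpy

days = numpy.arange(10)   # the numbers 0 to 9, standing in for ten days
print(days[:3])           # [0 1 2] - missing lower bound defaults to 0
print(days[7:])           # [7 8 9] - missing upper bound runs to the end of the axis
print(days[:])            # the whole array
print(len(days[2:5]))     # 3, i.e. upper bound minus lower bound
```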
###Code
small=data[:3,36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata=data*2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3,36:])
print('doubledata:')
print(doubledata[:3,36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata=doubledata+data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata')
print(tripledata[:3,36:])
###Output
tripledata
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass=47.5
age=122
mass=mass*2.0
age=age-20
print(mass,age)
###Output
95.0 102
###Markdown
Solution: 95.0 102 Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first,second='grace','hopper'
third, fourth= second, first
print(third, fourth)
###Output
hopper grace
###Markdown
Solution: hopper grace Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element='oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:])
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[-2])
###Output
first three characters: oxy
last three characters: gen
oxyg
en
oxygen
n
e
###Markdown
Solution: first three characters: oxylast three characters: genoxygenoxygenne Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
print(data[3:3,4:4])
print(data[3:3,:])
###Output
[]
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the columns represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60``` From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5``` In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0``` And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)``` We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)``` Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)``` The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)``` To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)``` Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)``` Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)``` Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
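This copy of the notebook only executes the import cell below, so here is a minimal sketch that reproduces the variable steps described above; the values follow the text and the printed wording is only approximate:
```
weight_kg = 60                                  # an integer value
weight_kg = 60.0                                # reassigned as a floating point value
weight_kg_text = 'weight in kilograms:'
print(weight_kg_text, weight_kg)                # weight in kilograms: 60.0
print('weight in pounds:', 2.2 * weight_kg)     # weight in pounds: 132.0

weight_kg = 65.0
weight_lb = 2.2 * weight_kg                     # there are 2.2 pounds per kilogram
weight_kg = 100.0                               # weight_lb keeps its old value of 143.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
```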
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
_____no_output_____
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the columns represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg = weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
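If you are ever unsure which of these types a value has, the built-in `type` function will tell you. A minimal sketch; the example values are arbitrary:
```
print(type(60))        # <class 'int'>   - an integer
print(type(60.0))      # <class 'float'> - a floating point number
print(type('weight'))  # <class 'str'>   - a string
```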
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
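The text above also mentions subtraction and division; a brief hedged sketch of those element-wise operations, reusing the same `data` array:

```
halfdata = data / 2.0    # every element divided by 2
shifted = data - 1.0     # 1.0 subtracted from every element
print(halfdata[0, 2], shifted[0, 2])   # data[0, 2] is 1.0, so this prints 0.5 0.0
```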
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution: Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution: Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution: Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
print(data[3:3, 4:4])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60``` From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5``` In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0``` And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)``` We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)``` Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)``` The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)``` To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)``` Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)``` Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)``` Since `weight_lb` doesn't *remember* where its value comes from, so it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```pythondata = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))``` The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(type(data))
print(data.dtype)
###Output
<class 'numpy.ndarray'>
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
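None of the cells below exercise the naming rules directly, so here is a small hedged sketch with made-up names:

```
weight0 = 60     # valid: letters, digits and underscores are allowed
Weight = 70      # valid, but a *different* variable from weight0 or weight (names are case sensitive)
# 0weight = 80   # invalid: a name cannot start with a digit (SyntaxError)
print(weight0, Weight)
```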
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`.
###Code
weight_lb = 2.2 * weight_kg
print(weight_lb)
###Output
220.00000000000003
###Markdown
LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')``` The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
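A hedged aside on the two parameters: without `delimiter=','`, `numpy.loadtxt` assumes whitespace-separated columns and would fail on this comma-separated file. The function also accepts other optional arguments; for example, `usecols` (a standard NumPy option, shown here purely as an illustration) restricts the read to selected columns:

```
first_days = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',', usecols=(0, 1, 2))
print(first_days.shape)   # expected to be (60, 3) for this dataset
```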
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
###Markdown
Exercises VariablesWhat values do the variables mass and age have after each statement in the following program? ```mass = 47.5age = 122mass = mass * 2.0age = age - 20print(mass, age)```Test your answers by executing the commands.
###Code
mass = 47.5
age = 122
mass = mass * 2.0
age = age - 20
print(mass, age)
###Output
95.0 102
###Markdown
Solution:
###Code
print(mass, age)
###Output
95.0 102
###Markdown
Sorting Out ReferencesWhat does the following program print out?```first, second = 'Grace', 'Hopper'third, fourth = second, firstprint(third, fourth)```
###Code
first, second = 'Grace', 'Hopper'
third, fourth = second, first
print(third, fourth)
###Output
Hopper Grace
###Markdown
Solution:
###Code
print(third, fourth)
###Output
Hopper Grace
###Markdown
Slicing StringsA section of an array is called a slice. We can take slices of character strings as well:```element = 'oxygen'print('first three characters:', element[0:3])print('last three characters:', element[3:6])```What is the value of `element[:4]` ? What about `element[4:]`? Or `element[:]` ?What about `element[-1]` and `element[-2]` ?
###Code
element = 'oxygen'
print('first three characters:', element[0:3])
print('last three characters:', element[3:6])
###Output
first three characters: oxy
last three characters: gen
###Markdown
Solution:
###Code
print(element[:4])
print(element[4:])
print(element[:])
print(element[-1])
print(element[-2])
###Output
e
###Markdown
Given those answers, explain what `element[1:-1]` does.
###Code
print(element[1:-1])
###Output
xyge
###Markdown
Solution: Thin SlicesThe expression `element[3:3]` produces an empty string, i.e., a string that contains no characters. If data holds our array of patient data, what does `data[3:3, 4:4]` produce? What about `data[3:3, :]` ?
###Code
print(element[3:3])
print(data[3:3, 4:4])
print(data[3:3, : ])
###Output
[]
###Markdown
Programming with Python Episode 1a - Introduction - Analysing Patient DataTeaching: 60 min, Exercises: 30 min Objectives - Assign values to variables.- Explain what a library is and what libraries are used for.- Import a Python library and use the functions it contains.- Read tabular data from a file into a program.- Select individual values and subsections from data.- Perform operations on arrays of data. Our DatasetIn this episode we will learn how to work with CSV files in Python. Our dataset contains patient inflammation data - where each row represents a different patient and the column represent inflammation data over a series of days. However, before we discuss how to deal with many data points, let's learn how to work with single data values. VariablesAny Python interpreter can be used as a calculator:```3 + 5 * 4```
###Code
3 + 5 * 4
###Output
_____no_output_____
###Markdown
This is great but not very interesting. To do anything useful with data, we need to assign its value to a variable. In Python, we can assign a value to a variable, using the equals sign ``=``. For example, to assign value 60 to a variable ``weight_kg``, we would execute:```weight_kg = 60```
###Code
weight_kg = 60
###Output
_____no_output_____
###Markdown
From now on, whenever we use ``weight_kg``, Python will substitute the value we assigned to it. In essence, a variable is just a name for a value.```weight_kg + 5```
###Code
weight_kg + 5
###Output
_____no_output_____
###Markdown
In Python, variable names:- can include letters, digits, and underscores - `A-z, a-z, _`- cannot start with a digit- are case sensitive.This means that, for example:`weight0` is a valid variable name, whereas `0weight` is not`weight` and `Weight` are different variables Types of dataPython knows various types of data. Three common ones are:- integer numbers (whole numbers)- floating point numbers (numbers with a decimal point)- and strings (of characters).In the example above, variable `weight_kg` has an integer value of `60`. To create a variable with a floating point value, we can execute:```weight_kg = 60.0```
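As a quick hedged check of the three types named above (this uses the built-in `type` function, which the lesson returns to later):

```
print(type(60), type(60.0), type('weight in kilograms:'))
# <class 'int'> <class 'float'> <class 'str'>
```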
###Code
weight_kg = 60.0
###Output
_____no_output_____
###Markdown
And to create a string we simply have to add single or double quotes around some text, for example:```weight_kg_text = 'weight in kilograms:'```To display the value of a variable to the screen in Python, we can use the print function:```print(weight_kg)```
###Code
weight_kg_text = 'weight in kilograms:'
print(weight_kg)
###Output
60.0
###Markdown
We can display multiple things at once using only one print command:```print(weight_kg_text, weight_kg)```
###Code
print(weight_kg_text, weight_kg)
###Output
weight in kilograms: 60.0
###Markdown
Moreover, we can do arithmetic with variables right inside the print function:```print('weight in pounds:', 2.2 * weight_kg)```
###Code
print('weight in pounds:', 2.2 * weight_kg)
###Output
weight in pounds: 132.0
###Markdown
The above command, however, did not change the value of ``weight_kg``:```print(weight_kg)```
###Code
print(weight_kg)
###Output
60.0
###Markdown
To change the value of the ``weight_kg`` variable, we have to assign `weight_kg` a new value using the equals `=` sign:```weight_kg = 65.0print('weight in kilograms is now:', weight_kg)```
###Code
weight_kg = 65.0
print('weight in kilograms is now:', weight_kg)
###Output
weight in kilograms is now: 65.0
###Markdown
Variables as Sticky NotesA variable is analogous to a sticky note with a name written on it: assigning a value to a variable is like writing a value on the sticky note with a particular name.This means that assigning a value to one variable does not change values of other variables (or sticky notes). For example, let's store the subject's weight in pounds in its own variable:``` There are 2.2 pounds per kilogramweight_lb = 2.2 * weight_kgprint(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)```
###Code
# There are 2.2 pounds per kilogram
weight_lb = 2.2 * weight_kg
print(weight_kg_text, weight_kg, 'and in pounds:', weight_lb)
###Output
weight in kilograms: 65.0 and in pounds: 143.0
###Markdown
Updating a VariableVariables calculated from other variables do not change their value just because the original variable changed its value (unlike cells in Excel):```weight_kg = 100.0print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)```
###Code
weight_kg = 100.0
print('weight in kilograms is now:', weight_kg, 'and weight in pounds is still:', weight_lb)
###Output
weight in kilograms is now: 100.0 and weight in pounds is still: 143.0
###Markdown
Since `weight_lb` doesn't *remember* where its value comes from, it is not updated when we change `weight_kg`. LibrariesWords are useful, but what's more useful are the sentences and stories we build with them (or indeed entire books or whole libraries). Similarly, while a lot of powerful, general tools are built into Python, specialised tools built up from these basic units live in *libraries* that can be called upon when needed. Loading data into PythonIn order to load our inflammation dataset into Python, we need to access (import in Python terminology) a library called `NumPy` (which stands for Numerical Python). In general you should use this library if you want to do fancy things with numbers, especially if you have matrices or arrays. We can import `NumPy` using:```import numpy```
###Code
import numpy
###Output
_____no_output_____
###Markdown
Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench. Libraries provide additional functionality to the basic Python package, much like a new piece of equipment adds functionality to a lab space. Just like in the lab, importing too many libraries can sometimes complicate and slow down your programs - so we only import what we need for each program. Once we've imported the library, we can ask the library to read our data file for us:```numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
The expression `numpy.loadtxt(...)` is a function call that asks Python to run the function `loadtxt` which belongs to the `numpy` library. This dot `.` notation is used everywhere in Python: the thing that appears before the dot contains the thing that appears after.As an example, John Smith is the John that belongs to the Smith family. We could use the dot notation to write his name smith.john, just as `loadtxt` is a function that belongs to the `numpy` library.`numpy.loadtxt` has two parameters: the name of the file we want to read and the delimiter that separates values on a line. These both need to be character strings (or strings for short), so we put them in quotes.Since we haven't told it to do anything else with the function's output, the notebook displays it. In this case, that output is the data we just loaded. By default, only a few rows and columns are shown (with ... to omit elements when displaying big arrays). To save space, Python displays numbers as 1. instead of 1.0 when there's nothing interesting after the decimal point.Our call to `numpy.loadtxt` read our file but didn't save the data in memory. To do that, we need to assign the array to a variable. Just as we can assign a single value to a variable, we can also assign an array of values to a variable using the same syntax. Let's re-run `numpy.loadtxt` and save the returned data:```data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')```
###Code
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
###Output
_____no_output_____
###Markdown
This statement doesn't produce any output because we've assigned the output to the variable `data`. If we want to check that the data has been loaded, we can print the variable's value:```print(data)```
###Code
print(data)
###Output
[[0. 0. 1. ... 3. 0. 0.]
[0. 1. 2. ... 1. 0. 1.]
[0. 1. 1. ... 2. 1. 1.]
...
[0. 1. 1. ... 1. 1. 1.]
[0. 0. 0. ... 0. 2. 0.]
[0. 0. 1. ... 1. 1. 0.]]
###Markdown
Now that the data is in memory, we can manipulate it. First, let's ask Python what type of thing `data` refers to:```print(type(data))```
###Code
print(type(data))
###Output
<class 'numpy.ndarray'>
###Markdown
The output tells us that data currently refers to an N-dimensional array, the functionality for which is provided by the `NumPy` library. This data correspond to arthritis patients' inflammation. The rows are the individual patients, and the columns are their daily inflammation measurements. Data TypeA NumPy array contains one or more elements of the same type. The type function will only tell you that a variable is a NumPy array but won't tell you the type of thing inside the array. We can find out the type of the data contained in the NumPy array.```print(data.dtype)```
###Code
print(data.dtype)
###Output
float64
###Markdown
This tells us that the NumPy array's elements are floating-point numbers.With the following command, we can see the array's shape:```print(data.shape)```
###Code
print(data.shape)
###Output
(60, 40)
###Markdown
The output tells us that the data array variable contains 60 rows and 40 columns. When we created the variable data to store our arthritis data, we didn't just create the array; we also created information about the array, called members or attributes. This extra information describes data in the same way an adjective describes a noun. data.shape is an attribute of data which describes the dimensions of data. We use the same dotted notation for the attributes of variables that we use for the functions in libraries because they have the same part-and-whole relationship.If we want to get a single number from the array, we must provide an index in square brackets after the variable name, just as we do in math when referring to an element of a matrix. Our inflammation data has two dimensions, so we will need to use two indices to refer to one specific value:```print('first value in data:', data[0, 0])print('middle value in data:', data[30, 20])```
###Code
print('first value in data:', data[0, 0])
print('middle value in data:', data[30, 20])
###Output
first value in data: 0.0
middle value in data: 13.0
###Markdown
The expression `data[30, 20]` accesses the element at row 30, column 20. While this expression may not surprise you, `data[0, 0]` might. Zero IndexingProgramming languages like Fortran, MATLAB and R start counting at 1 because that's what human beings have done for thousands of years. Languages in the C family (including C++, Java, Perl, and Python) count from 0 because it represents an offset from the first value in the array (the second value is offset by one index from the first value). This is closer to the way that computers represent arrays (if you are interested in the historical reasons behind counting indices from zero, you can read Mike Hoye's blog post). As a result, if we have an M×N array in Python, its indices go from 0 to M-1 on the first axis and 0 to N-1 on the second. It takes a bit of getting used to, but one way to remember the rule is that the index is how many steps we have to take from the start to get the item we want. In the CornerWhat may also surprise you is that when Python displays an array, it shows the element with index `[0, 0]` in the upper left corner rather than the lower left. This is consistent with the way mathematicians draw matrices but different from the Cartesian coordinates. The indices are (row, column) instead of (column, row) for the same reason, which can be confusing when plotting data. Slicing dataAn index like `[30, 20]` selects a single element of an array, but we can select whole sections as well. For example, we can select the first ten days (columns) of values for the first four patients (rows) like this:```print(data[0:4, 0:10])```
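A small hedged sketch of the (row, column) ordering described above, using the shapes of one row versus one column of `data`:

```
print('one patient, all days:', data[0, :].shape)    # expected (40,)
print('all patients, one day:', data[:, 0].shape)    # expected (60,)
```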
###Code
print(data[0:4, 0:10])
###Output
[[0. 0. 1. 3. 1. 2. 4. 7. 8. 3.]
[0. 1. 2. 1. 2. 1. 3. 2. 2. 6.]
[0. 1. 1. 3. 3. 2. 6. 2. 5. 9.]
[0. 0. 2. 0. 4. 2. 2. 1. 6. 7.]]
###Markdown
The slice `[0:4]` means, *Start at index 0 and go up to, but not including, index 4*. Again, the up-to-but-not-including takes a bit of getting used to, but the rule is that the difference between the upper and lower bounds is the number of values in the slice.Also, we don't have to start slices at `0`:```print(data[5:10, 0:10])```
###Code
print(data[5:10, 0:10])
###Output
[[0. 0. 1. 2. 2. 4. 2. 1. 6. 4.]
[0. 0. 2. 2. 4. 2. 2. 5. 5. 8.]
[0. 0. 1. 2. 3. 1. 2. 3. 5. 3.]
[0. 0. 0. 3. 1. 5. 6. 5. 5. 8.]
[0. 1. 1. 2. 1. 3. 5. 3. 5. 8.]]
###Markdown
and we don't have to include the upper or lower bound on the slice. If we don't include the lower bound, Python uses 0 by default; if we don't include the upper, the slice runs to the end of the axis, and if we don't include either (i.e., if we just use `:` on its own), the slice includes everything:```small = data[:3, 36:]print('small is:')print(small)```
###Code
small = data[:3, 36:]
print('small is:')
print(small)
###Output
small is:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
###Markdown
The above example selects rows 0 through 2 and columns 36 through to the end of the array.thus small is:```[[ 2. 3. 0. 0.] [ 1. 1. 0. 1.] [ 2. 2. 1. 1.]]```Arrays also know how to perform common mathematical operations on their values. The simplest operations with data are arithmetic: addition, subtraction, multiplication, and division. When you do such operations on arrays, the operation is done element-by-element. Thus:```doubledata = data * 2.0```
###Code
doubledata = data * 2.0
###Output
_____no_output_____
###Markdown
will create a new array doubledata each element of which is twice the value of the corresponding element in data:```print('original:')print(data[:3, 36:])print('doubledata:')print(doubledata[:3, 36:])```
###Code
print('original:')
print(data[:3, 36:])
print('doubledata:')
print(doubledata[:3, 36:])
###Output
original:
[[2. 3. 0. 0.]
[1. 1. 0. 1.]
[2. 2. 1. 1.]]
doubledata:
[[4. 6. 0. 0.]
[2. 2. 0. 2.]
[4. 4. 2. 2.]]
###Markdown
If, instead of taking an array and doing arithmetic with a single value (as above), you did the arithmetic operation with another array of the same shape, the operation will be done on corresponding elements of the two arrays. Thus:```tripledata = doubledata + data```
###Code
tripledata = doubledata + data
###Output
_____no_output_____
###Markdown
will give you an array where `tripledata[0,0]` will equal `doubledata[0,0]` plus `data[0,0]`, and so on for all other elements of the arrays.```print('tripledata:')print(tripledata[:3, 36:])```
###Code
print('tripledata:')
print(tripledata[:3, 36:])
###Output
tripledata:
[[6. 9. 0. 0.]
[3. 3. 0. 3.]
[6. 6. 3. 3.]]
|
notebooks/.ipynb_checkpoints/Project-checkpoint.ipynb | ###Markdown
Covid-19: What Factors Impact Mortality? Our data was taken from the CDC general COVID-19 case dataset. Our first goal is to clean this data so that we can better predict which features impact mortality rate. (For Additional Info look in 'Cleaning.ipynb')
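Before dropping rows, it can help to see how many usable values each column holds. A minimal hedged sketch (it assumes `df` as created in the next cell, so it would be run after that load):

```
# Hedged sketch: category counts per column, including missing values, before filtering.
for col in ['current_status', 'sex', 'age_group', 'hosp_yn', 'icu_yn', 'death_yn', 'medcond_yn']:
    print(col, df[col].value_counts(dropna=False).to_dict())
```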
###Code
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
# Set up Data Frame to only work with Known Values and Drop Date Data.
df = pd.read_csv('../data/external/COVID-19_Case_Surveillance_Public_Use_Data.csv', low_memory = False)
df = df.drop(['cdc_report_dt', 'pos_spec_dt', 'onset_dt'], axis = 1)
df = df[df['current_status'] == 'Laboratory-confirmed case']
df = df[(df['sex']=='Male') | (df['sex']=='Female')]
df = df.drop(df[df['age_group'] == 'Unknown'].index)
df = df.drop(df[df['Race and ethnicity (combined)'] == 'Unknown'].index)
df = df[(df['hosp_yn']=='Yes') | (df['hosp_yn']=='No')]
df = df[(df['icu_yn']=='Yes') | (df['icu_yn']=='No')]
df = df[(df['death_yn']=='Yes') | (df['death_yn']=='No')]
df = df[(df['medcond_yn']=='Yes') | (df['medcond_yn']=='No')]
df.shape
###Output
_____no_output_____
###Markdown
Next we want to get a quick look at how each variable is distributed. (For Additional Info look in 'Describe.ipynb')
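A hedged alternative sketch: pandas can produce the same kind of bar chart directly from `value_counts`, which is sometimes more compact than calling `plt.bar` by hand (assumes `df` from above):

```
import matplotlib.pyplot as plt
df['age_group'].value_counts().sort_index().plot(kind='bar')
plt.ylabel('Count')
plt.show()
```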
###Code
# Import Plotting
import matplotlib.pyplot as plt
import seaborn as sns
#Sex
plt.bar(df['sex'].value_counts().index, df['sex'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('Sex')
plt.ylabel('Count')
plt.show()
#Age
plt.bar(df['age_group'].value_counts().sort_index().index, df['age_group'].value_counts().sort_index(),\
ecolor='black', align='center')
plt.xlabel('Age Group')
plt.ylabel('Count')
plt.xticks(rotation=45)
plt.show()
#Race & Ethnicity
plt.bar(df['Race and ethnicity (combined)'].value_counts().index, df['Race and ethnicity (combined)'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('Race & Ethnicity')
plt.ylabel('Count')
plt.xticks(rotation=90)
plt.show()
#Hospitalized
plt.bar(df['hosp_yn'].value_counts().index, df['hosp_yn'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('Hospitalization Status')
plt.ylabel('Count')
plt.show()
#ICU Status
plt.bar(df['icu_yn'].value_counts().index, df['icu_yn'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('ICU Status')
plt.ylabel('Count')
plt.show()
#Medcond Status
plt.bar(df['medcond_yn'].value_counts().index, df['medcond_yn'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('Medical Condition')
plt.ylabel('Count')
plt.show()
#Mortality
plt.bar(df['death_yn'].value_counts().index.astype(str), df['death_yn'].value_counts(),\
ecolor='black', align='center')
plt.xlabel('Mortality\nNo = Survived, Yes = Died')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
Next we want to look at how these variables affect the mortality rate in these COVID-19 cases. First we will use a Logistic Regression model.
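Before the cross-validated statsmodels fit below, a minimal hedged sketch of the same idea with scikit-learn's `LogisticRegression` (this is not the notebook's pipeline; it one-hot encodes the features itself and uses a single train/test split, assuming `df` from above):

```
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hedged sketch: one-hot encode categoricals, encode the target, fit a single logistic model.
X_sketch = pd.get_dummies(df.drop(['death_yn'], axis=1))
y_sketch = (df['death_yn'] == 'Yes').astype(int)
xTr, xTe, yTr, yTe = train_test_split(X_sketch, y_sketch, random_state=0)
clf_sketch = LogisticRegression(max_iter=1000).fit(xTr, yTr)
print('holdout accuracy:', clf_sketch.score(xTe, yTe))
```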
###Code
# Additional Imports
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.model_selection import KFold
import statsmodels.formula.api as smf
from sklearn.metrics import classification_report, confusion_matrix, auc, roc_curve, f1_score, roc_auc_score
# Important Variables for our Data
Y = df['death_yn']
Y[Y=='Yes']= 1
Y[Y=='No'] = 0
Y = Y.astype(int)
X_cat = df.drop(['death_yn'], axis = 1)
X_cat.columns = ['current_status', 'sex', 'age_group', 'Race_Ethnicity', 'hosp_yn', 'icu_yn', 'medcond_yn']
eq = 'death_yn ~ C(current_status) + C(sex) + C(age_group) +C(Race_Ethnicity) + C(hosp_yn) + C(icu_yn) +C(medcond_yn)'
# Logistic Regression Model
i=1
kf = KFold(n_splits=5) # We Will be using 5-Kfolds
dfa = pd.DataFrame()
ypreddf = pd.DataFrame()
classdf = pd.DataFrame()
for trainIndex, textIndex in kf.split(X_cat):
xTrain, xTest = X_cat.iloc[trainIndex], X_cat.iloc[textIndex]
yTrain, yTest = Y.iloc[trainIndex], Y.iloc[textIndex]
##Fitting model
dfTrain = pd.concat([xTrain, yTrain], axis = 1)
model = smf.logit(eq, data=dfTrain)
SMFresults = model.fit()
ypred = SMFresults.predict(xTest)
ypred[ypred>= 0.5] = 1
ypred[ypred<0.5] = 0
ypreddf = ypreddf.append(pd.concat([ypred, yTest], axis = 1, ignore_index = True))
##Storing Parameters
aic = SMFresults.aic
ci = SMFresults.conf_int()
pvals = SMFresults.pvalues
rsq = SMFresults.prsquared
coefs = SMFresults.params
logit_auc = roc_auc_score(yTest, ypred)
fpr, tpr, threshold = roc_curve(yTest, ypred)
##Appending Dataframes
#Main df
dfnew = pd.concat([coefs, ci, pvals], axis = 1)
dfnew.columns = ['Coeffs', 'Lower', 'Upper', 'P-value']
dfnew['Fold'] = i
dfnew['AIC'] = aic
dfnew['PRsq'] = rsq
dfnew['AUC'] = logit_auc
dfnew['TPR'] = tpr[1]
dfnew['FPR'] = fpr[1]
dfnew['F1'] = f1_score(y_true=yTest, y_pred = ypred)
dfa = dfa.append(dfnew)
#Classification df
rpt = pd.DataFrame(classification_report(yTest, ypred, output_dict=True)).transpose()
rpt['Fold'] = i
classdf = classdf.append(rpt)
i = i + 1
###Output
Optimization terminated successfully.
Current function value: 0.132287
Iterations 11
Optimization terminated successfully.
Current function value: 0.149844
Iterations 11
Optimization terminated successfully.
Current function value: 0.159531
Iterations 11
Optimization terminated successfully.
Current function value: 0.095630
Iterations 11
Optimization terminated successfully.
Current function value: 0.134211
Iterations 11
###Markdown
Now that we have run the model, we want to look at the coefficients, ROC curve, and the confusion matrix. (For Additional Info look in 'Logit.ipynb')
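As a hedged reading aid for the confusion matrix below (standard definitions, with 'died' = 1 as the positive class and rows as true labels): accuracy = (TN + TP) / total, precision = TP / (TP + FP), recall = TP / (TP + FN), and F1 is the harmonic mean of precision and recall — the same quantities collected in `classdf` above.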
###Code
# Clean Coeff
meandf = dfa.groupby(dfa.index).mean()
sddf = dfa.groupby(dfa.index).std()
indx = ["Asian", "Black", "Hispanic/Latino", "Multiple/Other", "Haw./Pac. Isldr.", "White"\
,"10 - 19 Years", "20 - 29 Years", "30 - 39 Years", "40 - 49 Years", "50 - 59 Years", "60 - 69 Years",\
"70 - 79 Years","80+ Years", "Hosp = Yes", "ICU = Yes", "Med. Cond. = Yes", "Male", "Intercept"]
meandf.index = indx
sddf.index = indx
# Graph of Coeff
meandf = meandf.sort_values('Coeffs')
plt.bar(meandf.index, meandf.loc[:,'Coeffs'], yerr=sddf.loc[:,'Coeffs'], ecolor='black', align='center')
plt.ylabel('Regression Coefficients')
plt.xticks(rotation=90)
plt.show()
# Confusion Matrix and Heatmap
# ypreddf column 0 holds predictions and column 1 holds the true labels,
# so pass them as (y_true, y_pred) to match the axis labels below
cm = confusion_matrix(ypreddf.iloc[:,1], ypreddf.iloc[:,0])
ax = sns.heatmap(cm, annot=True, cbar=False, fmt='g')
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
plt.title('Logistic Regression:\nConfusion Matrix on Predicting Mortality (0: survived, 1: died)')
plt.show()
# ROC Curve
fpr = np.array([0, meandf['FPR'][0], 1])
tpr = np.array([0, meandf['TPR'][0], 1])
plt.figure()
plt.plot(fpr, tpr, label='Logistic Regression (area = %0.2f)' % meandf['AUC'][0])
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
Next we want to use the MLPClassifier model. We will also compare the 'answers' we receive from both models afterwards.
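As a hedged sanity check on the size of this search, based on the grid defined below: 3 hidden-layer sizes × 4 activations × 1 solver × 3 alpha values × 3 learning-rate schedules = 108 parameter combinations, and 5-fold cross-validation refits each combination 5 times, which accounts for the 108 candidates and 540 fits reported by `GridSearchCV`.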
###Code
# New Imports
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.model_selection import StratifiedKFold, GridSearchCV, train_test_split
from sklearn.metrics import classification_report, confusion_matrix, auc, roc_curve
# Grid Search Parameters Set Up
parameters = [{
'hidden_layer_sizes': [(3,),(5,),(7,)],
'activation': ['identity', 'logistic', 'tanh', 'relu'],
'solver':['adam'], 'alpha':[0.00001, 0.0001, 0.001],
'learning_rate':['constant', 'invscaling', 'adaptive']
}]
# Get our new X & y
X = pd.get_dummies(df.drop(['death_yn'], axis = 1))
y = Y
# Split data into train and test splits
xTrain, xTest, yTrain, yTest = train_test_split(X, y)
# After Grid Search Picks best parameters for MLPClassifier we will run the model
clf = GridSearchCV(MLPClassifier(), parameters, cv=5, verbose=1)
clf.fit(xTrain, yTrain.values.ravel())
###Output
Fitting 5 folds for each of 108 candidates, totalling 540 fits
[Parallel(n_jobs=1)]: Using backend SequentialBackend with 1 concurrent workers.
[Parallel(n_jobs=1)]: Done 540 out of 540 | elapsed: 59.6min finished
###Markdown
Now that we have run the model, we want to look at the input weights, ROC curve, and the confusion matrix. (For Additional Info look in 'MLPCLassifier.ipynb & MLP_Input_Weights.ipynb')
###Code
# Check Best Params
clf.best_params_
# Get our y Predictions & results
yPred = clf.predict(xTest)
res = MLPClassifier(hidden_layer_sizes=(7,), activation='tanh', solver='adam', alpha=0.001, learning_rate = 'adaptive').fit(X, y.values.ravel())
# Input Weights (first get the weights)
input_weight_df = pd.DataFrame()
for i in range(len(res.coefs_[0])):
input_weight_df.loc[str(X.columns[i]),"Input Layer Weight"] = res.coefs_[0][i][0]
input_weight_df.index = ['Lab-confirmed case', 'Female', 'Male',\
'0 - 9 Years', '10 - 19 Years',\
'20 - 29 Years', '30 - 39 Years',\
'40 - 49 Years', '50 - 59 Years',\
'60 - 69 Years', '70 - 79 Years',\
'80+ Years',\
'American Indn./AK Nat.',\
'Asian',\
'Black',\
'Hispanic/Latino',\
'Multiple/Other',\
'Nat. HI/Other Pac. Isl.',\
'White', 'Hosp_No',\
'Hosp_Yes', 'ICU_No', 'ICU_Yes', 'Medcond_No',\
'Medcond_Yes']
# Plot the weights
input_weight_df = input_weight_df.sort_values('Input Layer Weight')
plt.bar(input_weight_df.index, input_weight_df.loc[:,'Input Layer Weight'], ecolor='black', align='center')
plt.ylabel('Input Layer Weight')
plt.xticks(rotation=90)
plt.show()
# Confusion Matrix and Heatmap
cm = confusion_matrix(yTest, yPred)
ax = sns.heatmap(cm, annot=True, cbar=False, fmt='g')
bottom, top = ax.get_ylim()
ax.set_ylim(bottom + 0.5, top - 0.5)
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
plt.title('MLPClassification:\nConfusion Matrix on Predicting Mortality (0: survived, 1: died)')
plt.show()
# ROC Curve
fpr, tpr, thresholds = roc_curve(yTest, yPred)
roc_auc = auc(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic')
plt.plot(fpr, tpr, label = 'MLPClassifier (area = {:f})'.format(roc_auc))
plt.legend(loc="lower right")
plt.plot([0, 1], [0, 1],'r--')
plt.plot(fpr, tpr, lw=2)
plt.show()
###Output
_____no_output_____ |
King of the World - Titanic Survical.ipynb | ###Markdown
The following is an attempt at predicting whether a passenger on the Titanic survived or not, based on attributes such as: Passenger Class, Sex, Age, Number of siblings/spouses, Fare, Port of embarkation. More details about the dataset can be found here: https://www.kaggle.com/c/titanic/data
###Code
# We import the required libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from collections import Counter
# Load train and test datasets
train = pd.read_csv("/Users/ankit/Desktop/Titanic Dataset/train.csv")
test = pd.read_csv("/Users/ankit/Desktop/Titanic Dataset/test.csv")
train.head()
###Output
_____no_output_____
###Markdown
Since outliers can have a dramatic effect on the prediction, we first deal with them. We have used the Tukey method to detect outliers, which defines an interquartile range comprised between the 1st and 3rd quartile of the distribution values (IQR). An outlier is a row that has a feature value outside the (IQR +- an outlier step) range. We detect outliers from the numerical features (Age, SibSp, Parch and Fare). Then, we consider outliers as rows that have at least two outlying numerical values.
###Code
# Outlier detection
# Reference - https://www.kaggle.com/yassineghouzam/titanic-top-4-with-ensemble-modeling/notebook
def detect_outliers(df,n,features):
"""
Takes a dataframe df of features and returns a list of the indices
corresponding to the observations containing more than n outliers according
to the Tukey method.
"""
outlier_indices = []
# iterate over features(columns)
for col in features:
# 1st quartile (25%)
Q1 = np.percentile(df[col], 25)
# 3rd quartile (75%)
Q3 = np.percentile(df[col],75)
# Interquartile range (IQR)
IQR = Q3 - Q1
# outlier step
outlier_step = 1.5 * IQR
# Determine a list of indices of outliers for feature col
outlier_list_col = df[(df[col] < Q1 - outlier_step) | (df[col] > Q3 + outlier_step )].index
# append the found outlier indices for col to the list of outlier indices
outlier_indices.extend(outlier_list_col)
# select observations containing more than 2 outliers
outlier_indices = Counter(outlier_indices)
multiple_outliers = list( k for k, v in outlier_indices.items() if v > n )
return multiple_outliers
# detect outliers from Age, SibSp , Parch and Fare
Outliers_to_drop = detect_outliers(train,2,["Age","SibSp","Parch","Fare"])
# Show the outliers rows
train.loc[Outliers_to_drop]
###Output
_____no_output_____
###Markdown
We detect 10 outliers. Passengers 28, 89 and 342 have a high Ticket Fare. The 7 others have very high values of SibSp.
###Code
# Drop outliers
train = train.drop(Outliers_to_drop, axis = 0).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Joining train and test set
###Code
## Join train and test datasets in order to obtain the same number of features during categorical conversion
train_len = len(train)
dataset = pd.concat(objs=[train, test], axis=0).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Check for null and missing values
###Code
# Fill empty and NaNs values with NaN
dataset = dataset.fillna(np.nan)
# Check for Null values
dataset.isnull().sum()
###Output
_____no_output_____
###Markdown
** Age and Cabin features have a large proportion of missing values. **** Survived missing values correspond to the joined test dataset (the Survived column doesn't exist in the test set and has been replaced by NaN values when concatenating the train and test sets) **
###Code
dataset.head()
#Removing all rows which do not have a 'Survived' column value
dataset.dropna(subset = ['Survived'],inplace=True)
dataset.Survived.isnull().sum()
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 881 entries, 0 to 880
Data columns (total 12 columns):
Age 711 non-null float64
Cabin 201 non-null object
Embarked 879 non-null object
Fare 881 non-null float64
Name 881 non-null object
Parch 881 non-null int64
PassengerId 881 non-null int64
Pclass 881 non-null int64
Sex 881 non-null object
SibSp 881 non-null int64
Survived 881 non-null float64
Ticket 881 non-null object
dtypes: float64(3), int64(4), object(5)
memory usage: 89.5+ KB
###Markdown
** Splitting into features and labels **
###Code
# converting the label values from float to int
y = dataset.Survived.astype(int)
y.head()
x = dataset.drop('Survived',axis = 1)
x.head()
###Output
_____no_output_____
###Markdown
Separating train and test datasets
###Code
## Separate train dataset and test dataset
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42, stratify = y)
x_train.head()
###Output
_____no_output_____
###Markdown
** Selecting features on which to train our models **
###Code
x_train_cols = x_train[['Fare','Parch','Pclass','Sex','SibSp']]
x_train_cols.head()
# Doing the same with the test set
x_test_cols = x_test[['Fare','Parch','Pclass','Sex','SibSp']]
###Output
_____no_output_____
###Markdown
Creating dummy variables
###Code
# Get dummy variables for categorical feature 'Sex' thus encoding it as 0 and 1
x_train_cat1 = x_train['Sex'].str.get_dummies()
x_train_cat1.head()
# Doing the same with the test set
x_test_cat1 = x_test['Sex'].str.get_dummies()
# Concatenate both the dataframes along axis = 1 (columns)
x_train_cols = pd.concat(objs = [x_train_cols, x_train_cat1],axis = 1)
x_train_cols.head()
# Doing the same with the test set
x_test_cols = pd.concat(objs = [x_test_cols, x_test_cat1],axis = 1)
# Drop categorical 'Sex' column
x_train_cols.drop(columns = 'Sex',axis=1,inplace=True)
x_train_cols.head()
# Doing the same with the test set
x_test_cols.drop (columns = 'Sex',axis=1,inplace=True)
x_test_cols.head()
y_train.head()
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
from sklearn.tree import DecisionTreeClassifier
tree_clf = DecisionTreeClassifier(random_state=42,max_depth = 3)
tree_clf.fit(x_train_cols,y_train)
y_pred = tree_clf.predict(x_test_cols)
from sklearn.metrics import classification_report,confusion_matrix
print('Confusion Matrix:')
print('\n')
print(confusion_matrix(y_test,y_pred))
print('\n')
print('Classification Report:')
print('\n')
print(classification_report(y_test,y_pred))
###Output
Confusion Matrix:
[[97 12]
[27 41]]
Classification Report:
precision recall f1-score support
0 0.78 0.89 0.83 109
1 0.77 0.60 0.68 68
avg / total 0.78 0.78 0.77 177
###Markdown
Bagging ** Reference: **** Aurélien Géron. “Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. ** Bagging (Bootstrap Aggregating - sampling with replacement)
###Code
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
bag_clf = BaggingClassifier(
DecisionTreeClassifier(), n_estimators=500,
max_samples=100, bootstrap=True, n_jobs=-1)
bag_clf.fit(x_train_cols, y_train)
y_pred_bag = bag_clf.predict(x_test_cols)
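# Illustrative follow-up: evaluate the bagged ensemble on the held-out split,
# in the same style as the single decision tree above.
print(confusion_matrix(y_test, y_pred_bag))
print(classification_report(y_test, y_pred_bag))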
###Output
_____no_output_____
###Markdown
Out-of-Bag Evaluation With bagging, some instances might be sampled more than once while others might not be sampled at all. On average only about 63% of training instances are sampled. The remaining ~37% of instances are called out-of-bag instances, and since the model has never seen them during training, it can be evaluated on these out-of-bag instances.
###Code
oob_clf = BaggingClassifier(DecisionTreeClassifier(), n_estimators=500,bootstrap=True, n_jobs=-1, oob_score=True)
oob_clf.fit(x_train_cols, y_train)
oob_clf.oob_score_
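# Where the ~63% figure comes from (illustrative check): a bootstrap draw picks
# a given instance with probability 1/n, so the chance of never being picked in
# n draws is (1 - 1/n)**n, which tends to 1/e ~ 0.368. Hence ~63.2% of instances
# are sampled and ~36.8% end up out-of-bag.
import math
n = len(x_train_cols)
print(1 - (1 - 1/n)**n, 1 - 1/math.e)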
###Output
_____no_output_____
###Markdown
Random Forest We are going to train a Random Forest model, which is an ensemble of trees where sampling is done by the Bagging (Bootstrap Aggregating - sampling with replacement) We are also going to specify some hyperparameters as follows:• n_estimators = number of decision trees in the forest. Random forest aggregates all predictions via either hardsoft voting.• max_leaf_nodes = maximum number of leaf nodes• n_jobs = number of CPU cores to be utilized for the job. n=-1 means engaging all cores.
###Code
from sklearn.ensemble import RandomForestClassifier
rf_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, n_jobs=-1 ,random_state=42)
rf_clf.fit(x_train_cols,y_train)
y_pred_rf=rf_clf.predict(x_test_cols)
print('Confusion Matrix:')
print('\n')
print(confusion_matrix(y_test,y_pred_rf))
print('\n')
print('Classification Report:')
print('\n')
print(classification_report(y_test,y_pred_rf))
###Output
Confusion Matrix:
[[99 10]
[27 41]]
Classification Report:
precision recall f1-score support
0 0.79 0.91 0.84 109
1 0.80 0.60 0.69 68
avg / total 0.79 0.79 0.78 177
###Markdown
** Random Forest only gives us slightly better results ** Hyperparameter Tuning
###Code
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
The model is tuned on the following hyperparameters using a grid search: • min_samples_split: the minimum number of samples a node must have for the tree to split on it • bootstrap: sampling with replacement • n_estimators: number of decision trees in our forest • criterion: defines the cost function which the forest tries to minimize in order to split the node. gini is a measure of the purity of the node
###Code
rf_param_grid = {"max_depth": [None],
"min_samples_split": [2, 3, 10],
"min_samples_leaf": [1, 3, 10],
"bootstrap": [False],
"n_estimators" :[100,300],
"criterion": ['gini','entropy']}
gsRFC = GridSearchCV(rf_clf,param_grid = rf_param_grid,n_jobs = -1)
gsRFC.fit(x_train_cols,y_train)
# Best Estimator
RFC_best = gsRFC.best_estimator_
RFC_best
# Best score
gsRFC.best_score_
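# Illustrative follow-up: score the tuned estimator on the held-out test set,
# mirroring the evaluation of the earlier models.
y_pred_best = RFC_best.predict(x_test_cols)
print(classification_report(y_test, y_pred_best))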
###Output
_____no_output_____
###Markdown
Boosting The main idea behind boosting methods is to train predictors sequentially, each trying to correct its predecessor. Below I display AdaBoost, one of the most popular boosting methods. AdaBoost Here, we train an AdaBoost classifier based on 200 Decision Stumps (Decision Trees with the max_depth hyperparameter set to 1).
###Code
from sklearn.ensemble import AdaBoostClassifier
ada_clf = AdaBoostClassifier(
DecisionTreeClassifier(max_depth=1), n_estimators=200,
algorithm="SAMME.R", learning_rate=0.5)
ada_clf.fit(x_train_cols, y_train)
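# Illustrative evaluation of the boosted ensemble on the held-out split,
# in the same style as the earlier classifiers.
y_pred_ada = ada_clf.predict(x_test_cols)
print(confusion_matrix(y_test, y_pred_ada))
print(classification_report(y_test, y_pred_ada))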
###Output
_____no_output_____ |
speech_emotion_recognition/ensemble_validation/ensemble_validation_notebooks/Ensemble_Validation_Avg_1_Threshold_0.7.ipynb | ###Markdown
Configuration
###Code
import os
from tqdm.notebook import tqdm
from tqdm import tqdm
import librosa
import pandas as pd
import numpy as np
from sklearn.metrics import classification_report
###Output
_____no_output_____
###Markdown
Clean Data - Compute dataframes for the datasets and split into Train, Val, Test
###Code
main_path = '/Users/helemanc/Documents/MasterAI/THESIS/Datasets SER'
TESS = os.path.join(main_path, "tess/TESS Toronto emotional speech set data/")
RAV = os.path.join(main_path, "ravdess-emotional-speech-audio/audio_speech_actors_01-24")
SAVEE = os.path.join(main_path, "savee/ALL/")
CREMA = os.path.join(main_path, "creamd/AudioWAV/")
###Output
_____no_output_____
###Markdown
RAVDESS
###Code
lst = []
emotion = []
voc_channel = []
full_path = []
modality = []
intensity = []
actors = []
phrase =[]
for root, dirs, files in tqdm(os.walk(RAV)):
for file in files:
try:
#Load librosa array, obtain mfcss, store the file and the mfcss information in a new array
# X, sample_rate = librosa.load(os.path.join(root,file), res_type='kaiser_fast')
# mfccs = np.mean(librosa.feature.mfcc(y=X, sr=sample_rate, n_mfcc=40).T,axis=0)
# The instruction below converts the labels (from 1 to 8) to a series from 0 to 7
# This is because our predictor needs to start from 0 otherwise it will try to predict also 0.
modal = int(file[1:2])
vchan = int(file[4:5])
lab = int(file[7:8])
ints = int(file[10:11])
phr = int(file[13:14])
act = int(file[18:20])
# arr = mfccs, lab
# lst.append(arr)
modality.append(modal)
voc_channel.append(vchan)
emotion.append(lab) #only labels
intensity.append(ints)
phrase.append(phr)
actors.append(act)
full_path.append((root, file)) # only files
# If the file is not valid, skip it
except ValueError:
continue
# 01 = neutral, 02 = calm, 03 = happy, 04 = sad, 05 = angry, 06 = fearful, 07 = disgust, 08 = surprised
# merge neutral and calm
emotions_list = ['neutral', 'neutral', 'happy', 'sadness', 'angry', 'fear', 'disgust', 'surprise']
emotion_dict = {em[0]+1:em[1] for em in enumerate(emotions_list)}
df = pd.DataFrame([emotion, voc_channel, modality, intensity, actors, actors,phrase, full_path]).T
df.columns = ['emotion', 'voc_channel', 'modality', 'intensity', 'actors', 'gender', 'phrase', 'path']
df['emotion'] = df['emotion'].map(emotion_dict)
df['voc_channel'] = df['voc_channel'].map({1: 'speech', 2:'song'})
df['modality'] = df['modality'].map({1: 'full AV', 2:'video only', 3:'audio only'})
df['intensity'] = df['intensity'].map({1: 'normal', 2:'strong'})
df['actors'] = df['actors']
df['gender'] = df['actors'].apply(lambda x: 'female' if x%2 == 0 else 'male')
df['phrase'] = df['phrase'].map({1: 'Kids are talking by the door', 2:'Dogs are sitting by the door'})
df['path'] = df['path'].apply(lambda x: x[0] + '/' + x[1])
# remove files with noise to apply the same noise to all files for data augmentation
df = df[~df.path.str.contains('noise')]
df.head()
# only speech
RAV_df = df
RAV_df = RAV_df.loc[RAV_df.voc_channel == 'speech']
RAV_df.insert(0, "emotion_label", RAV_df.emotion, True)
RAV_df = RAV_df.drop(['emotion', 'voc_channel', 'modality', 'intensity', 'phrase'], 1)
RAV_df
RAV_train = []
RAV_val = []
RAV_test = []
for index, row in RAV_df.iterrows():
if row['actors'] in range(1,21):
RAV_train.append(row)
elif row['actors'] in range(21,23):
RAV_val.append(row)
elif row['actors'] in range(23,25):
RAV_test.append(row)
len(RAV_train), len(RAV_val), len(RAV_test)
RAV_train = pd.DataFrame(RAV_train)
RAV_val = pd.DataFrame(RAV_val)
RAV_test = pd.DataFrame(RAV_test)
RAV_train = RAV_train.drop(['actors'], 1)
RAV_val = RAV_val.drop(['actors'], 1)
RAV_test = RAV_test.drop(['actors'], 1)
RAV_train.reset_index(drop=True, inplace = True)
RAV_val.reset_index(drop=True, inplace = True)
RAV_test.reset_index(drop=True, inplace = True )
###Output
_____no_output_____
###Markdown
SAVEE
###Code
# Get the data location for SAVEE
dir_list = os.listdir(SAVEE)
# parse the filename to get the emotions
emotion=[]
path = []
actors = []
gender = []
for i in dir_list:
actors.append(i[:2])
if i[-8:-6]=='_a':
emotion.append('angry')
gender.append('male')
elif i[-8:-6]=='_d':
emotion.append('disgust')
gender.append('male')
elif i[-8:-6]=='_f':
emotion.append('fear')
gender.append('male')
elif i[-8:-6]=='_h':
emotion.append('happy')
gender.append('male')
elif i[-8:-6]=='_n':
emotion.append('neutral')
gender.append('male')
elif i[-8:-6]=='sa':
emotion.append('sadness')
gender.append('male')
elif i[-8:-6]=='su':
emotion.append('surprise')
gender.append('male')
else:
emotion.append('Unknown')
path.append(SAVEE + i)
# Now check out the label count distribution
SAVEE_df = pd.DataFrame(emotion, columns = ['emotion_label'])
SAVEE_df = pd.concat([SAVEE_df,
pd.DataFrame(actors, columns = ['actors']),
pd.DataFrame(gender, columns = ['gender']),
pd.DataFrame(path, columns = ['path'])], axis = 1)
SAVEE_df.emotion_label.value_counts()
SAVEE_df.head()
SAVEE_train = []
SAVEE_val = []
SAVEE_test = []
#DC, JE, JK, KL
for index, row in SAVEE_df.iterrows():
if row['actors'] == 'DC' or row ['actors'] == 'JE':
SAVEE_train.append(row)
elif row['actors'] == 'JK':
SAVEE_val.append(row)
else:
SAVEE_test.append(row)
len(SAVEE_train), len(SAVEE_val), len(SAVEE_test)
SAVEE_train = pd.DataFrame(SAVEE_train)
SAVEE_val = pd.DataFrame(SAVEE_val)
SAVEE_test = pd.DataFrame(SAVEE_test)
SAVEE_train = SAVEE_train.drop(['actors'], 1)
SAVEE_val = SAVEE_val.drop(['actors'], 1)
SAVEE_test = SAVEE_test.drop(['actors'], 1)
SAVEE_train = SAVEE_train.reset_index(drop=True)
SAVEE_val = SAVEE_val.reset_index(drop=True)
SAVEE_test = SAVEE_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
TESS
###Code
dir_list = os.listdir(TESS)
dir_list.sort()
dir_list
path = []
emotion = []
gender = []
actors = []
for i in dir_list:
fname = os.listdir(TESS + i)
for f in fname:
if i == 'OAF_angry':
emotion.append('angry')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_angry':
emotion.append('angry')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_disgust' :
emotion.append('disgust')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_disgust':
emotion.append('disgust')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_Fear':
emotion.append('fear')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_fear':
emotion.append('fear')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_happy' :
emotion.append('happy')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_happy':
emotion.append('happy')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_neutral':
emotion.append('neutral')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_neutral':
emotion.append('neutral')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_Pleasant_surprise':
emotion.append('surprise')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_pleasant_surprised':
emotion.append('surprise')
gender.append('female')
actors.append('YAF')
elif i == 'OAF_Sad':
emotion.append('sadness')
gender.append('female')
actors.append('OAF')
elif i == 'YAF_sad':
emotion.append('sadness')
gender.append('female')
actors.append('YAF')
else:
emotion.append('Unknown')
path.append(TESS + i + "/" + f)
TESS_df = pd.DataFrame(emotion, columns = ['emotion_label'])
TESS_df = pd.concat([TESS_df, pd.DataFrame(gender, columns = ['gender']),
pd.DataFrame(actors, columns= ['actors']),
pd.DataFrame(path, columns = ['path'])],axis=1)
TESS_df.emotion_label.value_counts()
TESS_df= TESS_df[~TESS_df.path.str.contains('noise')]
TESS_train = []
TESS_test = []
for index, row in TESS_df.iterrows():
if row['actors'] == 'YAF':
TESS_train.append(row)
else:
TESS_test.append(row)
len(TESS_train), len(TESS_test)
TESS_train = pd.DataFrame(TESS_train)
TESS_test = pd.DataFrame(TESS_test)
TESS_train = TESS_train.drop(['actors'], 1)
TESS_test = TESS_test.drop(['actors'], 1)
TESS_train = TESS_train.reset_index(drop=True)
TESS_test = TESS_test.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
CREMA-D
###Code
males = [1,
5,
11,
14,
15,
16,
17,
19,
22,
23,
26,
27,
31,
32,
33,
34,
35,
36,
38,
39,
41,
42,
44,
45,
48,
50,
51,
57,
59,
62,
64,
65,
66,
67,
68,
69,
70,
71,
77,
80,
81,
83,
85,
86,
87,
88,
90]
females = [ 2,
3,
4,
6,
7,
8,
9,
10,
12,
13,
18,
20,
21,
24,
25,
28,
29,
30,
37,
40,
43,
46,
47,
49,
52,
53,
54,
55,
56,
58,
60,
61,
63,
72,
73,
74,
75,
76,
78,
79,
82,
84,
89,
91]
crema_directory_list = os.listdir(CREMA)
file_emotion = []
file_path = []
actors = []
gender = []
for file in crema_directory_list:
# storing file emotions
part=file.split('_')
# use only high intensity files
if "HI" in part[3] :
actor = part[0][2:]
actors.append(actor)
if int(actor) in males:
gender.append('male')
else:
gender.append('female')
# storing file paths
file_path.append(CREMA + file)
if part[2] == 'SAD':
file_emotion.append('sadness')
elif part[2] == 'ANG':
file_emotion.append('angry')
elif part[2] == 'DIS':
file_emotion.append('disgust')
elif part[2] == 'FEA':
file_emotion.append('fear')
elif part[2] == 'HAP':
file_emotion.append('happy')
elif part[2] == 'NEU':
file_emotion.append('neutral')
else:
file_emotion.append('Unknown')
# dataframe for emotion of files
emotion_df = pd.DataFrame(file_emotion, columns=['emotion_label'])
# dataframe for path of files.
path_df = pd.DataFrame(file_path, columns=['path'])
actors_df = pd.DataFrame(actors, columns=['actors'])
gender_df = pd.DataFrame(gender, columns=['gender'])
Crema_df = pd.concat([emotion_df, actors_df, gender_df, path_df], axis=1)
Crema_df.head()
Crema_df.shape
actor_files = {}
for index, row in Crema_df.iterrows():
actor = row['actors']
if actor not in actor_files.keys():
actor_files[actor] = 1
else:
actor_files[actor]+=1
actor_files
count_males = 0
count_females = 0
male_list = []
for index, row in Crema_df.iterrows():
gender = row['gender']
actor = row['actors']
if gender == 'male':
count_males +=1
if actor not in male_list:
male_list.append(actor)
else:
count_females +=1
count_males, count_females
###Output
_____no_output_____
###Markdown
Since there are more males than females we will remove randomly 3 male actors (since there are exactly 5 audio files per actor)
###Code
import random
'''
random.seed(42)
males_to_remove = random.sample(male_list, 3)
males_to_remove
'''
males_to_remove = ['17', '80', '88']
new_df = []
for index, row in Crema_df.iterrows():
if row['actors'] not in males_to_remove:
new_df.append(row)
CREMA_df = pd.DataFrame(new_df)
for index, row in CREMA_df.iterrows():
if row['actors'] == '17':
print("Elements not removed")
count_males = 0
count_females = 0
male_list = []
female_list = []
for index, row in CREMA_df.iterrows():
gender = row['gender']
actor = row['actors']
if gender == 'male':
count_males +=1
if actor not in male_list:
male_list.append(actor)
else:
count_females +=1
if actor not in female_list:
female_list.append(actor)
count_males, count_females
len(female_list)
len(male_list)
CREMA_train = []
CREMA_val = []
CREMA_test = []
females_train = random.sample(female_list, 32)
males_train = random.sample(male_list, 32)
# remove the elements assigned to train
for element in females_train:
if element in female_list:
female_list.remove(element)
for element in males_train:
if element in male_list:
male_list.remove(element)
females_val = random.sample(female_list, 6)
males_val = random.sample(male_list, 6)
# remove the elements assigned to val
for element in females_val:
if element in female_list:
female_list.remove(element)
for element in males_val:
if element in male_list:
male_list.remove(element)
females_test = random.sample(female_list, 6)
males_test = random.sample(male_list, 6)
females_train, males_train, females_val, males_val, females_test, males_test
train = females_train + males_train
val = females_val + males_val
test = females_test + males_test
for index, row in CREMA_df.iterrows():
gender = row['gender']
actor = row['actors']
if actor in train:
CREMA_train.append(row)
elif actor in val:
CREMA_val.append(row)
else:
CREMA_test.append(row)
CREMA_train = pd.DataFrame(CREMA_train)
CREMA_val = pd.DataFrame(CREMA_val)
CREMA_test = pd.DataFrame(CREMA_test)
CREMA_train.shape, CREMA_val.shape, CREMA_test.shape
CREMA_train = CREMA_train.drop(['actors'], 1)
CREMA_val = CREMA_val.drop(['actors'], 1)
CREMA_test = CREMA_test.drop(['actors'], 1)
CREMA_train = CREMA_train.reset_index(drop=True)
CREMA_val = CREMA_val.reset_index(drop = True)
CREMA_test = CREMA_test.reset_index(drop = True)
###Output
_____no_output_____
###Markdown
Utils Validation Ensemble
###Code
# import main
from inaSpeechSegmenter import Segmenter
from argparse import ArgumentParser
import utils
import warnings
# import utils
from speech_emotion_recognition import feature_extraction as fe, ensemble
import scipy
import numpy as np
from scipy import signal
from scipy.io.wavfile import write
from utils import resample, denoise
# other imports
import sklearn
import tensorflow as tf
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR)
# print('The scikit-learn version is {}.'.format(sklearn.__version__))
#!pip install scikit-learn==0.24.2
#!jupyter nbextension enable --py widgetsnbextension
#!jupyter labextension install @jupyter-widgets/jupyterlab-manager
def make_predictions(dataset, labels, prediction_scheme):
predictions = []
model_predictions_list = []
counter = 0
for filepath in tqdm(dataset['path']):
samples, sample_rate = fe.read_file(filepath)
samples, sample_rate = resample(samples, sample_rate)
new_samples = fe.cut_pad(samples)
#new_filepath = "tmp.wav"
final_prediction, model_predictions = ensemble.ensemble(new_samples, prediction_scheme, return_model_predictions = True)
predictions.append(final_prediction)
model_predictions_list.append(model_predictions)
print("True label", labels[counter], "Predicted label", predictions[counter])
counter+=1
return predictions, model_predictions_list
def create_dataframe_prediction_per_model(model_predictions_list):
df_model_predictions = pd.DataFrame(model_predictions_list)
df_model_predictions = df_model_predictions.reindex(sorted(df_model_predictions.columns), axis=1)
return df_model_predictions
def create_dataframe_predictions(prediction_list):
df_predictions = pd.DataFrame(prediction_list)
return df_predictions
def create_dataframe_res(labels, df_predictions, df_model_predictions, dataset):
df_res = pd.concat([labels,
df_predictions,
df_model_predictions,
dataset.path], axis = 1, ignore_index=True, sort=False)
header_model_predictions = list(df_model_predictions.columns)
new_header = []
new_header.append('true_label')
new_header.append('pred_label')
new_header = new_header + header_model_predictions
new_header.append('path')
df_res.columns = new_header
return df_res
def create_dataframes_false_positives_false_negatives(df_res):
misclassified_rows_false_positives = []
misclassified_rows_false_negatives = []
for index, row in df_res.iterrows():
true = df_res.true_label[index]
pred = df_res.pred_label[index]
if true != pred: # store misclassified files
if true ==1 and pred == 0: # disruptive classified as non-disruptive is a false negative
misclassified_rows_false_negatives.append(row)
else:
misclassified_rows_false_positives.append(row)
df_false_negatives = pd.DataFrame(misclassified_rows_false_negatives)
df_false_negatives.reset_index(drop=True, inplace=True)
df_false_positives = pd.DataFrame(misclassified_rows_false_positives)
df_false_positives.reset_index(drop=True, inplace=True)
return df_false_positives, df_false_negatives
def print_hist_models_predictions_fp(df_false_positives):
if df_false_positives.empty:
print('DataFrame False Positives is empty!')
return 0
df_models_fp = df_false_positives.copy()
df_models_fp.drop(columns=['true_label', 'pred_label', 'path'])
df_models_fp.hist(figsize=(24,24))
def print_hist_models_predictions_fn(df_false_negatives):
if df_false_negatives.empty:
print('DataFrame False Negatives is empty!')
return 0
df_models_fn = df_false_negatives.copy()
df_models_fn.drop(columns=['true_label', 'pred_label', 'path'])
df_models_fn.hist(figsize=(24,24))
###Output
_____no_output_____
###Markdown
Validation - RAVDESS We use the same code as the main.py of the real application, without the VAD module
###Code
emotion_enc = {'fear':1, 'disgust':1, 'neutral':0, 'calm':0, 'happy':0, 'sadness':1, 'surprise':0, 'angry':1}
labels= pd.Series(list(RAV_test.emotion_label)).replace(emotion_enc)
predictions, model_prediction_list = make_predictions(RAV_test, labels, prediction_scheme='avg_1')
df_model_predictions = create_dataframe_prediction_per_model(model_prediction_list)
df_predictions = create_dataframe_predictions(predictions)
df_res = create_dataframe_res(labels, df_predictions, df_model_predictions, RAV_test)
df_fp, df_fn = create_dataframes_false_positives_false_negatives(df_res)
print_hist_models_predictions_fp(df_fp)
print_hist_models_predictions_fn(df_fn)
print(classification_report(df_res.true_label, df_res.pred_label))
csv_path = "/Users/helemanc/PycharmProjects/ambient-intelligence/speech_emotion_recognition/ensemble_validation_results/avg_1_validation_ravdess_0_7.csv"
df_res.to_csv(csv_path)
###Output
_____no_output_____
###Markdown
Validation - CREMA We use the same code as the main.py of the real application, without the VAD module
###Code
emotion_enc = {'fear':1, 'disgust':1, 'neutral':0, 'calm':0, 'happy':0, 'sadness':1, 'surprise':0, 'angry':1}
labels= pd.Series(list(CREMA_test.emotion_label)).replace(emotion_enc)
predictions, model_prediction_list = make_predictions(CREMA_test, labels, prediction_scheme='avg_1')
df_model_predictions = create_dataframe_prediction_per_model(model_prediction_list)
df_predictions = create_dataframe_predictions(predictions)
df_res = create_dataframe_res(labels, df_predictions, df_model_predictions, CREMA_test)
df_fp, df_fn = create_dataframes_false_positives_false_negatives(df_res)
print_hist_models_predictions_fp(df_fp)
print_hist_models_predictions_fn(df_fn)
print(classification_report(df_res.true_label, df_res.pred_label))
csv_path = "/Users/helemanc/PycharmProjects/ambient-intelligence/speech_emotion_recognition/ensemble_validation_results/avg_1_validation_crema_0_7.csv"
df_res.to_csv(csv_path)
###Output
_____no_output_____ |
NumPy-Revisited/NumpyStack.ipynb | ###Markdown
Numpy stack: numpy, pandas, scipy and matplotlib
###Code
import time
###Output
_____no_output_____
###Markdown
Numpy - for arrays and matrices
###Code
import numpy as np
# in numpy vectors = 1D arrays, rest are matrices
L = [1,2,3]
A = np.array(L)
for e in L:
print(e)
print("\n")
for e in A:
print(e)
# Python broadcasting
A = A + np.array([4])
print(A)
A = np.array([[1,2,3],
[0,0,0]])
B = np.array([4,5,6])
print(A+B)
# scalar operations
A = A+B
print(A**2)
print(A*2)
# numpy algebraic functions
print(np.sqrt(A))
print(np.log(A))
print(np.exp(B))
print(np.tanh(A))
print(np.sin(B))
# dot product
a = np.array([1,2])
b = np.array([3,4])
dot = np.dot(a.T,b) # standard method of dot product
print(dot)
dot = (a*b).sum() # A.B = a1*b1+a2*b2+...aN*bN
print(dot)
print(a@b) # dot product syntax in newer versions
# Matrices
A = np.array([[1,2,3],
[3,4,5],
[10,9,7]])
print(A[:,0]) # select every row value with column = 0
print(A[2,:]) # select every column value with row = 2
print(A.T) # transpose of matrix
B = np.array([[1,2],
[3,4],
[5,6]])
C = np.dot(A,B) # matrix multiplication == dot product
print(C)
# Inverse of a matrix - will throw exception if singular
try:
Inv = np.linalg.inv(A)
except:
print("Singular matrix so inverse not possible")
# trace of a matrix
tr = np.trace(A)
print(tr)
# leading diagonal of the matrix
np.diag(A)
# eigenvalues and eigenvectors of a matrix
eigen = np.linalg.eig(A)
print(eigen[0]) # eigenvalues
print(eigen[1]) # eigenvectors
# correct way to check equality of matrices (as floating point precisions may cause erroneous results)
X = np.array([1,2,3])
Y = np.array([[1,2,3]])
np.allclose(X,Y) # returns True if matrices are equal, else False
'''
Solving linear systems
AX = B => X = np.linag.solve(A,B) solves using Gaussian Elimination
N.B. X = np.dot(np.linalg.inv(A),B) solution is very inefficient
and slow for large matrices
'''
A = np.array([[8,4],
[2.5,-8]])
B = np.array([28.40,750])
start=time.time()
X = np.linalg.solve(A,B)
end=time.time()
print(str((end-start)*1000))
start=time.time()
X = np.dot(np.linalg.inv(A),B)
end=time.time()
print(str((end-start)*1000))
print(X)
# generating random data for NN training/ML practices
np.zeros((2,4)) # matrix of 0's
np.ones((2,3)) # matrix of 1's
np.ones((2,3))*7, np.full((2,3),7) # matrix of any constant number
np.eye(4) , np.identity(4) # creates n*n identity matrix
np.random.randn(3,4) # randomly initialises a 3x4 matrix - Normal/Gaussian distribution
np.random.rand(3,4) # randomly initialises a 3x4 matrix - Uniform distribution
# Statistics with NumPy
R = np.random.randn(10000) # 10^4 vector
np.mean(R) # mean of the matrix
np.var(R) # variance of matrix
np.std(R) # Standard deviation of matrix = sqrt(variance)
R = np.random.randn(10000,4) # matrix
np.mean(R,axis=1) # mean of each row (computed across columns)
np.mean(R,axis=0) # mean of each column (computed across rows)
np.std(R,axis=1) # standard deviation of each row
# axis=0 collapses rows (one result per column), axis=1 collapses columns (one result per row)
# axis parameter is applicable for almost all algebraic operations
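# Concrete illustration of the axis argument on a small matrix:
M = np.array([[1, 2, 3], [4, 5, 6]])
print(np.mean(M, axis=0)) # [2.5 3.5 4.5] -> one mean per column
print(np.mean(M, axis=1)) # [2. 5.] -> one mean per row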
###Output
_____no_output_____
###Markdown
Matplotlib - data visualising library
###Code
import matplotlib.pyplot as plt
# Line Charts
x = np.linspace(0,20,1000) # 1000 equally spaced data values between 0 and 20
y = np.sin(x+np.sin(x))
plt.plot(x,y)
plt.xlabel("input")
plt.ylabel("output")
plt.title("y = sin(x)")
plt.show()
# Scatter plot
x = np.random.randn(200,2) # for 2D scatterplot
plt.scatter(x[:,0],x[:,1]) # horizontal and vertical axes respectively
plt.show()
# if we want to take random matrix and shift some points farther manually
x = np.random.randn(200,2)
x[:50] += 3 # shifts first 50 row values by 3
C = np.zeros(200)
C[:85]=1 # labels - for colours
plt.scatter(x[:,0],x[:,1],c=C) # labelling scatterplot
plt.show()
# Histograms
x = np.random.randn(10000)
plt.hist(x)
plt.show()
plt.hist(x,bins=50) # setting no. of bars of histogram
plt.show()
'''
Image plotting using Matplotlib
from PIL import Image
img = Image.open('sed_cat.jpg')
print(img)
# convert to numpy array
arr = np.array(img)
print(arr)
print(arr.shape)
# first 2 dimens are length and width, the last dimen is for RGB
plt.imshow(arr);
# Convert to greyscale - taking mean about the colour dimen
grey = arr.mean(axis=2)
print(grey.shape)
plt.imshow(grey); # this prints a colour heatmap over the image
plt.imshow(grey,cmap='gray'); # this prints image in greyscale
'''
###Output
_____no_output_____
###Markdown
Pandas - data handling purposes
###Code
import pandas as pd
# fetching the data file
!wget https://raw.githubusercontent.com/lazyprogrammer/machine_learning_examples/master/tf2.0/sbux.csv
# Loading the data using pandas
df = pd.read_csv('sbux.csv')
# N.B. pasting the link as argument will also work
print(type(df))
# to get view of dataframe
df.head(10) # first 10 rows
df.tail(6) # last 6 rows
df.info() # all info of the dataframe
# pandas is most useful for handling tabulated data
df.columns = ['date', 'open', 'high', 'low', 'close', 'volume', 'name']
df.columns, df['open'], type(df['open']), type(df[['date','open']])
# one column of pandas is a 'Series' type whereas multiple cols form 'DataFrame' type
'''
to get a particular row - iloc() and loc()
df.iloc[index] is used for strictly integer indices
whereas df.loc[index] is for all types
'''
df.iloc[0],df.loc[1]['date']
# to make a specific column as index
df2 = pd.read_csv('sbux.csv',index_col='date') # this makes 'date' col as index
df2.head()
df2.loc['2013-02-14'], type(df2.loc['2013-02-14'])
# selecting rows with certain parameters (similar to SQL: SELECT...WHERE)
df[df['open'] > 64] # selects all rows where 'open' > 64
df[df['name'] != 'SBUX'] # selects all rows where 'name' is not 'SBUX'
df[df['high']-df['low'] > 1]
# to convert the dataframe into numpy array - strings have to be excluded
A = df.values # all cols taken
print(type(A))
print(A.dtype) # here dtype=object
A = df[['open','close','high','low']].values
print(type(A))
print(A.dtype) # here dtype is numeric (float)
# save editted data to a .csv file
small_df = df[['open','close']]
small_df.to_csv('output.csv') # saves the dataframe to a csv file
!head output.csv # linux command to print head of the csv file
small_df.to_csv('output.csv',index=False) # index column is removed
!head output.csv
# apply() - performing same operation on each row/column of a dataframe
# useful for preprocessing data before converting to numpy arrays
def date_to_year(row):
# extracts the year out of the datestamp provided in the dataframe
return int(row['date'].split('-')[0])
def range(row):
# returns a column containing difference of 'high' and 'low' values
return float(row['high']-row['low'])
df.apply(date_to_year,axis=1)
df['year'] = df.apply(date_to_year,axis=1)
df.head()
df['range'] = df.apply(range,axis=1)
df.head()
# Plotting with pandas
df['open'].hist(); # histogram for 'open' column
df[['open','high','low','close']].plot.box(); # box plot
###Output
_____no_output_____
###Markdown
Scipy - for scientific computing
###Code
# Statistics and probability
from scipy.stats import norm
x = np.linspace(-6,6,1000)
fx = norm.pdf(x,loc=0,scale=1) # standard normal pdf
plt.plot(x,fx);
Fx = norm.cdf(x,loc=0,scale=1) # standard normal cdf
plt.plot(x,Fx);
# also we have norm.logpdf etc
###Output
_____no_output_____
###Markdown
Convolution - a very fundamental operation in DL and Signal processing
###Code
from PIL import Image
!wget https://github.com/soumitri2001/CodeForces-Statistics-android-app/raw/master/app_src/screenshots/cf3.png
img = Image.open('image_2.png')
img = img.resize((800,800))
gray = np.mean(img,axis=2) # grayscale image
# 2D convolution works for 2D images, so we get rid of 3rd dimen
x = np.linspace(-6,6,50)
fx = norm.pdf(x,loc=0,scale=1) # normal pdf for the distribution
# outer product of fx with itself gives Gaussian filter
filt = np.outer(fx,fx)
plt.imshow(filt,cmap='gray');
# Convolution
from scipy.signal import convolve2d
out = convolve2d(gray,filt)
plt.subplot(1,2,1)
plt.imshow(gray,cmap='gray')
plt.subplot(1,2,2)
plt.imshow(out, cmap='gray')
plt.show();
# Scipy exercise - Edge Detection
Hx = np.array([[1,0,-1],
[2,0,-2],
[1,0,-1]])
Hy = np.array([[1,2,1],
[0,0,0],
[-1,-2,-1]])
Gx = convolve2d(gray,Hx)
Gy = convolve2d(gray,Hy)
G = np.sqrt(Gx**2 + Gy**2)
plt.imshow(G,cmap='gray');
###Output
_____no_output_____ |
tomo_nuevo.ipynb | ###Markdown
Test New Tomography Experiment ControlTest the OOP-based Tomography experiment control
###Code
import numpy as np
import matplotlib
import bluesky
import ophyd
import apstools
import databroker
from datetime import datetime
from seisidd.experiment import Tomography
# instantiate the experiment handle
testexp = Tomography()
testexp.mode = 'dryrun'
# expose internal RunEngine
RE = testexp.RE
# setup the Metadata as before
RE.md['beamline_id'] = 'APS 6-BM-A'
RE.md['versions'] = {}
RE.md['versions']['apstools'] = apstools.__version__
RE.md['versions']['bluesky'] = bluesky.__version__
RE.md['versions']['databroker'] = databroker.__version__
RE.md['versions']['matplotlib'] = matplotlib.__version__
RE.md['versions']['numpy'] = np.__version__
RE.md['versions']['ophyd'] = ophyd.__version__
RE.md['SESSION_STARTED'] = datetime.isoformat(datetime.now(), " ")
scan_cfg = 'seisidd/config/tomo_scan_template.yml'
# summarize plan
testexp.dryrun(scan_cfg)
testexp.run(scan_cfg)
###Output
Transient Scan ID: 2 Time: 2019-08-22 12:47:10
Persistent Unique Scan ID: 'e4c9c153-1090-47f1-bf23-fc246ac11792'
New stream: 'primary'
+-----------+------------+
| seq_num | time |
+-----------+------------+
| 1 | 12:47:16.2 |
| 2 | 12:47:22.3 |
| 3 | 12:47:25.4 |
| 4 | 12:47:28.4 |
| 5 | 12:47:31.5 |
| 6 | 12:47:34.6 |
| 7 | 12:47:37.6 |
| 8 | 12:47:40.7 |
| 9 | 12:47:43.7 |
| 10 | 12:47:46.8 |
| 11 | 12:47:49.9 |
| 12 | 12:47:52.9 |
| 13 | 12:47:57.5 |
| 14 | 12:48:02.1 |
+-----------+------------+
generator tomo_scan ['e4c9c153'] (scan num: 2)
###Markdown
Test New Tomography Experiment ControlTest the OOP-based Tomography experiment control
###Code
import numpy as np
import matplotlib
import bluesky
import ophyd
import apstools
import databroker
from datetime import datetime
from seisidd.experiment import Tomography
# instantiate the experiment handle
testexp = Tomography()
scan_cfg = 'seisidd/config/scan_template.yml'
# summarize plan
testexp.dryrun(scan_cfg)
testexp.run(scan_cfg)
testexp.RE.abort()
cfg
# expose internal RunEngine
RE = testexp.RE
# setup the Metadata as before
RE.md['beamline_id'] = 'APS 6'
RE.md['versions'] = {}
RE.md['versions']['apstools'] = apstools.__version__
RE.md['versions']['bluesky'] = bluesky.__version__
RE.md['versions']['databroker'] = databroker.__version__
RE.md['versions']['matplotlib'] = matplotlib.__version__
RE.md['versions']['numpy'] = np.__version__
RE.md['versions']['ophyd'] = ophyd.__version__
RE.md['SESSION_STARTED'] = datetime.isoformat(datetime.now(), " ")
scan_cfg = 'seisidd/config/scan_template.yml'
# summarize plan
testexp.dryrun(scan_cfg)
RE(scan_cfg)
det = testexp.tomo_det
det.cam.acquire_time.put(5)
det.cam.acquire_period.put(4)
###Output
_____no_output_____ |
site/en/r1/tutorials/distribute/tpu_custom_training.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling `fit` method).You should be able to understand what is a strategy and why it’s necessary in Tensorflow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model usually is trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
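# Illustrative sanity check: the global batch size must stay divisible by the
# 8 TPU cores so TPUStrategy can split each batch evenly across workers.
assert BATCH_SIZE % 8 == 0, 'BATCH_SIZE must be a multiple of 8'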
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scopeModel creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads] # scale each gradient tensor (grads is a Python list)
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the trainingIn order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling `fit` method).You should be able to understand what is a strategy and why it’s necessary in Tensorflow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model usually is trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scopeModel creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = grads * 2
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
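###Markdown
Because this notebook runs in graph mode, the metric updates and the optimizer update are ops that only execute when something depending on them is fetched; wrapping the returned loss in `tf.control_dependencies` is what forces them to run on every step. The cell below is a minimal standalone sketch of that pattern (illustrative only, not part of the original tutorial).
###Code
# Minimal sketch (not part of the original tutorial) of the
# control-dependency pattern used in train_step/test_step:
# fetching `result` also forces `side_effect` to run first.
side_effect = tf.print('updates would run here')
with tf.control_dependencies([side_effect]):
    result = tf.identity(tf.constant(0.0))
###Output
_____no_output_____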
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
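###Markdown
Each call to `strategy.experimental_run(...)` produces one result per replica, and `.values` unpacks them, so a single `session.run(dist_train)` executes one global training step across all cores. As an optional check (not part of the original tutorial), you could confirm how many per-replica loss tensors were created.
###Code
# Optional check (not part of the original tutorial): dist_train holds
# one loss tensor per TPU core, so its length equals the replica count.
print('Per-replica loss tensors in dist_train: {}'.format(len(dist_train)))
###Output
_____no_output_____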
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it’s necessary in TensorFlow. This will help you switch between CPUs, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, a convolutional neural network is a natural choice for the labeled image data. You will build it with the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
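###Markdown
If you want to double-check the layer shapes before wiring the model into the TPU training loop, you could build a throwaway instance on the host and print its summary (an optional step, not part of the original tutorial).
###Code
# Optional check (not part of the original tutorial): build a throwaway
# copy of the model outside the strategy scope and print its layer shapes.
create_model((28, 28, 1)).summary()
###Output
_____no_output_____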
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it’s necessary in TensorFlow. This will help you switch between CPUs, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, a convolutional neural network is a natural choice for the labeled image data. You will build it with the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it’s necessary in TensorFlow. This will help you switch between CPUs, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
import tensorflow as tf
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, a convolutional neural network is a natural choice for the labeled image data. You will build it with the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.contrib.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.contrib.distribute.initialize_tpu_system(resolver)
strategy = tf.contrib.distribute.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initialize()
test_iterator_init = test_iterator.initialize()
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it’s necessary in TensorFlow. This will help you switch between CPUs, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, a convolutional neural network is a natural choice for the labeled image data. You will build it with the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it’s necessary in TensorFlow. This will help you switch between CPUs, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, a convolutional neural network is a natural choice for the labeled image data. You will build it with the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configuredto run in TF2's [compatbility mode](https://www.tensorflow.org/guide/migrate)but will run in TF1 as well. To use TF1 in Colab, use the[%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb)magic. This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling `fit` method).You should be able to understand what is a strategy and why it’s necessary in Tensorflow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model usually is trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
NUM_EPOCHS = 5
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scopeModel creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
grads = grads * 2
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it is necessary in TensorFlow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# Import TensorFlow
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow.compat.v1 as tf
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
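# Added sanity check (a sketch, not from the original tutorial): it assumes the
# strategy exposes `num_replicas_in_sync`, which should equal the number of
# TPU cores (8 here), matching the divisibility note above.
assert BATCH_SIZE % strategy.num_replicas_in_sync == 0, (
    'BATCH_SIZE must be divisible by the number of TPU cores')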
NUM_EPOCHS = 5
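# Note (added comment): the steps-per-epoch values below are informational
# only; the training loop later in this notebook ends each epoch when the
# dataset iterator raises tf.errors.OutOfRangeError rather than counting steps.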
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
    grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
  test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
  test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it is necessary in TensorFlow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
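# Added sanity check (a sketch, not from the original tutorial): it assumes the
# strategy exposes `num_replicas_in_sync`, which should equal the number of
# TPU cores (8 here), matching the divisibility note above.
assert BATCH_SIZE % strategy.num_replicas_in_sync == 0, (
    'BATCH_SIZE must be divisible by the number of TPU cores')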
NUM_EPOCHS = 5
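# Note (added comment): the steps-per-epoch values below are informational
# only; the training loop later in this notebook ends each epoch when the
# dataset iterator raises tf.errors.OutOfRangeError rather than counting steps.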
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
    grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____
###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Custom training with TPUs Run in Google Colab View source on GitHub > Note: This is an archived TF1 notebook. These are configured to run in TF2's [compatibility mode](https://www.tensorflow.org/guide/migrate) but will run in TF1 as well. To use TF1 in Colab, use the [%tensorflow_version 1.x](https://colab.research.google.com/notebooks/tensorflow_version.ipynb) magic. This tutorial will take you through using [tf.distribute.experimental.TPUStrategy](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/TPUStrategy). This is a new strategy, a part of `tf.distribute.Strategy`, that allows users to easily switch their model to using TPUs. As part of this tutorial, you will create a Keras model and take it through a custom training loop (instead of calling the `fit` method). You should be able to understand what a strategy is and why it is necessary in TensorFlow. This will help you switch between CPU, GPUs, and other device configurations more easily once you understand the strategy framework. To make the introduction easier, you will also make a Keras model that produces a simple convolutional neural network. A Keras model is usually trained in one line of code (by calling its `fit` method), but because some users require additional customization, we showcase how to use custom training loops. Distribution Strategy was originally written by DeepMind -- you can [read the story here](https://deepmind.com/blog/tf-replicator-distributed-machine-learning/).
###Code
# Import TensorFlow
import tensorflow.compat.v1 as tf
tf.compat.v1.disable_eager_execution()
# Helper libraries
import numpy as np
import os
assert os.environ['COLAB_TPU_ADDR'], 'Make sure to select TPU from Edit > Notebook settings > Hardware accelerator'
assert float('.'.join(tf.__version__.split('.')[:2])) >= 1.14, 'Make sure that Tensorflow version is at least 1.14'
TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']
###Output
_____no_output_____
###Markdown
Create model Since you will be working with the [MNIST data](https://en.wikipedia.org/wiki/MNIST_database), which is a collection of 70,000 greyscale images representing digits, you want to be using a convolutional neural network to help us with the labeled image data. You will use the Keras API.
###Code
def create_model(input_shape):
"""Creates a simple convolutional neural network model using the Keras API"""
return tf.keras.Sequential([
tf.keras.layers.Conv2D(28, kernel_size=(3, 3), activation='relu', input_shape=input_shape),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
###Output
_____no_output_____
###Markdown
Loss and gradient Since you are preparing to use a custom training loop, you need to explicitly write down the loss and gradient functions.
###Code
def loss(model, x, y):
"""Calculates the loss given an example (x, y)"""
logits = model(x)
return logits, tf.losses.sparse_softmax_cross_entropy(labels=y, logits=logits)
def grad(model, x, y):
"""Calculates the loss and the gradients given an example (x, y)"""
logits, loss_value = loss(model, x, y)
return logits, loss_value, tf.gradients(loss_value, model.trainable_variables)
###Output
_____no_output_____
###Markdown
Main function Previous sections highlighted the most important parts of the tutorial. The following code block gives a complete and runnable example of using TPUStrategy with a Keras model and a custom training loop.
###Code
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu=TPU_WORKER)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)
# Load MNIST training and test data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# All MNIST examples are 28x28 pixel greyscale images (hence the 1
# for the number of channels).
input_shape = (28, 28, 1)
# Only specific data types are supported on the TPU, so it is important to
# pay attention to these.
# More information:
# https://cloud.google.com/tpu/docs/troubleshooting#unsupported_data_type
x_train = x_train.reshape(x_train.shape[0], *input_shape).astype(np.float32)
x_test = x_test.reshape(x_test.shape[0], *input_shape).astype(np.float32)
y_train, y_test = y_train.astype(np.int64), y_test.astype(np.int64)
# The batch size must be divisible by the number of workers (8 workers),
# so batch sizes of 8, 16, 24, 32, ... are supported.
BATCH_SIZE = 32
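# Added sanity check (a sketch, not from the original tutorial): it assumes the
# strategy exposes `num_replicas_in_sync`, which should equal the number of
# TPU cores (8 here), matching the divisibility note above.
assert BATCH_SIZE % strategy.num_replicas_in_sync == 0, (
    'BATCH_SIZE must be divisible by the number of TPU cores')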
NUM_EPOCHS = 5
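# Note (added comment): the steps-per-epoch values below are informational
# only; the training loop later in this notebook ends each epoch when the
# dataset iterator raises tf.errors.OutOfRangeError rather than counting steps.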
train_steps_per_epoch = len(x_train) // BATCH_SIZE
test_steps_per_epoch = len(x_test) // BATCH_SIZE
###Output
_____no_output_____
###Markdown
Start by creating objects within the strategy's scope Model creation, optimizer creation, etc. must be written in the context of strategy.scope() in order to use TPUStrategy. Also initialize metrics for the train and test sets. More information: `keras.metrics.Mean` and `keras.metrics.SparseCategoricalAccuracy`
###Code
with strategy.scope():
model = create_model(input_shape)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
test_loss = tf.keras.metrics.Mean('test_loss', dtype=tf.float32)
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'test_accuracy', dtype=tf.float32)
###Output
_____no_output_____
###Markdown
Define custom train and test steps
###Code
with strategy.scope():
def train_step(inputs):
"""Each training step runs this custom function which calculates
gradients and updates weights.
"""
x, y = inputs
logits, loss_value, grads = grad(model, x, y)
update_loss = training_loss.update_state(loss_value)
update_accuracy = training_accuracy.update_state(y, logits)
# Show that this is truly a custom training loop
# Multiply all gradients by 2.
    grads = [g * 2 for g in grads]
update_vars = optimizer.apply_gradients(
zip(grads, model.trainable_variables))
with tf.control_dependencies([update_vars, update_loss, update_accuracy]):
return tf.identity(loss_value)
def test_step(inputs):
"""Each training step runs this custom function"""
x, y = inputs
logits, loss_value = loss(model, x, y)
update_loss = test_loss.update_state(loss_value)
update_accuracy = test_accuracy.update_state(y, logits)
with tf.control_dependencies([update_loss, update_accuracy]):
return tf.identity(loss_value)
###Output
_____no_output_____
###Markdown
Do the training In order to make the reading a little bit easier, the full training loop calls two helper functions, `run_train()` and `run_test()`.
###Code
def run_train():
# Train
session.run(train_iterator_init)
while True:
try:
session.run(dist_train)
except tf.errors.OutOfRangeError:
break
print('Train loss: {:0.4f}\t Train accuracy: {:0.4f}%'.format(
session.run(training_loss_result),
session.run(training_accuracy_result) * 100))
training_loss.reset_states()
training_accuracy.reset_states()
def run_test():
# Test
session.run(test_iterator_init)
while True:
try:
session.run(dist_test)
except tf.errors.OutOfRangeError:
break
print('Test loss: {:0.4f}\t Test accuracy: {:0.4f}%'.format(
session.run(test_loss_result),
session.run(test_accuracy_result) * 100))
test_loss.reset_states()
test_accuracy.reset_states()
with strategy.scope():
training_loss_result = training_loss.result()
training_accuracy_result = training_accuracy.result()
test_loss_result = test_loss.result()
test_accuracy_result = test_accuracy.result()
config = tf.ConfigProto()
config.allow_soft_placement = True
cluster_spec = resolver.cluster_spec()
if cluster_spec:
config.cluster_def.CopyFrom(cluster_spec.as_cluster_def())
print('Starting training...')
# Do all the computations inside a Session (as opposed to doing eager mode)
with tf.Session(target=resolver.master(), config=config) as session:
all_variables = (
tf.global_variables() + training_loss.variables +
training_accuracy.variables + test_loss.variables +
test_accuracy.variables)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(BATCH_SIZE, drop_remainder=True)
train_iterator = strategy.make_dataset_iterator(train_dataset)
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE, drop_remainder=True)
test_iterator = strategy.make_dataset_iterator(test_dataset)
train_iterator_init = train_iterator.initializer
test_iterator_init = test_iterator.initializer
session.run([v.initializer for v in all_variables])
dist_train = strategy.experimental_run(train_step, train_iterator).values
dist_test = strategy.experimental_run(test_step, test_iterator).values
# Custom training loop
for epoch in range(0, NUM_EPOCHS):
print('Starting epoch {}'.format(epoch))
run_train()
run_test()
###Output
_____no_output_____ |
examples/notebooks/3d_image_transforms.ipynb | ###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.1.0+109.ge324eb5.dirty
Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Numpy version: 1.17.4
Pytorch version: 1.5.0
Ignite version: 0.3.0
###Markdown
Data sources Starting from a list of filenames, the following is a simple Python script to group pairs of image and label from the `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr` folders.
###Code
data_root = '/workspace/data/medical/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,
```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
```
The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print('input:', train_data_dicts[0]['image'])
print('image shape', image.shape)
print('image affine', metadata['affine'])
print('image pixdim', metadata['pixdim'])
###Output
input: /workspace/data/medical/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape (512, 512, 55)
image affine [[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim [1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs is being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print('input:', train_data_dicts[0])
print('image shape', data_dict['image'].shape)
print('label shape', data_dict['label'].shape)
print('image pixdim', data_dict['image.pixdim'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape `[num_channels, spatial_dim_1, spatial_dim_2, ..., spatial_dim_n]` so that they can be interpreted consistently (as "channel-first" is commonly used in PyTorch). Here the input image has shape `(512, 512, 55)`, which isn't in the acceptable shape (missing the channel dimension), so we create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print('image shape', datac_dict['image'].shape)
###Output
image shape (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes. The following transform is created to normalise the volumes to have a (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 5.), interp_order=('bilinear', 'nearest'))
data_dict = spacing(datac_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:
- An `image.original_affine` key is added to the `data_dict`; it logs the original affine.
- The `image.affine` key is updated to hold the current affine.
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
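###Markdown
As a quick check of the bookkeeping described above, the two affine keys can be printed side by side. This cell is a small added sketch; it only uses the `image.original_affine` and `image.affine` keys that the text above says `Spacingd` maintains.
###Code
# Compare the affine stored from the NIfTI header with the affine after
# resampling to (1.5, 1.5, 5.) millimetre spacing.
print('original affine:\n', data_dict['image.original_affine'])
print('updated affine:\n', data_dict['image.affine'])
###Output
_____no_output_____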
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have the 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch. The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in the x, y, and z axes respectively. The translation is relative to the image centre. The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around the x and y axes. The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print('image shape', affined_data_dict['image'].shape)
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 15], cmap='gray')
plt.show()
###Output
image shape torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU); `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print('image shape', deformed_data_dict['image'].shape)
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 5], cmap='gray')
plt.show()
###Output
image shape (1, 300, 300, 10)
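###Markdown
The imports at the top of this notebook also include `ScaleIntensityRanged`, an intensity transform that was not exercised above. The cell below is a small added sketch, not part of the original walkthrough: the CT window values (-57, 164) and the keyword names follow common MONAI usage and are assumptions here.
###Code
# Sketch: clip an assumed CT intensity window and rescale it into [0, 1].
scale_intensity = ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164,
                                       b_min=0.0, b_max=1.0, clip=True)
scaled_data_dict = scale_intensity(data_dict)
print('scaled intensity range:',
      scaled_data_dict['image'].min(), scaled_data_dict['image'].max())
###Output
_____no_output_____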
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.0.1
Python version: 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 16:30:03) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version: 1.18.2
Pytorch version: 1.4.0
Ignite version: 0.3.0
###Markdown
Data sources Starting from a list of filenames, the following is a simple Python script to group pairs of image and label from the `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr` folders.
###Code
data_root = 'temp/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,
```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
```
The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print('input:', train_data_dicts[0]['image'])
print('image shape', image.shape)
print('image affine', metadata['affine'])
print('image pixdim', metadata['pixdim'])
###Output
input: temp/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape (512, 512, 55)
image affine [[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim [1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs is being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print('input:', train_data_dicts[0])
print('image shape', data_dict['image'].shape)
print('label shape', data_dict['label'].shape)
print('image pixdim', data_dict['image.pixdim'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape `[num_channels, spatial_dim_1, spatial_dim_2, ..., spatial_dim_n]` so that they can be interpreted consistently (as "channel-first" is commonly used in PyTorch). Here the input image has shape `(512, 512, 55)`, which isn't in the acceptable shape (missing the channel dimension), so we create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print('image shape', datac_dict['image'].shape)
###Output
image shape (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes. The following transform is created to normalise the volumes to have a (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'],
pixdim=(1.5, 1.5, 5.), interp_order=(2, 0), mode='nearest')
data_dict = spacing(datac_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:
- An `image.original_affine` key is added to the `data_dict`; it logs the original affine.
- The `image.affine` key is updated to hold the current affine.
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
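###Markdown
As a quick check of the bookkeeping described above, the two affine keys can be printed side by side. This cell is a small added sketch; it only uses the `image.original_affine` and `image.affine` keys that the text above says `Spacingd` maintains.
###Code
# Compare the affine stored from the NIfTI header with the affine after
# resampling to (1.5, 1.5, 5.) millimetre spacing.
print('original affine:\n', data_dict['image.original_affine'])
print('updated affine:\n', data_dict['image.affine'])
###Output
_____no_output_____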
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have the 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch. The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in the x, y, and z axes respectively. The translation is relative to the image centre. The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around the x and y axes. The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print('image shape', affined_data_dict['image'].shape)
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 15], cmap='gray')
plt.show()
###Output
image shape torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU); `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print('image shape', deformed_data_dict['image'].shape)
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 5], cmap='gray')
plt.show()
###Output
image shape (1, 300, 300, 10)
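###Markdown
The imports at the top of this notebook also include `ScaleIntensityRanged`, an intensity transform that was not exercised above. The cell below is a small added sketch, not part of the original walkthrough: the CT window values (-57, 164) and the keyword names follow common MONAI usage and are assumptions here.
###Code
# Sketch: clip an assumed CT intensity window and rescale it into [0, 1].
scale_intensity = ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164,
                                       b_min=0.0, b_max=1.0, clip=True)
scaled_data_dict = scale_intensity(data_dict)
print('scaled intensity range:',
      scaled_data_dict['image'].min(), scaled_data_dict['image'].max())
###Output
_____no_output_____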
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
# assumes the framework is found here, change as necessary
sys.path.append("../..")
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.0.1
Python version: 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 16:30:03) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version: 1.18.2
Pytorch version: 1.4.0
Ignite version: 0.3.0
###Markdown
Data sources Starting from a list of filenames, the following is a simple Python script to group pairs of image and label from the `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr` folders.
###Code
data_root = 'temp/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,
```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
```
The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print('input:', train_data_dicts[0]['image'])
print('image shape', image.shape)
print('image affine', metadata['affine'])
print('image pixdim', metadata['pixdim'])
###Output
input: temp/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape (512, 512, 55)
image affine [[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim [1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs is being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print('input:', train_data_dicts[0])
print('image shape', data_dict['image'].shape)
print('label shape', data_dict['label'].shape)
print('image pixdim', data_dict['image.pixdim'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape `[num_channels, spatial_dim_1, spatial_dim_2, ..., spatial_dim_n]` so that they can be interpreted consistently (as "channel-first" is commonly used in PyTorch). Here the input image has shape `(512, 512, 55)`, which isn't in the acceptable shape (missing the channel dimension), so we create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print('image shape', datac_dict['image'].shape)
###Output
image shape (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes. The following transform is created to normalise the volumes to have a (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'],
pixdim=(1.5, 1.5, 5.), interp_order=(2, 0), mode='nearest')
data_dict = spacing(datac_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:- An `image.original_affine` key is added to the `data_dict`, logging the original affine.- An `image.affine` key is updated to hold the current affine.
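As a quick sanity check (a minimal sketch relying on the `data_dict` and the keys described above), the two affines can be printed side by side:

```python
# Sketch: compare the recorded affines; the diagonal of the current affine
# gives the new voxel size (expected to be 1.5, 1.5, 5.).
print('original affine\n', data_dict['image.original_affine'])
print('current affine\n', data_dict['image.affine'])
print('current voxel size:', np.diag(data_dict['image.affine'])[:3])
```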
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch.The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively.The translation is relative to the image centre.The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes.The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print('image shape', affined_data_dict['image'].shape)
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 15], cmap='gray')
plt.show()
###Output
image shape torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU). `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image look unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print('image shape', deformed_data_dict['image'].shape)
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 5], cmap='gray')
plt.show()
###Output
image shape (1, 300, 300, 10)
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.1.0+109.ge324eb5.dirty
Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Numpy version: 1.17.4
Pytorch version: 1.5.0
Ignite version: 0.3.0
###Markdown
Data sourcesStarting from a list of filenames. The following is a simple python scriptto group pairs of image and label from `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr`folder.
###Code
data_root = '/workspace/data/medical/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,

```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
    pass
```

The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print('input:', train_data_dicts[0]['image'])
print('image shape', image.shape)
print('image affine', metadata['affine'])
print('image pixdim', metadata['pixdim'])
###Output
input: /workspace/data/medical/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape (512, 512, 55)
image affine [[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim [1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs are being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print('input:', train_data_dicts[0])
print('image shape', data_dict['image'].shape)
print('label shape', data_dict['label'].shape)
print('image pixdim', data_dict['image.pixdim'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape:`[num_channels, spatial_dim_1, spatial_dim_2, ... ,spatial_dim_n]`so that they could be interpreted consistently (as "channel-first" is commonly used in PyTorch).Here the input image has shape `(512, 512, 55)` which isn't in the acceptable shape (missing the channel dimension),we therefore create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print('image shape', datac_dict['image'].shape)
###Output
image shape (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes.The following transform is created to normalise the volumes to have (1.5, 1.5, 5.) millimetre voxel size.The transform is set to read the original voxel size information from `data_dict['image.affine']`,which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 5.), mode=('bilinear', 'nearest'))
data_dict = spacing(datac_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:- An `image.original_affine` key is added to the `data_dict`, logging the original affine.- An `image.affine` key is updated to hold the current affine.
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch.The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively.The translation is relative to the image centre.The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes.The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print('image shape', affined_data_dict['image'].shape)
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 15], cmap='gray')
plt.show()
###Output
image shape torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU). `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image look unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print('image shape', deformed_data_dict['image'].shape)
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 5], cmap='gray')
plt.show()
###Output
image shape (1, 300, 300, 10)
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.[](https://colab.research.google.com/github/Project-MONAI/MONAI/blob/master/examples/notebooks/3d_image_transforms.ipynb) Setup environment
###Code
%pip install -qU "monai[gdown, nibabel]"
%pip install -qU matplotlib
%matplotlib inline
###Output
Note: you may need to restart the kernel to use updated packages.
###Markdown
Setup imports
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
import shutil
import tempfile
import matplotlib.pyplot as plt
import numpy as np
from monai.apps import download_and_extract
from monai.config import print_config
from monai.transforms import (
AddChanneld,
LoadNifti,
LoadNiftid,
Orientationd,
Rand3DElasticd,
RandAffined,
Spacingd,
)
print_config()
###Output
MONAI version: 0.2.0
Python version: 3.7.5 (default, Nov 7 2019, 10:50:52) [GCC 8.3.0]
Numpy version: 1.19.1
Pytorch version: 1.6.0
Optional dependencies:
Pytorch Ignite version: NOT INSTALLED or UNKNOWN VERSION.
Nibabel version: 3.1.1
scikit-image version: NOT INSTALLED or UNKNOWN VERSION.
Pillow version: 7.2.0
Tensorboard version: NOT INSTALLED or UNKNOWN VERSION.
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Setup data directory You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable. This allows you to save results and reuse downloads. If not specified, a temporary directory will be used.
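For example, a persistent location could be set from a cell before the next one runs (a minimal sketch; the path below is only a placeholder):

```python
# Optional sketch: point MONAI at a persistent data directory (placeholder path, adjust to your setup).
import os
os.environ["MONAI_DATA_DIRECTORY"] = "/path/to/monai_data"
```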
###Code
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
###Output
/home/bengorman/notebooks/
###Markdown
Download datasetDownloads and extracts the dataset. The dataset comes from http://medicaldecathlon.com/.
###Code
resource = "https://drive.google.com/uc?id=1jzeNU1EKnK81PyTsrx0ujfNl-t0Jo8uE"
md5 = "410d4a301da4e5b2f6f86ec3ddba524e"
compressed_file = os.path.join(root_dir, "Task09_Spleen.tar")
data_dir = os.path.join(root_dir, "Task09_Spleen")
download_and_extract(resource, compressed_file, root_dir, md5)
###Output
file /home/bengorman/notebooks/Task09_Spleen.tar exists, skip downloading.
extracted file /home/bengorman/notebooks/Task09_Spleen exists, skip extracting.
###Markdown
Set MSD Spleen dataset pathThe following groups images and labels from `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr` into pairs.
###Code
train_images = sorted(glob.glob(os.path.join(data_dir, "imagesTr", "*.nii.gz")))
train_labels = sorted(glob.glob(os.path.join(data_dir, "labelsTr", "*.nii.gz")))
data_dicts = [
{"image": image_name, "label": label_name}
for image_name, label_name in zip(train_images, train_labels)
]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,

```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
    pass
```

The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a `NIfTI` filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]["image"])
print(f"input: {train_data_dicts[0]['image']}")
print(f"image shape: {image.shape}")
print(f"image affine:\n{metadata['affine']}")
print(f"image pixdim:\n{metadata['pixdim']}")
###Output
input: /home/bengorman/notebooks/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape: (512, 512, 55)
image affine:
[[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim:
[1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs are being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=("image", "label"))
data_dict = loader(train_data_dicts[0])
print(f"input:, {train_data_dicts[0]}")
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image pixdim:\n{data_dict['image_meta_dict']['pixdim']}")
image, label = data_dict["image"], data_dict["label"]
plt.figure("visualize", (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 30], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimensionMost of MONAI's image transformations assume that the input data has the shape: `[num_channels, spatial_dim_1, spatial_dim_2, ... ,spatial_dim_n]` so that they could be interpreted consistently (as "channel-first" is commonly used in PyTorch). Here the input image has shape `(512, 512, 55)` which isn't in the acceptable shape (missing the channel dimension), we therefore create a transform which is called to update the shape:
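For intuition, the same reshaping could be done by hand with NumPy; a minimal sketch (not used by the pipeline, the `AddChanneld` call below handles it for every key):

```python
# Sketch: prepend a channel axis to a channel-less volume.
vol = np.zeros((512, 512, 55), dtype=np.float32)  # stand-in for the loaded image
print(vol[None].shape)  # (1, 512, 512, 55) -- the layout AddChanneld produces
```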
###Code
add_channel = AddChanneld(keys=["image", "label"])
datac_dict = add_channel(data_dict)
print(f"image shape: {datac_dict['image'].shape}")
###Output
image shape: (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel sizeThe input volumes might have different voxel sizes. The following transform is created to normalise the volumes to have (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 5.0), mode=("bilinear", "nearest"))
data_dict = spacing(datac_dict)
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}")
print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}")
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing:
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing:
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the metadata in data_dict was updated by `Spacingd`:* An `original_affine` entry is added, logging the original affine.* The `affine` entry is updated to hold the current affine (both are accessed via `image_meta_dict` in this version, as in the cell above).
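A quick numeric check (a sketch using the `image_meta_dict` access pattern from the cell above): the new voxel size can be read off the diagonal of the updated affine:

```python
# Sketch: the diagonal of the current affine gives the resampled voxel size.
current_affine = data_dict["image_meta_dict"]["affine"]
print("current voxel size:", np.diag(current_affine)[:3])  # expected: [1.5 1.5 5.]
```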
###Code
image, label = data_dict["image"], data_dict["label"]
plt.figure("visualise", (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[0, :, :, 30], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation:
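After running the cell below, the result could be verified directly from the updated affine; a minimal sketch using `nibabel` (already required for NIfTI loading):

```python
# Sketch: derive the axis codes implied by the current affine (run after the cell below).
import nibabel as nib
print(nib.aff2axcodes(data_dict["image_meta_dict"]["affine"]))  # expected: ('P', 'L', 'I')
```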
###Code
spacing = Orientationd(keys=["image", "label"], axcodes="PLI")
data_dict = spacing(data_dict)
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}")
print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}")
image, label = data_dict["image"], data_dict["label"]
plt.figure("visualise", (8, 4))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[0, :, :, 30], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformationThe following affine transformation is defined to output a (300, 300, 50) image patch. The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively. The translation is relative to the image centre. The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes. The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(
keys=["image", "label"],
mode=("bilinear", "nearest"),
prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi / 36, np.pi / 36, np.pi * 4),
scale_range=(0.15, 0.15, 0.15),
padding_mode="border",
)
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print(f"image shape: {affined_data_dict['image'].shape}")
image, label = affined_data_dict["image"][0], affined_data_dict["label"][0]
plt.figure("visualise", (12, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 15], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 15])
plt.show()
###Output
image shape: torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU). `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image look unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=["image", "label"],
mode=("bilinear", "nearest"),
prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi / 36, np.pi / 36, np.pi * 2),
scale_range=(0.15, 0.15, 0.15),
padding_mode="border",
)
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print(f"image shape: {deformed_data_dict['image'].shape}")
image, label = deformed_data_dict["image"][0], deformed_data_dict["label"][0]
plt.figure("visualise", (12, 6))
plt.subplot(1, 2, 1)
plt.title("image")
plt.imshow(image[:, :, 5], cmap="gray")
plt.subplot(1, 2, 2)
plt.title("label")
plt.imshow(label[:, :, 5])
plt.show()
###Output
image shape: (1, 300, 300, 10)
###Markdown
Cleanup data directoryRemove directory if a temporary was used.
###Code
if directory is None:
shutil.rmtree(root_dir)
###Output
_____no_output_____
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.0.1
Python version: 3.5.6 |Anaconda, Inc.| (default, Aug 26 2018, 16:30:03) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]
Numpy version: 1.18.2
Pytorch version: 1.4.0
Ignite version: 0.3.0
###Markdown
Data sourcesStarting from a list of filenames. The following is a simple python scriptto group pairs of image and label from `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr`folder.
###Code
data_root = 'temp/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,

```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
    pass
```

The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print('input:', train_data_dicts[0]['image'])
print('image shape', image.shape)
print('image affine', metadata['affine'])
print('image pixdim', metadata['pixdim'])
###Output
input: temp/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape (512, 512, 55)
image affine [[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim [1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs are being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print('input:', train_data_dicts[0])
print('image shape', data_dict['image'].shape)
print('label shape', data_dict['label'].shape)
print('image pixdim', data_dict['image.pixdim'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape: `[num_channels, spatial_dim_1, spatial_dim_2, ... ,spatial_dim_n]` so that they could be interpreted consistently (as "channel-first" is commonly used in PyTorch). Here the input image has shape `(512, 512, 55)`, which isn't in the acceptable shape (missing the channel dimension), so we therefore create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print('image shape', datac_dict['image'].shape)
###Output
image shape (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes. The following transform is created to normalise the volumes to have a (1.5, 1.5, 5.) millimetre voxel size. The transform is set to read the original voxel size information from `data_dict['image.affine']`, which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'],
pixdim=(1.5, 1.5, 5.), interp_order=(2, 0), mode='nearest')
data_dict = spacing(datac_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:- An `image.original_affine` key is added to the `data_dict`, logging the original affine.- An `image.affine` key is updated to hold the current affine.
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print('image shape:', data_dict['image'].shape)
print('label shape:', data_dict['label'].shape)
print('image affine after Spacing\n', data_dict['image.affine'])
print('label affine after Spacing\n', data_dict['label.affine'])
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch.The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively.The translation is relative to the image centre.The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes.The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print('image shape', affined_data_dict['image'].shape)
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 15], cmap='gray')
plt.show()
###Output
image shape torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU). `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image look unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print('image shape', deformed_data_dict['image'].shape)
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 5], cmap='gray')
plt.show()
###Output
image shape (1, 300, 300, 10)
###Markdown
Overview This notebook introduces you to MONAI's image transformation module.
###Code
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import numpy as np
import torch
from torch.utils.data import DataLoader
import matplotlib.pyplot as plt
import monai
from monai.transforms import \
LoadNifti, LoadNiftid, AddChanneld, ScaleIntensityRanged, \
Rand3DElasticd, RandAffined, \
Spacingd, Orientationd
monai.config.print_config()
###Output
MONAI version: 0.1.0+236.g2bf147e.dirty
Python version: 3.6.9 |Anaconda, Inc.| (default, Jul 30 2019, 19:07:31) [GCC 7.3.0]
Numpy version: 1.17.4
Pytorch version: 1.5.1
Optional dependencies:
Pytorch Ignite version: 0.3.0
Nibabel version: 3.0.1
scikit-image version: 0.15.0
Pillow version: 7.2.0
Tensorboard version: 2.1.0
For details about installing the optional dependencies, please visit:
https://docs.monai.io/en/latest/installation.html#installing-the-recommended-dependencies
###Markdown
Data sourcesStarting from a list of filenames. The Spleen dataset can be downloaded from http://medicaldecathlon.com/.The following is a simple python scriptto group pairs of image and label from `Task09_Spleen/imagesTr` and `Task09_Spleen/labelsTr`folder.
###Code
data_root = '/workspace/data/medical/Task09_Spleen'
import os
import glob
train_images = sorted(glob.glob(os.path.join(data_root, 'imagesTr', '*.nii.gz')))
train_labels = sorted(glob.glob(os.path.join(data_root, 'labelsTr', '*.nii.gz')))
data_dicts = [{'image': image_name, 'label': label_name}
for image_name, label_name in zip(train_images, train_labels)]
train_data_dicts, val_data_dicts = data_dicts[:-9], data_dicts[-9:]
###Output
_____no_output_____
###Markdown
The image file names are organised into a list of dictionaries.
###Code
train_data_dicts[0]
###Output
_____no_output_____
###Markdown
The list of data dictionaries, `train_data_dicts`, could be used by PyTorch's data loader. For example,

```python
from torch.utils.data import DataLoader

data_loader = DataLoader(train_data_dicts)
for training_sample in data_loader:
    # run the deep learning training with training_sample
    pass
```

The rest of this tutorial presents a set of "transforms" converting `train_data_dict` into data arrays that will eventually be consumed by the deep learning models. Load the NIfTI files One design choice of MONAI is that it provides not only the high-level workflow components, but also relatively lower level APIs in their minimal functioning form. For example, a `LoadNifti` class is a simple callable wrapper of the underlying `Nibabel` image loader. After constructing the loader with a few necessary system parameters, calling the loader instance with a NIfTI filename will return the image data arrays, as well as the metadata -- such as affine information and voxel sizes.
###Code
loader = LoadNifti(dtype=np.float32)
image, metadata = loader(train_data_dicts[0]['image'])
print(f"input: {train_data_dicts[0]['image']}")
print(f"image shape: {image.shape}")
print(f"image affine:\n{metadata['affine']}")
print(f"image pixdim:\n{metadata['pixdim']}")
###Output
input: /workspace/data/medical/Task09_Spleen/imagesTr/spleen_10.nii.gz
image shape: (512, 512, 55)
image affine:
[[ 0.97656202 0. 0. -499.02319336]
[ 0. 0.97656202 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
image pixdim:
[1. 0.976562 0.976562 5. 0. 0. 0. 0. ]
###Markdown
Oftentimes, we want to load a group of inputs as a training sample. For example, training a supervised image segmentation network requires a pair of image and label as a training sample. To ensure a group of inputs are being preprocessed consistently, MONAI also provides dictionary-based interfaces for the minimal functioning transforms. `LoadNiftid` is the corresponding dict-based version of `LoadNifti`:
###Code
loader = LoadNiftid(keys=('image', 'label'))
data_dict = loader(train_data_dicts[0])
print(f"input:, {train_data_dicts[0]}")
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image pixdim:\n{data_dict['image_meta_dict']['pixdim']}")
image, label = data_dict['image'], data_dict['label']
plt.figure('visualize', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Add the channel dimension Most of MONAI's image transformations assume that the input data has the shape:`[num_channels, spatial_dim_1, spatial_dim_2, ... ,spatial_dim_n]`so that they could be interpreted consistently (as "channel-first" is commonly used in PyTorch).Here the input image has shape `(512, 512, 55)` which isn't in the acceptable shape (missing the channel dimension),we therefore create a transform which is called to update the shape:
###Code
add_channel = AddChanneld(keys=['image', 'label'])
datac_dict = add_channel(data_dict)
print(f"image shape: {datac_dict['image'].shape}")
###Output
image shape: (1, 512, 512, 55)
###Markdown
Now we are ready to do some intensity and spatial transforms. Resample to a consistent voxel size The input volumes might have different voxel sizes.The following transform is created to normalise the volumes to have (1.5, 1.5, 5.) millimetre voxel size.The transform is set to read the original voxel size information from `data_dict['image.affine']`,which is from the corresponding NIfTI file, loaded earlier by `LoadNiftid`.
###Code
spacing = Spacingd(keys=['image', 'label'], pixdim=(1.5, 1.5, 5.), mode=('bilinear', 'nearest'))
data_dict = spacing(datac_dict)
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}")
print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}")
###Output
image shape: (1, 334, 334, 55)
label shape: (1, 334, 334, 55)
image affine after Spacing:
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
label affine after Spacing:
[[ 1.5 0. 0. -499.02319336]
[ 0. 1.5 0. -499.02319336]
[ 0. 0. 5. 0. ]
[ 0. 0. 0. 1. ]]
###Markdown
To track the spacing changes, the data_dict was updated by `Spacingd`:- An `image.original_affine` key is added to the `data_dict`, logging the original affine.- An `image.affine` key is updated to hold the current affine.
###Code
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Reorientation to designated axes codes Sometimes it is nice to have all the input volumes in a consistent axes orientation. The default axis labels are Left (L), Right (R), Posterior (P), Anterior (A), Inferior (I), Superior (S). The following transform is created to reorientate the volumes to have 'Posterior, Left, Inferior' (PLI) orientation:
###Code
spacing = Orientationd(keys=['image', 'label'], axcodes='PLI')
data_dict = spacing(data_dict)
print(f"image shape: {data_dict['image'].shape}")
print(f"label shape: {data_dict['label'].shape}")
print(f"image affine after Spacing:\n{data_dict['image_meta_dict']['affine']}")
print(f"label affine after Spacing:\n{data_dict['label_meta_dict']['affine']}")
image, label = data_dict['image'], data_dict['label']
plt.figure('visualise', (8, 4))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[0, :, :, 30], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[0, :, :, 30])
plt.show()
###Output
_____no_output_____
###Markdown
Random affine transformation The following affine transformation is defined to output a (300, 300, 50) image patch.The patch location is randomly chosen in a range of (-40, 40), (-40, 40), (-2, 2) in x, y, and z axes respectively.The translation is relative to the image centre.The 3D rotation angle is randomly chosen from (-45, 45) degrees around the z axis, and 5 degrees around x and y axes.The random scaling factor is randomly chosen from (1.0 - 0.15, 1.0 + 0.15) along each axis.
###Code
rand_affine = RandAffined(keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
spatial_size=(300, 300, 50),
translate_range=(40, 40, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*4),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
affined_data_dict = rand_affine(data_dict)
print(f"image shape: {affined_data_dict['image'].shape}")
image, label = affined_data_dict['image'][0], affined_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 15], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 15])
plt.show()
###Output
image shape: torch.Size([1, 300, 300, 50])
###Markdown
Random elastic deformation Similarly, the following elastic deformation is defined to output a (300, 300, 10) image patch. The image is resampled from a combination of affine transformations and elastic deformations. `sigma_range` controls the smoothness of the deformation (larger than 15 could be slow on CPU). `magnitude_range` controls the amplitude of the deformation (larger than 500 makes the image look unrealistic).
###Code
rand_elastic = Rand3DElasticd(
keys=['image', 'label'], mode=('bilinear', 'nearest'), prob=1.0,
sigma_range=(5, 8),
magnitude_range=(100, 200),
spatial_size=(300, 300, 10),
translate_range=(50, 50, 2),
rotate_range=(np.pi/36, np.pi/36, np.pi*2),
scale_range=(0.15, 0.15, 0.15),
padding_mode='border')
###Output
_____no_output_____
###Markdown
You can rerun this cell to generate a different randomised version of the original image.
###Code
deformed_data_dict = rand_elastic(data_dict)
print(f"image shape: {deformed_data_dict['image'].shape}")
image, label = deformed_data_dict['image'][0], deformed_data_dict['label'][0]
plt.figure('visualise', (12, 6))
plt.subplot(1, 2, 1)
plt.title('image')
plt.imshow(image[:, :, 5], cmap='gray')
plt.subplot(1, 2, 2)
plt.title('label')
plt.imshow(label[:, :, 5])
plt.show()
###Output
image shape: (1, 300, 300, 10)
|
courses/machine_learning/deepdive/03_tensorflow/d_traineval.ipynb | ###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will use the tf.data API to read the CSV files as part of the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
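As a preview of what `add_more_features` could grow into later, here is a hypothetical sketch (the bucket boundaries are made up and nothing below is used in this lab; the next cell keeps the pass-through version):

```python
# Hypothetical example of richer features: bucketize the latitudes so a linear
# model can learn location-dependent fares (boundaries are assumed, not tuned).
def add_more_features_example(feats):
    latbuckets = np.linspace(40.5, 41.0, 11).tolist()
    b_plat = tf.feature_column.bucketized_column(
        tf.feature_column.numeric_column('pickuplat'), boundaries = latbuckets)
    b_dlat = tf.feature_column.bucketized_column(
        tf.feature_column.numeric_column('dropofflat'), boundaries = latbuckets)
    return list(feats) + [b_plat, b_dlat]
```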
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
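Once the model is deployed, each prediction request will carry one JSON object per instance with these five fields; for reference, an instance might look like this (the values are made up):

```python
# Hypothetical request instance matching the serving signature defined below.
example_instance = {
    'pickuplon': -73.99, 'pickuplat': 40.75,
    'dropofflon': -73.98, 'dropofflat': 40.74,
    'passengers': 2.0,
}
```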
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = 'taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
# to stop TensorBoard fill the correct pid below
TensorBoard().stop(27855)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will use the tf.data API to read the CSV files as part of the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.compat.v1.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.compat.v1.placeholder(tf.float32, [None]),
'pickuplat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflon' : tf.compat.v1.placeholder(tf.float32, [None]),
'passengers' : tf.compat.v1.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
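###Markdown
To make the expected request format concrete, the sketch below builds one JSON instance that matches the five placeholders above. The numeric values are made up for illustration; the `{"instances": [...]}` envelope is the one used by Cloud AI Platform online prediction, so adjust it to whatever serving front-end you actually deploy behind.
###Code
import json
# Illustrative only: a single prediction instance with the five features the
# serving input function expects (the feature values are made-up examples).
sample_instance = {
    'pickuplon': -73.99,
    'pickuplat': 40.75,
    'dropofflon': -73.97,
    'dropofflat': 40.76,
    'passengers': 2.0,
}
print(json.dumps({'instances': [sample_instance]}, indent=2))
###Output
_____no_output_____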
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Run training
###Code
OUTDIR = './taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
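###Markdown
If training completes, the `LatestExporter` defined above writes a SavedModel under `taxi_trained/export/exporter/<timestamp>/`. A quick way to confirm that the export landed (assuming the training cell above ran to completion in this environment) is to list that directory.
###Code
# Each timestamped subdirectory should contain a saved_model.pb and a variables/ folder.
!ls -R ./taxi_trained/export/exporter
###Output
_____no_output_____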
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to call features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitor training with TensorBoard To activate TensorBoard within the JupyterLab UI, navigate to "File" - "New Launcher", then double-click the 'Tensorboard' icon on the bottom row. A tab named 'TensorBoard 1' will open. Navigate through the three tabs to see the active TensorBoard; the 'Graphs' and 'Projector' tabs offer very interesting information, including the ability to replay the tests. You may close the TensorBoard tab when you are finished exploring.
###Code
OUTDIR = './taxi_trained'
###Output
_____no_output_____
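###Markdown
As an alternative to the Launcher icon, TensorBoard can also be started inline from a notebook cell. This is a minimal sketch (not part of the original lab): it assumes the `tensorboard` Jupyter extension, which ships with recent TensorBoard releases, is installed in this environment.
###Code
# Assumes the tensorboard notebook extension is available in this kernel;
# point it at the same output directory used for training below.
%load_ext tensorboard
%tensorboard --logdir ./taxi_trained
###Output
_____no_output_____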
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.compat.v1.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to call features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.compat.v1.placeholder(tf.float32, [None]),
'pickuplat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflon' : tf.compat.v1.placeholder(tf.float32, [None]),
'passengers' : tf.compat.v1.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Run training
###Code
OUTDIR = './taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard
###Code
from google.datalab.ml import TensorBoard
TensorBoard().start('./taxi_trained')
TensorBoard().list()
# to stop TensorBoard, pass the pid reported by TensorBoard().list() above
TensorBoard().stop(9049)
print('stopped TensorBoard')
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
1.8.0
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
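###Markdown
Before handing this `_input_fn` to an Estimator, a quick way to check it is to pull one batch through a TF 1.x `Session`. This is a minimal sketch (not part of the original lab) and assumes `./taxi-train.csv` already exists in the working directory.
###Code
# Sanity check (assumes ./taxi-train.csv is present): build the input pipeline,
# then run one batch to inspect the per-feature array shapes.
features, label = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.EVAL)()
with tf.Session() as sess:
    feats_val, label_val = sess.run([features, label])
print({name: value.shape for name, value in feats_val.items()})
print(label_val.shape)
###Output
_____no_output_____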
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = 'taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
# to stop TensorBoard fill the correct pid below
TensorBoard().stop(27855)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
train_and_evaluate
###Code
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 5000)
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to call features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = './taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
pids_df = TensorBoard.list()
if not pids_df.empty:
for pid in pids_df['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
# You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = 'taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
# to stop TensorBoard fill the correct pid below
TensorBoard().stop(27855)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size=512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults=DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
dataset = tf.data.TextLineDataset(filename).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size=10*batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
train_and_evaluate
###Code
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir=output_dir,
feature_columns=feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn=read_dataset('./taxi-train.csv', mode=tf.contrib.learn.ModeKeys.TRAIN),
max_steps=num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn=read_dataset('./taxi-valid.csv', mode=tf.contrib.learn.ModeKeys.EVAL),
steps=None,
start_delay_secs=1, # start evaluating after N seconds
throttle_secs=10, # evaluate every N seconds
exporters=exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# run training
OUTDIR='taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors=True) # start fresh each time
train_and_evaluate(OUTDIR, 5000)
###Output
INFO:tensorflow:Using default config.
[2018-01-28 07:02:26,584] {tf_logging.py:82} INFO - Using default config.
INFO:tensorflow:Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f50bc2c1410>, '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tf_random_seed': None, '_master': '', '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_model_dir': 'taxi_trained', '_save_summary_steps': 100}
[2018-01-28 07:02:26,587] {tf_logging.py:82} INFO - Using config: {'_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_task_type': 'worker', '_is_chief': True, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f50bc2c1410>, '_save_checkpoints_steps': None, '_keep_checkpoint_every_n_hours': 10000, '_service': None, '_num_ps_replicas': 0, '_tf_random_seed': None, '_master': '', '_num_worker_replicas': 1, '_task_id': 0, '_log_step_count_steps': 100, '_model_dir': 'taxi_trained', '_save_summary_steps': 100}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
[2018-01-28 07:02:26,976] {tf_logging.py:82} INFO - Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 10 secs (eval_spec.throttle_secs) or training is finished.
[2018-01-28 07:02:26,978] {tf_logging.py:82} INFO - Start train and evaluate loop. The evaluate will happen after 10 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:02:27,285] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into taxi_trained/model.ckpt.
[2018-01-28 07:02:28,338] {tf_logging.py:82} INFO - Saving checkpoints for 1 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 111467.61, step = 1
[2018-01-28 07:02:28,409] {tf_logging.py:82} INFO - loss = 111467.61, step = 1
INFO:tensorflow:global_step/sec: 15.0825
[2018-01-28 07:02:35,038] {tf_logging.py:82} INFO - global_step/sec: 15.0825
INFO:tensorflow:loss = 36651.65, step = 101 (6.633 sec)
[2018-01-28 07:02:35,042] {tf_logging.py:82} INFO - loss = 36651.65, step = 101 (6.633 sec)
INFO:tensorflow:Saving checkpoints for 137 into taxi_trained/model.ckpt.
[2018-01-28 07:02:37,756] {tf_logging.py:82} INFO - Saving checkpoints for 137 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 40536.54.
[2018-01-28 07:02:37,835] {tf_logging.py:82} INFO - Loss for final step: 40536.54.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:02:37
[2018-01-28 07:02:37,964] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:02:37
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-137
[2018-01-28 07:02:38,030] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-137
INFO:tensorflow:Finished evaluation at 2018-01-28-07:02:38
[2018-01-28 07:02:38,261] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:02:38
INFO:tensorflow:Saving dict for global step 137: average_loss = 109.12946, global_step = 137, loss = 45425.14
[2018-01-28 07:02:38,263] {tf_logging.py:82} INFO - Saving dict for global step 137: average_loss = 109.12946, global_step = 137, loss = 45425.14
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-137
[2018-01-28 07:02:38,522] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-137
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:02:38,576] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:02:38,582] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517122958/saved_model.pb
[2018-01-28 07:02:38,653] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517122958/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:02:38,876] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-137
[2018-01-28 07:02:38,978] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-137
INFO:tensorflow:Saving checkpoints for 138 into taxi_trained/model.ckpt.
[2018-01-28 07:02:40,294] {tf_logging.py:82} INFO - Saving checkpoints for 138 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 52633.062, step = 138
[2018-01-28 07:02:40,356] {tf_logging.py:82} INFO - loss = 52633.062, step = 138
INFO:tensorflow:global_step/sec: 13.4044
[2018-01-28 07:02:47,816] {tf_logging.py:82} INFO - global_step/sec: 13.4044
INFO:tensorflow:loss = 38713.707, step = 238 (7.464 sec)
[2018-01-28 07:02:47,821] {tf_logging.py:82} INFO - loss = 38713.707, step = 238 (7.464 sec)
INFO:tensorflow:Saving checkpoints for 259 into taxi_trained/model.ckpt.
[2018-01-28 07:02:49,312] {tf_logging.py:82} INFO - Saving checkpoints for 259 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 37271.152.
[2018-01-28 07:02:49,380] {tf_logging.py:82} INFO - Loss for final step: 37271.152.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:02:49
[2018-01-28 07:02:49,511] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:02:49
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-259
[2018-01-28 07:02:49,579] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-259
INFO:tensorflow:Finished evaluation at 2018-01-28-07:02:49
[2018-01-28 07:02:49,828] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:02:49
INFO:tensorflow:Saving dict for global step 259: average_loss = 109.37532, global_step = 259, loss = 45527.477
[2018-01-28 07:02:49,830] {tf_logging.py:82} INFO - Saving dict for global step 259: average_loss = 109.37532, global_step = 259, loss = 45527.477
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-259
[2018-01-28 07:02:50,038] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-259
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:02:50,086] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:02:50,089] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517122970/saved_model.pb
[2018-01-28 07:02:50,158] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517122970/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:02:50,379] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-259
[2018-01-28 07:02:50,482] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-259
INFO:tensorflow:Saving checkpoints for 260 into taxi_trained/model.ckpt.
[2018-01-28 07:02:51,561] {tf_logging.py:82} INFO - Saving checkpoints for 260 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42189.76, step = 260
[2018-01-28 07:02:51,628] {tf_logging.py:82} INFO - loss = 42189.76, step = 260
INFO:tensorflow:global_step/sec: 15.4346
[2018-01-28 07:02:58,106] {tf_logging.py:82} INFO - global_step/sec: 15.4346
INFO:tensorflow:loss = 38248.645, step = 360 (6.481 sec)
[2018-01-28 07:02:58,109] {tf_logging.py:82} INFO - loss = 38248.645, step = 360 (6.481 sec)
INFO:tensorflow:Saving checkpoints for 398 into taxi_trained/model.ckpt.
[2018-01-28 07:03:00,403] {tf_logging.py:82} INFO - Saving checkpoints for 398 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 53145.715.
[2018-01-28 07:03:00,489] {tf_logging.py:82} INFO - Loss for final step: 53145.715.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:00
[2018-01-28 07:03:00,620] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:03:00
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-398
[2018-01-28 07:03:00,818] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-398
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:01
[2018-01-28 07:03:01,048] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:03:01
INFO:tensorflow:Saving dict for global step 398: average_loss = 108.82624, global_step = 398, loss = 45298.92
[2018-01-28 07:03:01,050] {tf_logging.py:82} INFO - Saving dict for global step 398: average_loss = 108.82624, global_step = 398, loss = 45298.92
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-398
[2018-01-28 07:03:01,146] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-398
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:03:01,187] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:03:01,189] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517122981/saved_model.pb
[2018-01-28 07:03:01,256] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517122981/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:03:01,471] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-398
[2018-01-28 07:03:01,576] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-398
INFO:tensorflow:Saving checkpoints for 399 into taxi_trained/model.ckpt.
[2018-01-28 07:03:02,779] {tf_logging.py:82} INFO - Saving checkpoints for 399 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43021.938, step = 399
[2018-01-28 07:03:02,864] {tf_logging.py:82} INFO - loss = 43021.938, step = 399
INFO:tensorflow:global_step/sec: 12.9195
[2018-01-28 07:03:10,604] {tf_logging.py:82} INFO - global_step/sec: 12.9195
INFO:tensorflow:loss = 39898.117, step = 499 (7.744 sec)
[2018-01-28 07:03:10,608] {tf_logging.py:82} INFO - loss = 39898.117, step = 499 (7.744 sec)
INFO:tensorflow:Saving checkpoints for 509 into taxi_trained/model.ckpt.
[2018-01-28 07:03:11,485] {tf_logging.py:82} INFO - Saving checkpoints for 509 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 44153.902.
[2018-01-28 07:03:11,575] {tf_logging.py:82} INFO - Loss for final step: 44153.902.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:11
[2018-01-28 07:03:11,848] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:03:11
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-509
[2018-01-28 07:03:11,913] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-509
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:12
[2018-01-28 07:03:12,167] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:03:12
INFO:tensorflow:Saving dict for global step 509: average_loss = 109.23884, global_step = 509, loss = 45470.668
[2018-01-28 07:03:12,169] {tf_logging.py:82} INFO - Saving dict for global step 509: average_loss = 109.23884, global_step = 509, loss = 45470.668
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-509
[2018-01-28 07:03:12,256] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-509
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:03:12,294] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:03:12,296] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517122992/saved_model.pb
[2018-01-28 07:03:12,356] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517122992/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:03:12,581] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-509
[2018-01-28 07:03:12,683] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-509
INFO:tensorflow:Saving checkpoints for 510 into taxi_trained/model.ckpt.
[2018-01-28 07:03:13,729] {tf_logging.py:82} INFO - Saving checkpoints for 510 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 49592.707, step = 510
[2018-01-28 07:03:13,808] {tf_logging.py:82} INFO - loss = 49592.707, step = 510
INFO:tensorflow:global_step/sec: 12.8801
[2018-01-28 07:03:21,571] {tf_logging.py:82} INFO - global_step/sec: 12.8801
INFO:tensorflow:loss = 37150.42, step = 610 (7.771 sec)
[2018-01-28 07:03:21,578] {tf_logging.py:82} INFO - loss = 37150.42, step = 610 (7.771 sec)
INFO:tensorflow:Saving checkpoints for 619 into taxi_trained/model.ckpt.
[2018-01-28 07:03:22,669] {tf_logging.py:82} INFO - Saving checkpoints for 619 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 34902.18.
[2018-01-28 07:03:22,750] {tf_logging.py:82} INFO - Loss for final step: 34902.18.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:23
[2018-01-28 07:03:23,000] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:03:23
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-619
[2018-01-28 07:03:23,059] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-619
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:23
[2018-01-28 07:03:23,307] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:03:23
INFO:tensorflow:Saving dict for global step 619: average_loss = 109.833046, global_step = 619, loss = 45718.004
[2018-01-28 07:03:23,309] {tf_logging.py:82} INFO - Saving dict for global step 619: average_loss = 109.833046, global_step = 619, loss = 45718.004
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-619
[2018-01-28 07:03:23,399] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-619
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:03:23,439] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:03:23,441] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123003/saved_model.pb
[2018-01-28 07:03:23,515] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123003/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:03:23,727] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-619
[2018-01-28 07:03:23,821] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-619
INFO:tensorflow:Saving checkpoints for 620 into taxi_trained/model.ckpt.
[2018-01-28 07:03:24,810] {tf_logging.py:82} INFO - Saving checkpoints for 620 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 49387.66, step = 620
[2018-01-28 07:03:24,882] {tf_logging.py:82} INFO - loss = 49387.66, step = 620
INFO:tensorflow:global_step/sec: 15.2261
[2018-01-28 07:03:31,449] {tf_logging.py:82} INFO - global_step/sec: 15.2261
INFO:tensorflow:loss = 36194.117, step = 720 (6.570 sec)
[2018-01-28 07:03:31,452] {tf_logging.py:82} INFO - loss = 36194.117, step = 720 (6.570 sec)
INFO:tensorflow:Saving checkpoints for 746 into taxi_trained/model.ckpt.
[2018-01-28 07:03:33,738] {tf_logging.py:82} INFO - Saving checkpoints for 746 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41101.668.
[2018-01-28 07:03:33,939] {tf_logging.py:82} INFO - Loss for final step: 41101.668.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:34
[2018-01-28 07:03:34,067] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:03:34
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-746
[2018-01-28 07:03:34,126] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-746
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:34
[2018-01-28 07:03:34,342] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:03:34
INFO:tensorflow:Saving dict for global step 746: average_loss = 109.143974, global_step = 746, loss = 45431.18
[2018-01-28 07:03:34,345] {tf_logging.py:82} INFO - Saving dict for global step 746: average_loss = 109.143974, global_step = 746, loss = 45431.18
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-746
[2018-01-28 07:03:34,430] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-746
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:03:34,469] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:03:34,470] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123014/saved_model.pb
[2018-01-28 07:03:34,529] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123014/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:03:34,745] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-746
[2018-01-28 07:03:34,844] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-746
INFO:tensorflow:Saving checkpoints for 747 into taxi_trained/model.ckpt.
[2018-01-28 07:03:36,202] {tf_logging.py:82} INFO - Saving checkpoints for 747 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 48942.93, step = 747
[2018-01-28 07:03:36,274] {tf_logging.py:82} INFO - loss = 48942.93, step = 747
INFO:tensorflow:global_step/sec: 14.8499
[2018-01-28 07:03:43,008] {tf_logging.py:82} INFO - global_step/sec: 14.8499
INFO:tensorflow:loss = 49690.97, step = 847 (6.739 sec)
[2018-01-28 07:03:43,013] {tf_logging.py:82} INFO - loss = 49690.97, step = 847 (6.739 sec)
INFO:tensorflow:Saving checkpoints for 868 into taxi_trained/model.ckpt.
[2018-01-28 07:03:44,812] {tf_logging.py:82} INFO - Saving checkpoints for 868 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41284.188.
[2018-01-28 07:03:44,894] {tf_logging.py:82} INFO - Loss for final step: 41284.188.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:45
[2018-01-28 07:03:45,015] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:03:45
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-868
[2018-01-28 07:03:45,074] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-868
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:45
[2018-01-28 07:03:45,300] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:03:45
INFO:tensorflow:Saving dict for global step 868: average_loss = 109.14027, global_step = 868, loss = 45429.637
[2018-01-28 07:03:45,302] {tf_logging.py:82} INFO - Saving dict for global step 868: average_loss = 109.14027, global_step = 868, loss = 45429.637
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-868
[2018-01-28 07:03:45,387] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-868
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:03:45,425] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:03:45,426] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123025/saved_model.pb
[2018-01-28 07:03:45,486] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123025/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:03:45,701] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-868
[2018-01-28 07:03:45,799] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-868
INFO:tensorflow:Saving checkpoints for 869 into taxi_trained/model.ckpt.
[2018-01-28 07:03:46,796] {tf_logging.py:82} INFO - Saving checkpoints for 869 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 41973.477, step = 869
[2018-01-28 07:03:46,871] {tf_logging.py:82} INFO - loss = 41973.477, step = 869
INFO:tensorflow:global_step/sec: 14.5823
[2018-01-28 07:03:53,728] {tf_logging.py:82} INFO - global_step/sec: 14.5823
INFO:tensorflow:loss = 52860.414, step = 969 (6.860 sec)
[2018-01-28 07:03:53,731] {tf_logging.py:82} INFO - loss = 52860.414, step = 969 (6.860 sec)
INFO:tensorflow:Saving checkpoints for 1005 into taxi_trained/model.ckpt.
[2018-01-28 07:03:56,090] {tf_logging.py:82} INFO - Saving checkpoints for 1005 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 44731.914.
[2018-01-28 07:03:56,182] {tf_logging.py:82} INFO - Loss for final step: 44731.914.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:03:56
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1005
INFO:tensorflow:Finished evaluation at 2018-01-28-07:03:56
INFO:tensorflow:Saving dict for global step 1005: average_loss = 109.16493, global_step = 1005, loss = 45439.902
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1005
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123036/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1005
INFO:tensorflow:Saving checkpoints for 1006 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 44617.688, step = 1006
INFO:tensorflow:global_step/sec: 15.4452
INFO:tensorflow:loss = 44143.766, step = 1106 (6.478 sec)
INFO:tensorflow:Saving checkpoints for 1129 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 48292.125.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:04:07
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1129
INFO:tensorflow:Finished evaluation at 2018-01-28-07:04:07
INFO:tensorflow:Saving dict for global step 1129: average_loss = 108.863625, global_step = 1129, loss = 45314.484
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1129
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123047/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1129
INFO:tensorflow:Saving checkpoints for 1130 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 40067.656, step = 1130
INFO:tensorflow:global_step/sec: 13.6598
INFO:tensorflow:loss = 44355.266, step = 1230 (7.324 sec)
INFO:tensorflow:Saving checkpoints for 1251 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38190.83.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:04:18
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1251
INFO:tensorflow:Finished evaluation at 2018-01-28-07:04:18
INFO:tensorflow:Saving dict for global step 1251: average_loss = 109.210495, global_step = 1251, loss = 45458.867
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1251
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123058/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1251
INFO:tensorflow:Saving checkpoints for 1252 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 46948.5, step = 1252
INFO:tensorflow:global_step/sec: 14.7947
INFO:tensorflow:loss = 46416.434, step = 1352 (6.763 sec)
INFO:tensorflow:Saving checkpoints for 1373 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 36476.85.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:04:29
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1373
INFO:tensorflow:Finished evaluation at 2018-01-28-07:04:29
INFO:tensorflow:Saving dict for global step 1373: average_loss = 109.693695, global_step = 1373, loss = 45660.0
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1373
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123069/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1373
INFO:tensorflow:Saving checkpoints for 1374 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 41820.227, step = 1374
INFO:tensorflow:global_step/sec: 14.9741
INFO:tensorflow:loss = 36014.164, step = 1474 (6.683 sec)
INFO:tensorflow:Saving checkpoints for 1510 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 56772.137.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:04:40
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1510
INFO:tensorflow:Finished evaluation at 2018-01-28-07:04:41
INFO:tensorflow:Saving dict for global step 1510: average_loss = 108.83607, global_step = 1510, loss = 45303.01
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1510
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123081/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1510
INFO:tensorflow:Saving checkpoints for 1511 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 39860.45, step = 1511
INFO:tensorflow:global_step/sec: 15.037
INFO:tensorflow:loss = 42581.43, step = 1611 (6.653 sec)
INFO:tensorflow:Saving checkpoints for 1647 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46212.32.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:04:52
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1647
INFO:tensorflow:Finished evaluation at 2018-01-28-07:04:52
INFO:tensorflow:Saving dict for global step 1647: average_loss = 108.93688, global_step = 1647, loss = 45344.977
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1647
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123092/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1647
INFO:tensorflow:Saving checkpoints for 1648 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 50479.105, step = 1648
INFO:tensorflow:global_step/sec: 14.3097
INFO:tensorflow:loss = 38929.4, step = 1748 (6.991 sec)
INFO:tensorflow:Saving checkpoints for 1769 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38832.35.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:05:03
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1769
INFO:tensorflow:Finished evaluation at 2018-01-28-07:05:03
INFO:tensorflow:Saving dict for global step 1769: average_loss = 110.00537, global_step = 1769, loss = 45789.734
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1769
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123103/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1769
INFO:tensorflow:Saving checkpoints for 1770 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 34949.55, step = 1770
INFO:tensorflow:global_step/sec: 14.4275
INFO:tensorflow:loss = 40012.61, step = 1870 (6.934 sec)
INFO:tensorflow:Saving checkpoints for 1906 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 40821.0.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:05:14
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1906
INFO:tensorflow:Finished evaluation at 2018-01-28-07:05:15
INFO:tensorflow:Saving dict for global step 1906: average_loss = 109.39613, global_step = 1906, loss = 45536.14
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1906
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123115/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1906
INFO:tensorflow:Saving checkpoints for 1907 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 47866.97, step = 1907
INFO:tensorflow:global_step/sec: 15.5776
INFO:tensorflow:loss = 40658.758, step = 2007 (6.423 sec)
INFO:tensorflow:Saving checkpoints for 2043 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41978.832.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:05:26
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2043
INFO:tensorflow:Finished evaluation at 2018-01-28-07:05:26
INFO:tensorflow:Saving dict for global step 2043: average_loss = 109.68041, global_step = 2043, loss = 45654.473
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2043
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123126/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2043
INFO:tensorflow:Saving checkpoints for 2044 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 37793.312, step = 2044
INFO:tensorflow:global_step/sec: 14.15
INFO:tensorflow:loss = 30396.006, step = 2144 (7.070 sec)
INFO:tensorflow:Saving checkpoints for 2180 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 44665.133.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:05:37
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2180
INFO:tensorflow:Finished evaluation at 2018-01-28-07:05:38
INFO:tensorflow:Saving dict for global step 2180: average_loss = 108.89056, global_step = 2180, loss = 45325.695
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2180
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123138/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2180
INFO:tensorflow:Saving checkpoints for 2181 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42932.83, step = 2181
INFO:tensorflow:global_step/sec: 15.301
INFO:tensorflow:loss = 38634.67, step = 2281 (6.538 sec)
INFO:tensorflow:Saving checkpoints for 2310 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41380.53.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:05:48
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2310
INFO:tensorflow:Finished evaluation at 2018-01-28-07:05:49
INFO:tensorflow:Saving dict for global step 2310: average_loss = 109.030876, global_step = 2310, loss = 45384.1
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2310
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123149/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2310
INFO:tensorflow:Saving checkpoints for 2311 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 46196.9, step = 2311
INFO:tensorflow:global_step/sec: 15.1778
INFO:tensorflow:loss = 45091.477, step = 2411 (6.592 sec)
INFO:tensorflow:Saving checkpoints for 2447 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 31176.158.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:00
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2447
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:00
INFO:tensorflow:Saving dict for global step 2447: average_loss = 109.965675, global_step = 2447, loss = 45773.21
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2447
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123160/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2447
INFO:tensorflow:Saving checkpoints for 2448 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 33655.266, step = 2448
INFO:tensorflow:global_step/sec: 15.6131
INFO:tensorflow:loss = 33563.55, step = 2548 (6.409 sec)
INFO:tensorflow:Saving checkpoints for 2572 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41564.5.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:11
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2572
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:11
INFO:tensorflow:Saving dict for global step 2572: average_loss = 109.28161, global_step = 2572, loss = 45488.47
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2572
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123171/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2572
INFO:tensorflow:Saving checkpoints for 2573 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 47952.8, step = 2573
INFO:tensorflow:global_step/sec: 13.3849
INFO:tensorflow:loss = 47812.152, step = 2673 (7.476 sec)
INFO:tensorflow:Saving checkpoints for 2694 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 39605.594.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:22
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2694
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:22
INFO:tensorflow:Saving dict for global step 2694: average_loss = 109.46598, global_step = 2694, loss = 45565.215
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2694
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123182/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2694
INFO:tensorflow:Saving checkpoints for 2695 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 69116.305, step = 2695
INFO:tensorflow:global_step/sec: 14.8409
INFO:tensorflow:loss = 42985.844, step = 2795 (6.746 sec)
INFO:tensorflow:Saving checkpoints for 2816 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38251.92.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:34
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2816
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:34
INFO:tensorflow:Saving dict for global step 2816: average_loss = 109.50468, global_step = 2816, loss = 45581.32
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2816
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123194/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2816
INFO:tensorflow:Saving checkpoints for 2817 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 27480.33, step = 2817
INFO:tensorflow:Saving checkpoints for 2909 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46606.02.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:45
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2909
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:45
INFO:tensorflow:Saving dict for global step 2909: average_loss = 108.83984, global_step = 2909, loss = 45304.586
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2909
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123205/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2909
INFO:tensorflow:Saving checkpoints for 2910 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 40214.79, step = 2910
INFO:tensorflow:global_step/sec: 13.4656
INFO:tensorflow:loss = 34108.96, step = 3010 (7.428 sec)
INFO:tensorflow:Saving checkpoints for 3031 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 29683.191.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:06:57
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3031
INFO:tensorflow:Finished evaluation at 2018-01-28-07:06:57
INFO:tensorflow:Saving dict for global step 3031: average_loss = 110.0938, global_step = 3031, loss = 45826.547
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3031
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123217/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3031
INFO:tensorflow:Saving checkpoints for 3032 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 44990.918, step = 3032
INFO:tensorflow:Saving checkpoints for 3125 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46754.65.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:07:08
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3125
INFO:tensorflow:Finished evaluation at 2018-01-28-07:07:08
INFO:tensorflow:Saving dict for global step 3125: average_loss = 109.13078, global_step = 3125, loss = 45425.688
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3125
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123228/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3125
INFO:tensorflow:Saving checkpoints for 3126 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42909.895, step = 3126
INFO:tensorflow:global_step/sec: 11.7993
INFO:tensorflow:loss = 45996.61, step = 3226 (8.482 sec)
INFO:tensorflow:Saving checkpoints for 3232 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 45918.4.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:07:20
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3232
INFO:tensorflow:Finished evaluation at 2018-01-28-07:07:20
INFO:tensorflow:Saving dict for global step 3232: average_loss = 108.92035, global_step = 3232, loss = 45338.094
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3232
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123240/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3232
INFO:tensorflow:Saving checkpoints for 3233 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 39733.0, step = 3233
INFO:tensorflow:Saving checkpoints for 3331 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 43197.297.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:07:31
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3331
INFO:tensorflow:Finished evaluation at 2018-01-28-07:07:31
INFO:tensorflow:Saving dict for global step 3331: average_loss = 109.09693, global_step = 3331, loss = 45411.598
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3331
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123251/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3331
INFO:tensorflow:Saving checkpoints for 3332 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 48535.414, step = 3332
INFO:tensorflow:global_step/sec: 19.2982
INFO:tensorflow:loss = 46014.07, step = 3432 (5.186 sec)
INFO:tensorflow:Saving checkpoints for 3483 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 50322.938.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:07:42
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3483
INFO:tensorflow:Finished evaluation at 2018-01-28-07:07:42
INFO:tensorflow:Saving dict for global step 3483: average_loss = 109.12564, global_step = 3483, loss = 45423.547
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3483
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123262/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3483
INFO:tensorflow:Saving checkpoints for 3484 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42064.625, step = 3484
INFO:tensorflow:global_step/sec: 12.4506
INFO:tensorflow:loss = 39889.617, step = 3584 (8.034 sec)
INFO:tensorflow:Saving checkpoints for 3590 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 42522.55.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:07:53
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3590
INFO:tensorflow:Finished evaluation at 2018-01-28-07:07:54
INFO:tensorflow:Saving dict for global step 3590: average_loss = 108.99241, global_step = 3590, loss = 45368.09
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3590
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123274/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3590
INFO:tensorflow:Saving checkpoints for 3591 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 40034.336, step = 3591
INFO:tensorflow:global_step/sec: 14.8045
INFO:tensorflow:loss = 44571.625, step = 3691 (6.759 sec)
INFO:tensorflow:Saving checkpoints for 3720 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38875.75.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:08:05
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3720
INFO:tensorflow:Finished evaluation at 2018-01-28-07:08:05
INFO:tensorflow:Saving dict for global step 3720: average_loss = 109.228294, global_step = 3720, loss = 45466.277
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3720
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123285/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3720
INFO:tensorflow:Saving checkpoints for 3721 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 39552.543, step = 3721
INFO:tensorflow:global_step/sec: 12.839
INFO:tensorflow:loss = 38923.28, step = 3821 (7.793 sec)
INFO:tensorflow:Saving checkpoints for 3828 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 50896.11.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:08:16
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3828
INFO:tensorflow:Finished evaluation at 2018-01-28-07:08:16
INFO:tensorflow:Saving dict for global step 3828: average_loss = 108.83108, global_step = 3828, loss = 45300.938
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3828
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123296/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3828
INFO:tensorflow:Saving checkpoints for 3829 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 36434.168, step = 3829
INFO:tensorflow:global_step/sec: 12.8402
INFO:tensorflow:loss = 39332.9, step = 3929 (7.792 sec)
INFO:tensorflow:Saving checkpoints for 3935 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38545.84.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:08:27
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3935
INFO:tensorflow:Finished evaluation at 2018-01-28-07:08:27
INFO:tensorflow:Saving dict for global step 3935: average_loss = 109.207985, global_step = 3935, loss = 45457.824
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3935
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123307/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3935
INFO:tensorflow:Saving checkpoints for 3936 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42007.695, step = 3936
INFO:tensorflow:global_step/sec: 12.4146
INFO:tensorflow:loss = 40296.688, step = 4036 (8.060 sec)
INFO:tensorflow:Saving checkpoints for 4051 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 43951.535.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:08:38
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4051
INFO:tensorflow:Finished evaluation at 2018-01-28-07:08:38
[2018-01-28 07:08:38,901] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:08:38
INFO:tensorflow:Saving dict for global step 4051: average_loss = 109.66371, global_step = 4051, loss = 45647.52
[2018-01-28 07:08:38,904] {tf_logging.py:82} INFO - Saving dict for global step 4051: average_loss = 109.66371, global_step = 4051, loss = 45647.52
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4051
[2018-01-28 07:08:38,996] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4051
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:08:39,035] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:08:39,038] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123318/saved_model.pb
[2018-01-28 07:08:39,105] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123318/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:08:39,330] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4051
[2018-01-28 07:08:39,444] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4051
INFO:tensorflow:Saving checkpoints for 4052 into taxi_trained/model.ckpt.
[2018-01-28 07:08:40,319] {tf_logging.py:82} INFO - Saving checkpoints for 4052 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 35488.383, step = 4052
[2018-01-28 07:08:40,398] {tf_logging.py:82} INFO - loss = 35488.383, step = 4052
INFO:tensorflow:global_step/sec: 12.7194
[2018-01-28 07:08:48,259] {tf_logging.py:82} INFO - global_step/sec: 12.7194
INFO:tensorflow:loss = 33736.64, step = 4152 (7.864 sec)
[2018-01-28 07:08:48,262] {tf_logging.py:82} INFO - loss = 33736.64, step = 4152 (7.864 sec)
INFO:tensorflow:Saving checkpoints for 4162 into taxi_trained/model.ckpt.
[2018-01-28 07:08:49,386] {tf_logging.py:82} INFO - Saving checkpoints for 4162 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 58067.945.
[2018-01-28 07:08:49,597] {tf_logging.py:82} INFO - Loss for final step: 58067.945.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:08:49
[2018-01-28 07:08:49,722] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:08:49
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4162
[2018-01-28 07:08:49,780] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4162
INFO:tensorflow:Finished evaluation at 2018-01-28-07:08:50
[2018-01-28 07:08:50,019] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:08:50
INFO:tensorflow:Saving dict for global step 4162: average_loss = 108.97929, global_step = 4162, loss = 45362.63
[2018-01-28 07:08:50,022] {tf_logging.py:82} INFO - Saving dict for global step 4162: average_loss = 108.97929, global_step = 4162, loss = 45362.63
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4162
[2018-01-28 07:08:50,108] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4162
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:08:50,145] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:08:50,147] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123330/saved_model.pb
[2018-01-28 07:08:50,207] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123330/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:08:50,433] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4162
[2018-01-28 07:08:50,528] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4162
INFO:tensorflow:Saving checkpoints for 4163 into taxi_trained/model.ckpt.
[2018-01-28 07:08:51,657] {tf_logging.py:82} INFO - Saving checkpoints for 4163 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 37032.996, step = 4163
[2018-01-28 07:08:51,742] {tf_logging.py:82} INFO - loss = 37032.996, step = 4163
INFO:tensorflow:global_step/sec: 12.7084
[2018-01-28 07:08:59,610] {tf_logging.py:82} INFO - global_step/sec: 12.7084
INFO:tensorflow:loss = 33938.29, step = 4263 (7.872 sec)
[2018-01-28 07:08:59,614] {tf_logging.py:82} INFO - loss = 33938.29, step = 4263 (7.872 sec)
INFO:tensorflow:Saving checkpoints for 4271 into taxi_trained/model.ckpt.
[2018-01-28 07:09:00,490] {tf_logging.py:82} INFO - Saving checkpoints for 4271 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 39866.11.
[2018-01-28 07:09:00,576] {tf_logging.py:82} INFO - Loss for final step: 39866.11.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:00
[2018-01-28 07:09:00,701] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:00
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4271
[2018-01-28 07:09:00,767] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4271
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:01
[2018-01-28 07:09:01,103] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:01
INFO:tensorflow:Saving dict for global step 4271: average_loss = 109.08688, global_step = 4271, loss = 45407.414
[2018-01-28 07:09:01,107] {tf_logging.py:82} INFO - Saving dict for global step 4271: average_loss = 109.08688, global_step = 4271, loss = 45407.414
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4271
[2018-01-28 07:09:01,209] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4271
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:01,247] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:01,250] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123341/saved_model.pb
[2018-01-28 07:09:01,310] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123341/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:01,543] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4271
[2018-01-28 07:09:01,643] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4271
INFO:tensorflow:Saving checkpoints for 4272 into taxi_trained/model.ckpt.
[2018-01-28 07:09:02,955] {tf_logging.py:82} INFO - Saving checkpoints for 4272 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43725.79, step = 4272
[2018-01-28 07:09:03,038] {tf_logging.py:82} INFO - loss = 43725.79, step = 4272
INFO:tensorflow:global_step/sec: 13.5287
[2018-01-28 07:09:10,429] {tf_logging.py:82} INFO - global_step/sec: 13.5287
INFO:tensorflow:loss = 52431.184, step = 4372 (7.396 sec)
[2018-01-28 07:09:10,433] {tf_logging.py:82} INFO - loss = 52431.184, step = 4372 (7.396 sec)
INFO:tensorflow:Saving checkpoints for 4393 into taxi_trained/model.ckpt.
[2018-01-28 07:09:12,148] {tf_logging.py:82} INFO - Saving checkpoints for 4393 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 37563.44.
[2018-01-28 07:09:12,238] {tf_logging.py:82} INFO - Loss for final step: 37563.44.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:12
[2018-01-28 07:09:12,369] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:12
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4393
[2018-01-28 07:09:12,430] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4393
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:12
[2018-01-28 07:09:12,624] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:12
INFO:tensorflow:Saving dict for global step 4393: average_loss = 109.788445, global_step = 4393, loss = 45699.44
[2018-01-28 07:09:12,626] {tf_logging.py:82} INFO - Saving dict for global step 4393: average_loss = 109.788445, global_step = 4393, loss = 45699.44
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4393
[2018-01-28 07:09:12,731] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4393
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:12,771] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:12,773] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123352/saved_model.pb
[2018-01-28 07:09:12,845] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123352/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:13,067] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4393
[2018-01-28 07:09:13,163] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4393
INFO:tensorflow:Saving checkpoints for 4394 into taxi_trained/model.ckpt.
[2018-01-28 07:09:14,499] {tf_logging.py:82} INFO - Saving checkpoints for 4394 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 51770.7, step = 4394
[2018-01-28 07:09:14,584] {tf_logging.py:82} INFO - loss = 51770.7, step = 4394
INFO:tensorflow:global_step/sec: 12.4504
[2018-01-28 07:09:22,615] {tf_logging.py:82} INFO - global_step/sec: 12.4504
INFO:tensorflow:loss = 40583.094, step = 4494 (8.034 sec)
[2018-01-28 07:09:22,618] {tf_logging.py:82} INFO - loss = 40583.094, step = 4494 (8.034 sec)
INFO:tensorflow:Saving checkpoints for 4500 into taxi_trained/model.ckpt.
[2018-01-28 07:09:24,365] {tf_logging.py:82} INFO - Saving checkpoints for 4500 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 31866.129.
[2018-01-28 07:09:24,455] {tf_logging.py:82} INFO - Loss for final step: 31866.129.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:24
[2018-01-28 07:09:24,600] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:24
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4500
[2018-01-28 07:09:24,665] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4500
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:24
[2018-01-28 07:09:24,892] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:24
INFO:tensorflow:Saving dict for global step 4500: average_loss = 110.08146, global_step = 4500, loss = 45821.406
[2018-01-28 07:09:24,894] {tf_logging.py:82} INFO - Saving dict for global step 4500: average_loss = 110.08146, global_step = 4500, loss = 45821.406
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4500
[2018-01-28 07:09:24,988] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4500
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:25,027] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:25,029] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123364/saved_model.pb
[2018-01-28 07:09:25,094] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123364/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:25,338] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4500
[2018-01-28 07:09:25,571] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4500
INFO:tensorflow:Saving checkpoints for 4501 into taxi_trained/model.ckpt.
[2018-01-28 07:09:26,919] {tf_logging.py:82} INFO - Saving checkpoints for 4501 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 30036.82, step = 4501
[2018-01-28 07:09:27,003] {tf_logging.py:82} INFO - loss = 30036.82, step = 4501
INFO:tensorflow:Saving checkpoints for 4594 into taxi_trained/model.ckpt.
[2018-01-28 07:09:35,366] {tf_logging.py:82} INFO - Saving checkpoints for 4594 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46673.26.
[2018-01-28 07:09:35,454] {tf_logging.py:82} INFO - Loss for final step: 46673.26.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:35
[2018-01-28 07:09:35,595] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:35
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4594
[2018-01-28 07:09:35,661] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4594
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:35
[2018-01-28 07:09:35,945] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:35
INFO:tensorflow:Saving dict for global step 4594: average_loss = 108.92774, global_step = 4594, loss = 45341.17
[2018-01-28 07:09:35,947] {tf_logging.py:82} INFO - Saving dict for global step 4594: average_loss = 108.92774, global_step = 4594, loss = 45341.17
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4594
[2018-01-28 07:09:36,045] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4594
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:36,085] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:36,087] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123376/saved_model.pb
[2018-01-28 07:09:36,153] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123376/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:36,399] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4594
[2018-01-28 07:09:36,657] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4594
INFO:tensorflow:Saving checkpoints for 4595 into taxi_trained/model.ckpt.
[2018-01-28 07:09:37,811] {tf_logging.py:82} INFO - Saving checkpoints for 4595 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42165.203, step = 4595
[2018-01-28 07:09:37,897] {tf_logging.py:82} INFO - loss = 42165.203, step = 4595
INFO:tensorflow:global_step/sec: 13.0202
[2018-01-28 07:09:45,576] {tf_logging.py:82} INFO - global_step/sec: 13.0202
INFO:tensorflow:loss = 53079.824, step = 4695 (7.683 sec)
[2018-01-28 07:09:45,579] {tf_logging.py:82} INFO - loss = 53079.824, step = 4695 (7.683 sec)
INFO:tensorflow:Saving checkpoints for 4701 into taxi_trained/model.ckpt.
[2018-01-28 07:09:46,500] {tf_logging.py:82} INFO - Saving checkpoints for 4701 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46547.195.
[2018-01-28 07:09:46,579] {tf_logging.py:82} INFO - Loss for final step: 46547.195.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:46
[2018-01-28 07:09:46,706] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:46
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4701
[2018-01-28 07:09:46,766] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4701
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:47
[2018-01-28 07:09:47,072] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:47
INFO:tensorflow:Saving dict for global step 4701: average_loss = 109.27469, global_step = 4701, loss = 45485.59
[2018-01-28 07:09:47,075] {tf_logging.py:82} INFO - Saving dict for global step 4701: average_loss = 109.27469, global_step = 4701, loss = 45485.59
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4701
[2018-01-28 07:09:47,164] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4701
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:47,200] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:47,202] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123387/saved_model.pb
[2018-01-28 07:09:47,262] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123387/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:47,624] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4701
[2018-01-28 07:09:47,728] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4701
INFO:tensorflow:Saving checkpoints for 4702 into taxi_trained/model.ckpt.
[2018-01-28 07:09:49,102] {tf_logging.py:82} INFO - Saving checkpoints for 4702 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 28152.488, step = 4702
[2018-01-28 07:09:49,180] {tf_logging.py:82} INFO - loss = 28152.488, step = 4702
INFO:tensorflow:global_step/sec: 13.5758
[2018-01-28 07:09:56,545] {tf_logging.py:82} INFO - global_step/sec: 13.5758
INFO:tensorflow:loss = 40240.426, step = 4802 (7.371 sec)
[2018-01-28 07:09:56,551] {tf_logging.py:82} INFO - loss = 40240.426, step = 4802 (7.371 sec)
INFO:tensorflow:Saving checkpoints for 4823 into taxi_trained/model.ckpt.
[2018-01-28 07:09:58,206] {tf_logging.py:82} INFO - Saving checkpoints for 4823 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46819.387.
[2018-01-28 07:09:58,282] {tf_logging.py:82} INFO - Loss for final step: 46819.387.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:09:58
[2018-01-28 07:09:58,408] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:09:58
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4823
[2018-01-28 07:09:58,467] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4823
INFO:tensorflow:Finished evaluation at 2018-01-28-07:09:58
[2018-01-28 07:09:58,820] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:09:58
INFO:tensorflow:Saving dict for global step 4823: average_loss = 109.01662, global_step = 4823, loss = 45378.168
[2018-01-28 07:09:58,823] {tf_logging.py:82} INFO - Saving dict for global step 4823: average_loss = 109.01662, global_step = 4823, loss = 45378.168
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4823
[2018-01-28 07:09:58,912] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4823
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:09:58,947] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:09:58,949] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123398/saved_model.pb
[2018-01-28 07:09:59,010] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123398/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:09:59,344] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4823
[2018-01-28 07:09:59,438] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4823
INFO:tensorflow:Saving checkpoints for 4824 into taxi_trained/model.ckpt.
[2018-01-28 07:10:00,533] {tf_logging.py:82} INFO - Saving checkpoints for 4824 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 48342.703, step = 4824
[2018-01-28 07:10:00,611] {tf_logging.py:82} INFO - loss = 48342.703, step = 4824
INFO:tensorflow:global_step/sec: 12.6495
[2018-01-28 07:10:08,516] {tf_logging.py:82} INFO - global_step/sec: 12.6495
INFO:tensorflow:loss = 46588.07, step = 4924 (7.910 sec)
[2018-01-28 07:10:08,521] {tf_logging.py:82} INFO - loss = 46588.07, step = 4924 (7.910 sec)
INFO:tensorflow:Saving checkpoints for 4933 into taxi_trained/model.ckpt.
[2018-01-28 07:10:09,379] {tf_logging.py:82} INFO - Saving checkpoints for 4933 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 41518.848.
[2018-01-28 07:10:09,461] {tf_logging.py:82} INFO - Loss for final step: 41518.848.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:10:09
[2018-01-28 07:10:09,585] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:10:09
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4933
[2018-01-28 07:10:09,645] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4933
INFO:tensorflow:Finished evaluation at 2018-01-28-07:10:09
[2018-01-28 07:10:09,908] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:10:09
INFO:tensorflow:Saving dict for global step 4933: average_loss = 109.20073, global_step = 4933, loss = 45454.805
[2018-01-28 07:10:09,910] {tf_logging.py:82} INFO - Saving dict for global step 4933: average_loss = 109.20073, global_step = 4933, loss = 45454.805
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4933
[2018-01-28 07:10:09,997] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4933
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:10:10,034] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:10:10,037] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123409/saved_model.pb
[2018-01-28 07:10:10,100] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123409/saved_model.pb
INFO:tensorflow:Create CheckpointSaverHook.
[2018-01-28 07:10:10,441] {tf_logging.py:82} INFO - Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4933
[2018-01-28 07:10:10,533] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-4933
INFO:tensorflow:Saving checkpoints for 4934 into taxi_trained/model.ckpt.
[2018-01-28 07:10:11,469] {tf_logging.py:82} INFO - Saving checkpoints for 4934 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 42963.52, step = 4934
[2018-01-28 07:10:11,550] {tf_logging.py:82} INFO - loss = 42963.52, step = 4934
INFO:tensorflow:Saving checkpoints for 5000 into taxi_trained/model.ckpt.
[2018-01-28 07:10:16,371] {tf_logging.py:82} INFO - Saving checkpoints for 5000 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38572.33.
[2018-01-28 07:10:16,457] {tf_logging.py:82} INFO - Loss for final step: 38572.33.
INFO:tensorflow:Starting evaluation at 2018-01-28-07:10:16
[2018-01-28 07:10:16,592] {tf_logging.py:82} INFO - Starting evaluation at 2018-01-28-07:10:16
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-5000
[2018-01-28 07:10:16,653] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-5000
INFO:tensorflow:Finished evaluation at 2018-01-28-07:10:17
[2018-01-28 07:10:17,059] {tf_logging.py:82} INFO - Finished evaluation at 2018-01-28-07:10:17
INFO:tensorflow:Saving dict for global step 5000: average_loss = 109.39484, global_step = 5000, loss = 45535.6
[2018-01-28 07:10:17,061] {tf_logging.py:82} INFO - Saving dict for global step 5000: average_loss = 109.39484, global_step = 5000, loss = 45535.6
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-5000
[2018-01-28 07:10:17,155] {tf_logging.py:82} INFO - Restoring parameters from taxi_trained/model.ckpt-5000
INFO:tensorflow:Assets added to graph.
[2018-01-28 07:10:17,190] {tf_logging.py:82} INFO - Assets added to graph.
INFO:tensorflow:No assets to write.
[2018-01-28 07:10:17,194] {tf_logging.py:82} INFO - No assets to write.
INFO:tensorflow:SavedModel written to: taxi_trained/export/exporter/temp-1517123417/saved_model.pb
[2018-01-28 07:10:17,383] {tf_logging.py:82} INFO - SavedModel written to: taxi_trained/export/exporter/temp-1517123417/saved_model.pb
###Markdown
Monitoring with TensorBoard
###Code
from google.datalab.ml import TensorBoard
TensorBoard().start('./taxi_trained')
TensorBoard().list()
# to stop TensorBoard
TensorBoard().stop(9049)
print('stopped TensorBoard')
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
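###Markdown
Before wiring this into an estimator, it can help to pull a single batch through ```read_dataset``` and confirm the parsed features look right. The cell below is a minimal sanity-check sketch; it assumes ```./taxi-train.csv``` exists locally and TF 1.x, where ```tf.Session``` and ```make_one_shot_iterator``` are available.
###Code
# Sanity check: pull one small batch out of the input pipeline and inspect it.
# (A sketch only; assumes ./taxi-train.csv is present in the working directory.)
with tf.Graph().as_default():
    ds = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.EVAL, batch_size = 4)
    features, label = ds.make_one_shot_iterator().get_next()
    with tf.Session() as sess:
        feature_values, label_values = sess.run([features, label])
        print({name: values.shape for name, values in feature_values.items()})
        print(label_values)
###Output
_____no_output_____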
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
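###Markdown
For reference, each prediction instance sent to the deployed model must carry one value per placeholder defined above. The dictionary below is a hypothetical example instance (made-up coordinates), not part of the original lab.
###Code
# Hypothetical example of a single prediction instance matching the serving input function above.
# The keys must match the placeholder names exactly.
example_instance = {
    'pickuplon' : -73.99,
    'pickuplat' : 40.75,
    'dropofflon' : -73.97,
    'dropofflat' : 40.73,
    'passengers' : 2.0
}
print(example_instance)
###Output
_____no_output_____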
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = './taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
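###Markdown
Once training finishes, a quick way to check the result is to reload the estimator from the output directory and compute RMSE on the validation set, mirroring the evaluation done later in this document (a sketch; assumes ```./taxi-valid.csv``` and the ```feature_cols``` defined above).
###Code
# Reload the trained model from OUTDIR and report RMSE on the validation set.
trained = tf.estimator.LinearRegressor(
    model_dir = OUTDIR,
    feature_columns = feature_cols)
metrics = trained.evaluate(input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL))
print('RMSE on validation set = {}'.format(np.sqrt(metrics['average_loss'])))
###Output
_____no_output_____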
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
pids_df = TensorBoard.list()
if not pids_df.empty:
for pid in pids_df['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
#model taxi-linear taxi-dnn
import tensorflow as tf
import numpy as np
import shutil
#import datalab.bigquery as bq
#from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
#model taxi-linear taxi-dnn
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
#model taxi-linear taxi-dnn
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
#model taxi-linear taxi-dnn
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
#model taxi-dnn
# Create an estimator that we are going to train and evaluate
def train_and_evaluate(job_dir: ['GCS or local path to write checkpoints and export models', str],
train_data_paths: ['GCS or local path to training data', str],
eval_data_paths: ['GCS or local path to evaluation data', str],
train_steps: ['Steps to run the training job for', int],
train_batch_size: ['Batch size for training steps', int] = 512,
eval_batch_size: ['Batch size for evaluation steps', int] = 512,
eval_delay_secs: ['How long to wait before running first evaluation', int] = 1,
                       eval_throttle_secs: ['Minimum seconds between evaluations', int] = 10,
                       eval_steps: ['Number of steps to run evaluation for at each checkpoint', int] = None,
hidden_units: ['List of hidden layer sizes to use for DNN feature columns', int, '+']
= [128, 32, 4]
):
estimator = tf.estimator.DNNRegressor(
model_dir = job_dir,
feature_columns = feature_cols,
hidden_units = hidden_units)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: read_dataset(train_data_paths,
batch_size = train_batch_size,
mode = tf.estimator.ModeKeys.TRAIN),
max_steps = train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: read_dataset(eval_data_paths,
batch_size = eval_batch_size,
mode = tf.estimator.ModeKeys.EVAL),
steps = eval_steps,
start_delay_secs = eval_delay_secs,
throttle_secs = eval_throttle_secs,
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
#model taxi-linear
def train_and_evaluate(job_dir: ['GCS or local path to write checkpoints and export models', str],
train_data_paths: ['GCS or local path to training data', str],
eval_data_paths: ['GCS or local path to evaluation data', str],
train_steps: ['Steps to run the training job for', int],
train_batch_size: ['Batch size for training steps', int] = 512,
eval_batch_size: ['Batch size for evaluation steps', int] = 512,
eval_delay_secs: ['How long to wait before running first evaluation', int] = 10,
                       eval_throttle_secs: ['Minimum seconds between evaluations', int] = 10,
                       eval_steps: ['Number of steps to run evaluation for at each checkpoint', int] = None
):
estimator = tf.estimator.LinearRegressor(
model_dir = job_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset(train_data_paths,
batch_size = train_batch_size,
mode = tf.estimator.ModeKeys.TRAIN),
max_steps = train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset(eval_data_paths,
batch_size = eval_batch_size,
mode = tf.estimator.ModeKeys.EVAL),
steps = eval_steps,
start_delay_secs = eval_delay_secs,
throttle_secs = eval_throttle_secs,
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
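###Markdown
The annotated signatures above (```['help text', type]```, optionally with a nargs marker) look intended for generating a command-line interface automatically, e.g. for a task.py entry point. The helper below is a hypothetical sketch of how such annotations could be mapped onto argparse; it is not part of the original notebook and assumes Python 3 for ```inspect.signature```.
###Code
import argparse
import inspect
def make_parser(fn):
    # Build an argparse parser from annotations shaped like ['help', type] or ['help', type, nargs].
    parser = argparse.ArgumentParser(description = fn.__name__)
    for name, param in inspect.signature(fn).parameters.items():
        ann = param.annotation
        kwargs = {'help': ann[0], 'type': ann[1]}
        if len(ann) > 2:
            kwargs['nargs'] = ann[2]
        if param.default is inspect.Parameter.empty:
            kwargs['required'] = True
        else:
            kwargs['default'] = param.default
        parser.add_argument('--' + name, **kwargs)
    return parser
# Usage sketch (e.g. from a command line):
# args = make_parser(train_and_evaluate).parse_args()
# train_and_evaluate(**vars(args))
###Output
_____no_output_____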
###Markdown
AG: reproduce bug with events and eval dir. Run it the first time in the notebook - and the events file + eval dir are created in each job_dir. Run it another time (for the same paths) and they are missing.
###Code
shutil.rmtree('taxi_trained', ignore_errors=True)
train_and_evaluate(job_dir='taxi_trained',
train_data_paths='taxi-train.csv',
eval_data_paths='taxi-valid.csv',
train_steps=500
)
shutil.rmtree('taxi_trained2', ignore_errors=True)
train_and_evaluate(job_dir='taxi_trained2',
train_data_paths='taxi-train.csv',
eval_data_paths='taxi-valid.csv',
train_steps=500
)
shutil.rmtree('taxi_trained3', ignore_errors=True)
train_and_evaluate(job_dir='taxi_trained3',
train_data_paths='taxi-train.csv',
eval_data_paths='taxi-valid.csv',
train_steps=500
)
###Output
_____no_output_____
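###Markdown
A quick way to make the symptom visible is to look for the TensorBoard events files and the ```eval``` directory in each job_dir after the runs above (a minimal sketch; directory names as used in the previous cell).
###Code
import glob
# List TensorBoard events files and eval/ contents for each job_dir used above.
for job_dir in ['taxi_trained', 'taxi_trained2', 'taxi_trained3']:
    print('{}: events={}, eval={}'.format(
        job_dir,
        glob.glob(job_dir + '/events.out.tfevents.*'),
        glob.glob(job_dir + '/eval/*')))
###Output
_____no_output_____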
###Markdown
AG: bug. Tried to use the original Google code just in case I screwed up.
###Code
def train_and_evaluate_old(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
This will produce events. When run a second time - in this cell (or in the next one) - it won't produce events.
###Code
OUTDIR = './taxi_trained_old'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate_old(OUTDIR, num_train_steps = 500)
OUTDIR = './taxi_trained_old'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate_old(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
This will produce events again when run the first time. Note the new path.
###Code
OUTDIR = './taxi_trained_old2'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate_old(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
no events:
###Code
OUTDIR = './taxi_trained_old2'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate_old(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress. We link /var/log/tensorboard
###Code
%%bash
mkdir /var/log/tensorboard 2>/dev/null
ln -sfT `pwd`/taxi_trained /var/log/tensorboard/taxi_trained
ls -l /var/log/tensorboard/*
estimator = tf.estimator.LinearRegressor(
model_dir = OUTDIR,
      feature_columns = feature_cols)
metrics = estimator.evaluate(lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
# to list Tensorboard instances
pids_df = TensorBoard().list()
pids_df[~pids_df['port'].isnull()]
pids_df = TensorBoard.list()
pids_df = pids_df[~pids_df['port'].isnull()]
if not pids_df.empty:
for pid in pids_df['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Start the TensorBoard by opening up a new Launcher (File > New Launcher) and selecting TensorBoard.
###Code
OUTDIR = './taxi_trained'
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities.
###Code
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.compat.v1.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.compat.v1.placeholder(tf.float32, [None]),
'pickuplat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflon' : tf.compat.v1.placeholder(tf.float32, [None]),
'passengers' : tf.compat.v1.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Run training
###Code
OUTDIR = './taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.compat.v1.summary.FileWriterCache.clear()
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
# Create list of file names that match "glob" pattern (i.e. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = './taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
pids_df = TensorBoard.list()
if not pids_df.empty:
for pid in pids_df['pid']:
TensorBoard().stop(pid)
    print('Stopped TensorBoard with pid {}'.format(pid))
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
        # Create list of file names that match "glob" pattern (e.g. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
def get_train():
return read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
def get_valid():
return read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL)
def get_test():
return read_dataset('./taxi-test.csv', mode = tf.estimator.ModeKeys.EVAL)
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
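###Markdown
The `get_train()`, `get_valid()` and `get_test()` wrappers above return input functions, so they can be handed straight to an Estimator. The cell below is a minimal smoke test under the assumption that the CSV files from Lab 1a are in the working directory; it uses a throwaway `model_dir` so the real output directory stays clean.
###Code
# Minimal smoke test: train for a handful of steps and evaluate, just to confirm
# the input plumbing works end to end.
smoke_model = tf.estimator.LinearRegressor(feature_columns = feature_cols,
                                           model_dir = 'taxi_smoke_test')
smoke_model.train(input_fn = get_train(), steps = 10)
print(smoke_model.evaluate(input_fn = get_valid()))
###Output
_____no_output_____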
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = 'taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 2000)
###Output
_____no_output_____
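###Markdown
The `LatestExporter` used above writes a timestamped SavedModel under `OUTDIR/export/exporter/` each time evaluation runs. The cell below is a small sketch to locate the newest export on local disk, assuming training above completed and at least one export was written.
###Code
# Locate the most recent SavedModel export written by the LatestExporter above.
import os
export_base = os.path.join(OUTDIR, 'export', 'exporter')
exports = sorted(os.listdir(export_base))
print('Latest export: {}'.format(os.path.join(export_base, exports[-1])))
###Output
_____no_output_____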
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
# to stop TensorBoard, fill in the correct pid below
TensorBoard().stop(27855)
print("Stopped Tensorboard")
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
from google.datalab.ml import TensorBoard
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
    # Create list of file names that match "glob" pattern (e.g. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
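###Markdown
Note the change from the previous version: `read_dataset()` now returns the `tf.data.Dataset` itself rather than an input function. Estimators still expect a zero-argument `input_fn`, which is why the calls are wrapped in a `lambda` when the train and eval specs are built below. A quick illustration of the pattern:
###Code
# Because read_dataset() now returns a Dataset rather than an input_fn, wrap the
# call in a lambda wherever an input_fn is expected -- this mirrors how the
# TrainSpec and EvalSpec are built below.
train_input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN)
###Output
_____no_output_____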
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
###Output
_____no_output_____
###Markdown
Monitoring with TensorBoard Use "refresh" in Tensorboard during training to see progress.
###Code
OUTDIR = './taxi_trained'
TensorBoard().start(OUTDIR)
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
###Markdown
You can now shut Tensorboard down
###Code
# to list Tensorboard instances
TensorBoard().list()
pids_df = TensorBoard.list()
if not pids_df.empty:
for pid in pids_df['pid']:
TensorBoard().stop(pid)
print('Stopped TensorBoard with pid {}'.format(pid))
###Output
_____no_output_____
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
from google.cloud import bigquery
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
# No need to features.pop('key') since it is not specified in the INPUT_COLUMNS.
# The key passes through the graph unused.
return features, label
    # Create list of file names that match "glob" pattern (e.g. data_file_*.csv)
filenames_dataset = tf.data.Dataset.list_files(filename)
# Read lines from text files
textlines_dataset = filenames_dataset.flat_map(tf.data.TextLineDataset)
# Parse text lines as comma-separated values (CSV)
dataset = textlines_dataset.map(decode_csv)
# Note:
# use tf.data.Dataset.flat_map to apply one to many transformations (here: filename -> text lines)
# use tf.data.Dataset.map to apply one to one transformations (here: text line -> feature list)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset
###Output
_____no_output_____
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
Serving input function
###Code
# Defines the expected shape of the JSON feed that the model
# will receive once deployed behind a REST API in production.
def serving_input_fn():
json_feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
    # You can transform data here from the input format to the format expected by your model.
features = json_feature_placeholders # no transformation needed
return tf.estimator.export.ServingInputReceiver(features, json_feature_placeholders)
###Output
_____no_output_____
###Markdown
tf.estimator.train_and_evaluate
###Code
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = lambda: read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
OUTDIR = './taxi_trained'
###Output
_____no_output_____
###Markdown
Run training
###Code
# Run training
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR, num_train_steps = 500)
###Output
_____no_output_____
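###Markdown
With checkpoints now sitting in `OUTDIR`, the trained model can be reloaded and used for prediction directly, without going through the exported SavedModel. The cell below is a minimal sketch, assuming the training cell above completed and `./taxi-valid.csv` is present.
###Code
# Reload the trained LinearRegressor from its checkpoints and predict a few fares.
import itertools
trained = tf.estimator.LinearRegressor(model_dir = OUTDIR, feature_columns = feature_cols)
predictions = trained.predict(
    input_fn = lambda: read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL))
for pred in itertools.islice(predictions, 5):
    print(pred['predictions'])
###Output
_____no_output_____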
###Markdown
2d. Distributed training and monitoring In this notebook, we refactor to call ```train_and_evaluate``` instead of hand-coding our ML pipeline. This allows us to carry out evaluation as part of our training loop instead of as a separate step. It also adds in failure-handling that is necessary for distributed training capabilities. We also use TensorBoard to monitor the training.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
###Output
/usr/local/envs/py3env/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Input Read data created in Lab1a, but this time make it more general, so that we are reading in batches. Instead of using Pandas, we will add a filename queue to the TensorFlow graph.
###Code
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
###Output
_____no_output_____
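###Markdown
This variant resolves the file pattern with `tf.gfile.Glob`, which works for local paths as well as `gs://` URIs. As a quick sanity check (assuming the CSVs from Lab 1a are in the working directory), you can print what a pattern matches:
###Code
# Show which files the glob pattern picks up before training on them.
print(tf.gfile.Glob('./taxi-*.csv'))
###Output
_____no_output_____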
###Markdown
Create features out of input data For now, pass these through. (same as previous lab)
###Code
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
###Output
_____no_output_____
###Markdown
train_and_evaluate
###Code
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.placeholder(tf.float32, [None]),
'pickuplat' : tf.placeholder(tf.float32, [None]),
'dropofflat' : tf.placeholder(tf.float32, [None]),
'dropofflon' : tf.placeholder(tf.float32, [None]),
'passengers' : tf.placeholder(tf.float32, [None]),
}
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 5000)
###Output
INFO:tensorflow:Using default config.
INFO:tensorflow:Using config: {'_session_config': None, '_num_ps_replicas': 0, '_keep_checkpoint_max': 5, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_is_chief': True, '_keep_checkpoint_every_n_hours': 10000, '_tf_random_seed': None, '_log_step_count_steps': 100, '_master': '', '_task_type': 'worker', '_num_worker_replicas': 1, '_model_dir': 'taxi_trained', '_task_id': 0, '_save_summary_steps': 100, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f706bb59748>}
INFO:tensorflow:Running training and evaluation locally (non-distributed).
INFO:tensorflow:Start train and evaluate loop. The evaluate will happen after 10 secs (eval_spec.throttle_secs) or training is finished.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Saving checkpoints for 1 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 96403.4, step = 1
INFO:tensorflow:global_step/sec: 27.2453
INFO:tensorflow:loss = 44517.793, step = 101 (3.675 sec)
INFO:tensorflow:global_step/sec: 23.0761
INFO:tensorflow:loss = 40268.633, step = 201 (4.335 sec)
INFO:tensorflow:Saving checkpoints for 228 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 42678.17.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:36:35
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-228
INFO:tensorflow:Finished evaluation at 2018-03-16-07:36:36
INFO:tensorflow:Saving dict for global step 228: average_loss = 109.463234, global_step = 228, loss = 45564.07
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-228
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185796'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-228
INFO:tensorflow:Saving checkpoints for 229 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 36881.656, step = 229
INFO:tensorflow:global_step/sec: 27.6988
INFO:tensorflow:loss = 37488.504, step = 329 (3.616 sec)
INFO:tensorflow:global_step/sec: 25.7956
INFO:tensorflow:loss = 38757.105, step = 429 (3.877 sec)
INFO:tensorflow:Saving checkpoints for 471 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 30782.748.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:36:47
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-471
INFO:tensorflow:Finished evaluation at 2018-03-16-07:36:47
INFO:tensorflow:Saving dict for global step 471: average_loss = 109.19711, global_step = 471, loss = 45453.297
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-471
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185807'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-471
INFO:tensorflow:Saving checkpoints for 472 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 39356.14, step = 472
INFO:tensorflow:global_step/sec: 28.3546
INFO:tensorflow:loss = 32824.562, step = 572 (3.532 sec)
INFO:tensorflow:global_step/sec: 26.308
INFO:tensorflow:loss = 36922.367, step = 672 (3.802 sec)
INFO:tensorflow:Saving checkpoints for 719 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 50809.273.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:36:58
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-719
INFO:tensorflow:Finished evaluation at 2018-03-16-07:36:58
INFO:tensorflow:Saving dict for global step 719: average_loss = 109.03941, global_step = 719, loss = 45387.656
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-719
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185819'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-719
INFO:tensorflow:Saving checkpoints for 720 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 53386.43, step = 720
INFO:tensorflow:global_step/sec: 29.8565
INFO:tensorflow:loss = 35187.754, step = 820 (3.355 sec)
INFO:tensorflow:global_step/sec: 26.4148
INFO:tensorflow:loss = 27025.826, step = 920 (3.786 sec)
INFO:tensorflow:Saving checkpoints for 977 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46512.934.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:37:10
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-977
INFO:tensorflow:Finished evaluation at 2018-03-16-07:37:10
INFO:tensorflow:Saving dict for global step 977: average_loss = 109.15176, global_step = 977, loss = 45434.418
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-977
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185830'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-977
INFO:tensorflow:Saving checkpoints for 978 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 50723.74, step = 978
INFO:tensorflow:global_step/sec: 29.5905
INFO:tensorflow:loss = 37930.445, step = 1078 (3.386 sec)
INFO:tensorflow:global_step/sec: 25.0398
INFO:tensorflow:loss = 38155.78, step = 1178 (3.993 sec)
INFO:tensorflow:Saving checkpoints for 1220 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 54262.53.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:37:21
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1220
INFO:tensorflow:Finished evaluation at 2018-03-16-07:37:21
INFO:tensorflow:Saving dict for global step 1220: average_loss = 108.83315, global_step = 1220, loss = 45301.8
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1220
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185841'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1220
INFO:tensorflow:Saving checkpoints for 1221 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 49367.312, step = 1221
INFO:tensorflow:global_step/sec: 27.7907
INFO:tensorflow:loss = 39683.555, step = 1321 (3.603 sec)
INFO:tensorflow:global_step/sec: 27.2513
INFO:tensorflow:loss = 41785.336, step = 1421 (3.671 sec)
INFO:tensorflow:Saving checkpoints for 1466 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 34080.97.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:37:32
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1466
INFO:tensorflow:Finished evaluation at 2018-03-16-07:37:32
INFO:tensorflow:Saving dict for global step 1466: average_loss = 109.95677, global_step = 1466, loss = 45769.508
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1466
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185852'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1466
INFO:tensorflow:Saving checkpoints for 1467 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43065.727, step = 1467
INFO:tensorflow:global_step/sec: 27.6421
INFO:tensorflow:loss = 45107.773, step = 1567 (3.624 sec)
INFO:tensorflow:global_step/sec: 26.9207
INFO:tensorflow:loss = 54588.695, step = 1667 (3.715 sec)
INFO:tensorflow:Saving checkpoints for 1709 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 52919.12.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:37:43
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1709
INFO:tensorflow:Finished evaluation at 2018-03-16-07:37:43
INFO:tensorflow:Saving dict for global step 1709: average_loss = 108.89669, global_step = 1709, loss = 45328.246
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1709
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185864'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1709
INFO:tensorflow:Saving checkpoints for 1710 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43845.746, step = 1710
INFO:tensorflow:global_step/sec: 29.1614
INFO:tensorflow:loss = 52611.312, step = 1810 (3.433 sec)
INFO:tensorflow:global_step/sec: 27.7674
INFO:tensorflow:loss = 43767.562, step = 1910 (3.604 sec)
INFO:tensorflow:Saving checkpoints for 1968 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 46959.33.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:37:54
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1968
INFO:tensorflow:Finished evaluation at 2018-03-16-07:37:55
INFO:tensorflow:Saving dict for global step 1968: average_loss = 108.913185, global_step = 1968, loss = 45335.113
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1968
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185875'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-1968
INFO:tensorflow:Saving checkpoints for 1969 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 34575.69, step = 1969
INFO:tensorflow:global_step/sec: 29.8332
INFO:tensorflow:loss = 36910.242, step = 2069 (3.359 sec)
INFO:tensorflow:global_step/sec: 24.4628
INFO:tensorflow:loss = 36987.086, step = 2169 (4.087 sec)
INFO:tensorflow:Saving checkpoints for 2213 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 40878.008.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:38:06
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2213
INFO:tensorflow:Finished evaluation at 2018-03-16-07:38:06
INFO:tensorflow:Saving dict for global step 2213: average_loss = 109.02315, global_step = 2213, loss = 45380.887
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2213
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185886'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2213
INFO:tensorflow:Saving checkpoints for 2214 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 36876.516, step = 2214
INFO:tensorflow:global_step/sec: 28.9135
INFO:tensorflow:loss = 45291.77, step = 2314 (3.463 sec)
INFO:tensorflow:global_step/sec: 27.6705
INFO:tensorflow:loss = 37378.543, step = 2414 (3.616 sec)
INFO:tensorflow:Saving checkpoints for 2471 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 56429.508.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:38:17
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2471
INFO:tensorflow:Finished evaluation at 2018-03-16-07:38:17
INFO:tensorflow:Saving dict for global step 2471: average_loss = 108.8264, global_step = 2471, loss = 45298.99
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2471
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185897'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2471
INFO:tensorflow:Saving checkpoints for 2472 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 37852.016, step = 2472
INFO:tensorflow:global_step/sec: 22.4313
INFO:tensorflow:loss = 39188.094, step = 2572 (4.464 sec)
INFO:tensorflow:Saving checkpoints for 2670 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 31181.086.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:38:28
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2670
INFO:tensorflow:Finished evaluation at 2018-03-16-07:38:29
INFO:tensorflow:Saving dict for global step 2670: average_loss = 109.94571, global_step = 2670, loss = 45764.902
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2670
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185909'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2670
INFO:tensorflow:Saving checkpoints for 2671 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 51298.324, step = 2671
INFO:tensorflow:global_step/sec: 30.2636
INFO:tensorflow:loss = 38669.633, step = 2771 (3.311 sec)
INFO:tensorflow:global_step/sec: 26.6909
INFO:tensorflow:loss = 39798.055, step = 2871 (3.746 sec)
INFO:tensorflow:Saving checkpoints for 2928 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 39105.7.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:38:40
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2928
INFO:tensorflow:Finished evaluation at 2018-03-16-07:38:40
INFO:tensorflow:Saving dict for global step 2928: average_loss = 109.499825, global_step = 2928, loss = 45579.3
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2928
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185920'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-2928
INFO:tensorflow:Saving checkpoints for 2929 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 33287.375, step = 2929
INFO:tensorflow:global_step/sec: 26.5375
INFO:tensorflow:loss = 53822.54, step = 3029 (3.774 sec)
INFO:tensorflow:global_step/sec: 24.3289
INFO:tensorflow:loss = 38563.07, step = 3129 (4.111 sec)
INFO:tensorflow:Saving checkpoints for 3156 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 42968.992.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:38:51
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3156
INFO:tensorflow:Finished evaluation at 2018-03-16-07:38:52
INFO:tensorflow:Saving dict for global step 3156: average_loss = 109.03826, global_step = 3156, loss = 45387.176
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3156
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185932'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3156
INFO:tensorflow:Saving checkpoints for 3157 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 37841.797, step = 3157
INFO:tensorflow:global_step/sec: 27.174
INFO:tensorflow:loss = 30988.283, step = 3257 (3.686 sec)
INFO:tensorflow:global_step/sec: 27.4745
INFO:tensorflow:loss = 43615.758, step = 3357 (3.639 sec)
INFO:tensorflow:Saving checkpoints for 3402 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 36789.164.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:03
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3402
INFO:tensorflow:Finished evaluation at 2018-03-16-07:39:03
INFO:tensorflow:Saving dict for global step 3402: average_loss = 109.53729, global_step = 3402, loss = 45594.9
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3402
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185943'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3402
INFO:tensorflow:Saving checkpoints for 3403 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43076.848, step = 3403
INFO:tensorflow:global_step/sec: 30.1919
INFO:tensorflow:loss = 44529.31, step = 3503 (3.317 sec)
INFO:tensorflow:global_step/sec: 28.2039
INFO:tensorflow:loss = 47712.055, step = 3603 (3.545 sec)
INFO:tensorflow:Saving checkpoints for 3676 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 54933.613.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:14
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3676
INFO:tensorflow:Finished evaluation at 2018-03-16-07:39:14
INFO:tensorflow:Saving dict for global step 3676: average_loss = 108.920135, global_step = 3676, loss = 45338.008
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3676
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185954'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3676
INFO:tensorflow:Saving checkpoints for 3677 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 35915.086, step = 3677
INFO:tensorflow:global_step/sec: 26.4016
INFO:tensorflow:loss = 37165.92, step = 3777 (3.793 sec)
INFO:tensorflow:global_step/sec: 26.4675
INFO:tensorflow:loss = 51227.14, step = 3877 (3.780 sec)
INFO:tensorflow:Saving checkpoints for 3920 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 35460.15.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:25
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3920
INFO:tensorflow:Finished evaluation at 2018-03-16-07:39:26
INFO:tensorflow:Saving dict for global step 3920: average_loss = 109.71915, global_step = 3920, loss = 45670.594
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3920
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185966'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-3920
INFO:tensorflow:Saving checkpoints for 3921 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 46566.55, step = 3921
INFO:tensorflow:global_step/sec: 29.5612
INFO:tensorflow:loss = 46561.39, step = 4021 (3.388 sec)
INFO:tensorflow:global_step/sec: 28.6238
INFO:tensorflow:loss = 42352.984, step = 4121 (3.494 sec)
INFO:tensorflow:Saving checkpoints for 4181 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 40400.42.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:36
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4181
INFO:tensorflow:Finished evaluation at 2018-03-16-07:39:37
INFO:tensorflow:Saving dict for global step 4181: average_loss = 109.197525, global_step = 4181, loss = 45453.47
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4181
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185977'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4181
INFO:tensorflow:Saving checkpoints for 4182 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 45964.12, step = 4182
INFO:tensorflow:global_step/sec: 30.8992
INFO:tensorflow:loss = 41314.46, step = 4282 (3.242 sec)
INFO:tensorflow:global_step/sec: 28.9975
INFO:tensorflow:loss = 46196.78, step = 4382 (3.449 sec)
INFO:tensorflow:Saving checkpoints for 4443 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 51281.78.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:48
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4443
INFO:tensorflow:Finished evaluation at 2018-03-16-07:39:48
INFO:tensorflow:Saving dict for global step 4443: average_loss = 108.830605, global_step = 4443, loss = 45300.74
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4443
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521185988'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4443
INFO:tensorflow:Saving checkpoints for 4444 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 30826.887, step = 4444
INFO:tensorflow:global_step/sec: 29.8337
INFO:tensorflow:loss = 44386.086, step = 4544 (3.356 sec)
INFO:tensorflow:global_step/sec: 24.5688
INFO:tensorflow:loss = 45148.83, step = 4644 (4.072 sec)
INFO:tensorflow:Saving checkpoints for 4687 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 38833.977.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:39:59
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4687
INFO:tensorflow:Finished evaluation at 2018-03-16-07:40:00
INFO:tensorflow:Saving dict for global step 4687: average_loss = 109.14033, global_step = 4687, loss = 45429.66
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4687
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521186000'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4687
INFO:tensorflow:Saving checkpoints for 4688 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 39568.96, step = 4688
INFO:tensorflow:global_step/sec: 31.6718
INFO:tensorflow:loss = 38586.93, step = 4788 (3.163 sec)
INFO:tensorflow:global_step/sec: 29.5189
INFO:tensorflow:loss = 56042.28, step = 4888 (3.388 sec)
INFO:tensorflow:Saving checkpoints for 4961 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 51117.42.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:40:11
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4961
INFO:tensorflow:Finished evaluation at 2018-03-16-07:40:11
INFO:tensorflow:Saving dict for global step 4961: average_loss = 108.84294, global_step = 4961, loss = 45305.875
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4961
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521186011'/saved_model.pb"
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-4961
INFO:tensorflow:Saving checkpoints for 4962 into taxi_trained/model.ckpt.
INFO:tensorflow:loss = 43951.383, step = 4962
INFO:tensorflow:Saving checkpoints for 5000 into taxi_trained/model.ckpt.
INFO:tensorflow:Loss for final step: 34604.707.
INFO:tensorflow:Starting evaluation at 2018-03-16-07:40:15
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-5000
INFO:tensorflow:Finished evaluation at 2018-03-16-07:40:15
INFO:tensorflow:Saving dict for global step 5000: average_loss = 109.54296, global_step = 5000, loss = 45597.258
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'regression' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'serving_default' : Regression input must be a single string Tensor; got {'pickuplon': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'pickuplat': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'passengers': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>, 'dropofflon': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'dropofflat': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
INFO:tensorflow:Restoring parameters from taxi_trained/model.ckpt-5000
INFO:tensorflow:Assets added to graph.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: b"taxi_trained/export/exporter/temp-b'1521186015'/saved_model.pb"
###Markdown
Monitoring with TensorBoard
###Code
from google.datalab.ml import TensorBoard
TensorBoard().start('./taxi_trained')
TensorBoard().list()
# to stop TensorBoard
TensorBoard().stop(9049)
print ('stopped TensorBoard')
###Output
stopped TensorBoard
2021_08_28/IDS_LAB_5.ipynb | ###Markdown
Q2
###Code
df_2018[['id','market_cap_usd']].count()
###Output
_____no_output_____
###Markdown
Q3
###Code
df_2017[['id','market_cap_usd']].count()
df_2017 = df_2017[df_2017["market_cap_usd"]>0]
df_2017[['id','market_cap_usd']].count()
###Output
_____no_output_____
###Markdown
Q4
###Code
top_ten = df_2018.head(10)
top_ten.plot(x='id',y = 'market_cap_usd', kind = 'bar')
top_ten = df_2018.head(10)
sum_all = top_ten[['market_cap_usd']].sum().iloc[0]
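# convert market_cap_usd to each coin's share (%) of the top-ten total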
top_ten['market_cap_usd'] = (top_ten['market_cap_usd'] / sum_all) * 100
top_ten[['market_cap_usd']]
top_ten.plot(x='id',y='market_cap_usd',kind= 'bar')
###Output
_____no_output_____
###Markdown
Q5
###Code
sns.set_theme(style="whitegrid")
sns.set(rc={"figure.figsize":(12, 8)})
sns.barplot(x='id',y='market_cap_usd',data = top_ten)
###Output
_____no_output_____
###Markdown
Q5
###Code
top_ten.plot(kind='bar',x='id',y='market_cap_usd',figsize=(12,6),grid=True,log=10,color= ['#e0e0e0','green','#e0e0e0','k','k','pink','brown','#e0e0e0','orange','green'])
top_ten.head(5)
###Output
_____no_output_____
###Markdown
Q6
###Code
sns.barplot(x='id',y='24h_volume_usd',data = top_ten)
top_ten[['24h_volume_usd']]
df_2017[['id','24h_volume_usd']].count()
df_2017 = df_2017[df_2017["24h_volume_usd"]>0]
df_2017[['id','24h_volume_usd']].count()
sum_all = top_ten[['24h_volume_usd']].sum().iloc[0]
top_ten['24h_volume_usd'] = (top_ten['24h_volume_usd'] / sum_all) * 100
sns.barplot(x='id',y='24h_volume_usd', data = top_ten)
# top_ten.plot(x='id',y='24h_volume_usd',kind= 'bar')
top_ten[['percent_change_7d']]
df_2017[['id','percent_change_7d']].count()
df_2017 = df_2017[df_2017["percent_change_7d"]>0]
df_2017[['id','percent_change_7d']].count()
sum_all = top_ten[['24h_volume_usd']].sum().iloc[0]
top_ten['24h_volume_usd'] = (top_ten['24h_volume_usd'] / sum_all) * 100
sns.barplot(x='id',y='percent_change_7d',data = top_ten)
###Output
_____no_output_____
###Markdown
Q7
###Code
# top_ten['24h_volume_usd'[:10]].plot(x='id',y='24h_volume_usd',kind= 'bar')
def top10_subplot():
    return df_2017['24h_volume_usd'][:10].plot.bar(color='darkred')
top10_subplot()
###Output
_____no_output_____
###Markdown
Q8
###Code
df_2017['percent_change_7d'].plot(kind='bar',grid=True, figsize=(15,8))
# df_2017['percent_change_7d'].nsmallest(10).plot(kind='bar',grid=True, figsize=(15,8))
df_2017["percent_change_7d"] = np.where(df_2017["percent_change_7d"]<0, 'red', 'green')
df_2017["percent_change_7d"]
###Output
_____no_output_____
###Markdown
Q9
###Code
df_2017
df_2017 = df_2017.nsmallest(10, 'market_cap_usd')
sns.histplot(x='id',y='market_cap_usd',data = df_2017)
###Output
_____no_output_____ |
notebooks/exercises/05.Cavity.Flow.Exercises.ipynb | ###Markdown
Exercise 1 JIT the pressure poisson equationThe equation we need to unroll is given by \begin{equation}p_{i,j}^{n} = \frac{1}{4}\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}+p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) - b\end{equation}and recall that `b` is already computed, so no need to worry about unrolling that. We've also filled in the boundary conditions, so don't worry about those. (don't forget to decorate your function!)
###Code
import numpy
from numba import jit
def pressure_poisson(p, b, l2_target=1e-4):
I, J = b.shape
iter_diff = l2_target + 1
n = 0
while iter_diff > l2_target and n <= 500:
pn = p.copy()
#Your code here
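        # Illustrative sketch only -- one possible answer, assuming @jit is
        # applied to this function: unrolled Jacobi update for the interior points.
        for i in range(1, I - 1):
            for j in range(1, J - 1):
                p[i, j] = .25 * (pn[i + 1, j] + pn[i - 1, j] +
                                 pn[i, j + 1] + pn[i, j - 1]) - b[i, j]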
#boundary conditions
for i in range(I):
p[i, 0] = p[i, 1]
p[i, -1] = 0
for j in range(J):
p[0, j] = p[1, j]
p[-1, j] = p[-2, j]
if n % 10 == 0:
iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
n += 1
return p
import pickle
from snippets.ns_helper import cavity_flow, velocity_term, quiver_plot
def run_cavity():
nx = 41
with open('../IC.pickle', 'rb') as f:
u, v, p, b = pickle.load(f)
dx = 2 / (nx - 1)
dt = .005
nt = 1000
u, v, p = cavity_flow(u, v, p, nt, dt, dx,
velocity_term,
pressure_poisson,
rtol=1e-4)
return u, v, p
un, vn, pn = run_cavity()
%timeit run_cavity()
with open('numpy_ans.pickle', 'rb') as f:
u, v, p = pickle.load(f)
assert numpy.allclose(u, un)
assert numpy.allclose(v, vn)
assert numpy.allclose(p, pn)
###Output
_____no_output_____
###Markdown
Exercise 1 JIT the pressure poisson equationThe equation we need to unroll is given by \begin{equation}p_{i,j}^{n} = \frac{1}{4}\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}+p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) - b\end{equation}and recall that `b` is already computed, so no need to worry about unrolling that. We've also filled in the boundary conditions, so don't worry about those. (don't forget to decorate your function!)
###Code
import numpy
from numba import jit
def pressure_poisson(p, b, l2_target=1e-4):
I, J = b.shape
iter_diff = l2_target + 1
n = 0
    while iter_diff > l2_target and n <= 500:
pn = p.copy()
#Your code here
#boundary conditions
for i in range(I):
p[i, 0] = p[i, 1]
p[i, -1] = 0
for j in range(J):
p[0, j] = p[1, j]
p[-1, j] = p[-2, j]
if n % 10 == 0:
iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
n += 1
return p
import pickle
from snippets.ns_helper import cavity_flow, velocity_term, quiver_plot
def run_cavity():
nx = 41
with open('IC.pickle', 'rb') as f:
u, v, p, b = pickle.load(f)
dx = 2 / (nx - 1)
dt = .005
nt = 1000
u, v, p = cavity_flow(u, v, p, nt, dt, dx,
velocity_term,
pressure_poisson,
rtol=1e-4)
return u, v, p
un, vn, pn = run_cavity()
%timeit run_cavity()
with open('numpy_ans.pickle', 'rb') as f:
u, v, p = pickle.load(f)
assert numpy.allclose(u, un)
assert numpy.allclose(v, vn)
assert numpy.allclose(p, pn)
###Output
_____no_output_____
###Markdown
Exercise 1 JIT the pressure poisson equationThe equation we need to unroll is given by \begin{equation}p_{i,j}^{n} = \frac{1}{4}\left(p_{i+1,j}^{n}+p_{i-1,j}^{n}+p_{i,j+1}^{n}+p_{i,j-1}^{n}\right) - b\end{equation}and recall that `b` is already computed, so no need to worry about unrolling that. We've also filled in the boundary conditions, so don't worry about those. (don't forget to decorate your function!)
###Code
import numpy
from numba import jit
def pressure_poisson(p, b, l2_target=1e-4):
I, J = b.shape
iter_diff = l2_target + 1
n = 0
while iter_diff > l2_target and n <= 500:
pn = p.copy()
#Your code here
#boundary conditions
for i in range(I):
p[i, 0] = p[i, 1]
p[i, -1] = 0
for j in range(J):
p[0, j] = p[1, j]
p[-1, j] = p[-2, j]
if n % 10 == 0:
iter_diff = numpy.sqrt(numpy.sum((p - pn)**2)/numpy.sum(pn**2))
n += 1
return p
import pickle
from snippets.ns_helper import cavity_flow, velocity_term, quiver_plot
def run_cavity():
nx = 41
with open('../IC.pickle', 'rb') as f:
u, v, p, b = pickle.load(f)
dx = 2 / (nx - 1)
dt = .005
nt = 1000
u, v, p = cavity_flow(u, v, p, nt, dt, dx,
velocity_term,
pressure_poisson,
rtol=1e-4)
return u, v, p
un, vn, pn = run_cavity()
%timeit run_cavity()
with open('../numpy_ans.pickle', 'rb') as f:
u, v, p = pickle.load(f)
assert numpy.allclose(u, un)
assert numpy.allclose(v, vn)
assert numpy.allclose(p, pn)
###Output
_____no_output_____ |
Worksheet_set_1/python Worksheet - 1_Flip Robo.ipynb | ###Markdown
FIND THE FACTORIAL NUMBER
###Code
num = int(input("Enter Factorial Number = "))
fac = 1
if num <= 0:
print ("Factorial is 1")
else:
print ( "Factorial Calculation Method")
for i in range (1, num+1):
f = fac * i
print (fac, "*", i, "=", f)
fac = f
print ("\nFactorial", num , "is", fac)
###Output
_____no_output_____
###Markdown
FIND THE PRIME NUMBER AND COMPOSITE NUMBER
###Code
number = int(input("Enter any number:"))
if number ==1:
print ("This is not Prime and not Composite")
else:
    for i in range(2, number):
        if (number % i) == 0:
            print(number, "is Composite number")
            break
    else:
        print(number, "is prime number")
###Output
_____no_output_____
###Markdown
CHECK WHETHER A GIVEN STRING IS PALINDROME OR NOT
###Code
string = input("Enter Any Word: ")
rev_string = string [::-1]
if string == rev_string:
print ("This Word is Palindrome")
else:
print ("This Word is Not Palindrome")
###Output
_____no_output_____
###Markdown
FIND THE ANGLE FOR TRIANGLE
###Code
import math
a = float(input("Enter the Base A = "))
b = float(input("Enter the Base B = "))
c=math.sqrt(a*a + b*b)
print ("3rd Side of Right Angle", float(c))
###Output
_____no_output_____
###Markdown
FIND FREQUENCY OF EACH OF THE CHARACTER PRESENT IN A GIVEN STRING
###Code
a = input("Enter any String: ")
count = {}
for x in a:
if x in count.keys():
count[x]+=1
else:
count[x]=1
for x in count.keys():
    print(x, "appears", count[x], "times")
###Output
_____no_output_____ |
notebooks/archive/k_means_tslearn.ipynb | ###Markdown
k-means=======This example uses :math:`k`-means clustering for time series. Three variants ofthe algorithm are available: - standard Euclidean :math:`k`-means, - DBA-:math:`k`-means (for DTW Barycenter Averaging [1])- Soft-DTW :math:`k`-means [2].A note on pre-processing~~~~~~~~~~~~~~~~~~~~~~~~In this example, time series are preprocessed using`TimeSeriesScalerMeanVariance`. This scaler is such that each output timeseries has zero mean and unit variance.The assumption here is that the range of a given time series is uninformativeand one only wants to compare shapes in an amplitude-invariant manner (whentime series are multivariate, this also rescales all modalities such that therewill not be a single modality responsible for a large part of the variance).This means that one cannot scale barycenters back to data range because eachtime series is scaled independently and there is hence no such thing as anoverall data range.[1] F. Petitjean, A. Ketterlin & P. Gancarski. A global averaging method \for dynamic time warping, with applications to clustering. Pattern \Recognition, Elsevier, 2011, Vol. 44, Num. 3, pp. 678-693[2] M. Cuturi, M. Blondel "Soft-DTW: a Differentiable Loss Function for Time-Series," ICML 2017."""_Author: Romain Tavenard_
###Code
!pip install tslearn
import numpy
import matplotlib.pyplot as plt
from tslearn.clustering import TimeSeriesKMeans
from tslearn.datasets import CachedDatasets
from tslearn.preprocessing import TimeSeriesScalerMeanVariance, \
TimeSeriesResampler
###Output
Collecting tslearn
[?25l Downloading https://files.pythonhosted.org/packages/09/6a/04fb547ac56093e6cd3d904fb2ad15f355934d0beadbded6978f07d1be9e/tslearn-0.3.1-cp36-cp36m-manylinux2010_x86_64.whl (746kB)
[K |████████████████████████████████| 747kB 3.2MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from tslearn) (1.18.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from tslearn) (1.4.1)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.6/dist-packages (from tslearn) (0.22.2.post1)
Requirement already satisfied: Cython in /usr/local/lib/python3.6/dist-packages (from tslearn) (0.29.17)
Requirement already satisfied: numba in /usr/local/lib/python3.6/dist-packages (from tslearn) (0.48.0)
Requirement already satisfied: joblib in /usr/local/lib/python3.6/dist-packages (from tslearn) (0.14.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.6/dist-packages (from numba->tslearn) (46.1.3)
Requirement already satisfied: llvmlite<0.32.0,>=0.31.0dev0 in /usr/local/lib/python3.6/dist-packages (from numba->tslearn) (0.31.0)
Installing collected packages: tslearn
Successfully installed tslearn-0.3.1
###Markdown
Load and prepare data
###Code
seed = 0
numpy.random.seed(seed)
X_train, y_train, X_test, y_test = CachedDatasets().load_dataset("Trace")
X_train = X_train[y_train < 4] # Keep first 3 classes
numpy.random.shuffle(X_train)
# Keep only 50 time series
X_train = TimeSeriesScalerMeanVariance().fit_transform(X_train[:50])
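# each kept series now has (approximately) zero mean and unit variance along the time axis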
# Make time series shorter
X_train = TimeSeriesResampler(sz=40).fit_transform(X_train)
sz = X_train.shape[1]
###Output
_____no_output_____
###Markdown
Train kmeans algorithmEuclidean k-means
###Code
km = TimeSeriesKMeans(n_clusters=3, verbose=True, random_state=seed)
y_pred = km.fit_predict(X_train)
plt.figure()
for yi in range(3):
plt.subplot(3, 3, yi + 1)
for xx in X_train[y_pred == yi]:
plt.plot(xx.ravel(), "k-", alpha=.2)
plt.plot(km.cluster_centers_[yi].ravel(), "r-")
plt.xlim(0, sz)
plt.ylim(-4, 4)
plt.text(0.55, 0.85,'Cluster %d' % (yi + 1),
transform=plt.gca().transAxes)
if yi == 1:
plt.title("Euclidean $k$-means")
###Output
16.434 --> 9.437 --> 9.437 -->
###Markdown
k-means clustering : DBA-k-means
###Code
# DBA-k-means
print("DBA k-means")
dba_km = TimeSeriesKMeans(n_clusters=3,
n_init=2,
metric="dtw",
verbose=True,
max_iter_barycenter=10,
random_state=seed)
y_pred = dba_km.fit_predict(X_train)
for yi in range(3):
plt.subplot(3, 3, 4 + yi)
for xx in X_train[y_pred == yi]:
plt.plot(xx.ravel(), "k-", alpha=.2)
plt.plot(dba_km.cluster_centers_[yi].ravel(), "r-")
plt.xlim(0, sz)
plt.ylim(-4, 4)
plt.text(0.55, 0.85,'Cluster %d' % (yi + 1),
transform=plt.gca().transAxes)
if yi == 1:
plt.title("DBA $k$-means")
###Output
DBA k-means
Init 1
###Markdown
k-means clustering: Soft-DTW
###Code
sdtw_km = TimeSeriesKMeans(n_clusters=3,
metric="softdtw",
metric_params={"gamma": .01},
verbose=True,
random_state=seed)
y_pred = sdtw_km.fit_predict(X_train)
for yi in range(3):
plt.subplot(3, 3, 7 + yi)
for xx in X_train[y_pred == yi]:
plt.plot(xx.ravel(), "k-", alpha=.2)
plt.plot(sdtw_km.cluster_centers_[yi].ravel(), "r-")
plt.xlim(0, sz)
plt.ylim(-4, 4)
plt.text(0.55, 0.85,'Cluster %d' % (yi + 1),
transform=plt.gca().transAxes)
if yi == 1:
plt.title("Soft-DTW $k$-means")
plt.tight_layout()
plt.show()
###Output
2.475 --> 0.158 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.157 --> 0.158 --> 0.157 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 --> 0.156 -->
week2/hrojas-learn-pandas-ee4e114ca411/lessons/05 - Lesson.ipynb | ###Markdown
Lesson 5 These tutorials are also available through an email course, please visit http://www.hedaro.com/pandas-tutorial to sign up today. > We will be taking a brief look at the ***stack*** and ***unstack*** functions.
###Code
# Import libraries
import pandas as pd
import sys
print('Python version ' + sys.version)
print('Pandas version: ' + pd.__version__)
# Our small data set
d = {'one':[1,1],'two':[2,2]}
i = ['a','b']
# Create dataframe
df = pd.DataFrame(data = d, index = i)
df
df.index
# Bring the columns and place them in the index
stack = df.stack()
stack
# The index now includes the column names
stack.index
unstack = df.unstack()
unstack
unstack.index
###Output
_____no_output_____
###Markdown
We can also flip the column names with the index using the ***T*** (transpose) function.
###Code
transpose = df.T
transpose
transpose.index
###Output
_____no_output_____ |
Plotly/Plotly_Create_Bubblechart.ipynb | ###Markdown
Plotly - Create Bubblechart **Tags:** plotly chart bubblechart dataviz snippet operations image html **Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/) Input Import libraries
###Code
import naas
import plotly.express as px
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
title = "Life Expectancy vs GDP per Capita GDP, 2007"
# Output paths
output_image = f"{title}.png"
output_html = f"{title}.html"
###Output
_____no_output_____
###Markdown
Get data
###Code
df = px.data.gapminder()
df
###Output
_____no_output_____
###Markdown
Model Create Bubblechart
###Code
fig = px.scatter(
df.query("year==2007"),
x="gdpPercap",
y="lifeExp",
size="pop",
color="continent",
hover_name="country",
log_x=True,
size_max=60
)
fig.update_layout(
plot_bgcolor="#ffffff",
margin=dict(l=0, r=0, t=50, b=50),
width=1200,
height=800,
showlegend=False,
xaxis_nticks=36,
title= title,
xaxis=dict(
title='GDP per capita (dollars)',
gridcolor='white',
type='log',
gridwidth=2,
),
yaxis=dict(
title='Life Expectancy (years)',
gridcolor='white',
gridwidth=2,
))
config = {'displayModeBar': False}
fig.show(config=config)
###Output
_____no_output_____
###Markdown
Output Export in PNG and HTML
###Code
fig.write_image(output_image, width=1200)
fig.write_html(output_html)
###Output
_____no_output_____
###Markdown
Generate shareable assets
###Code
link_image = naas.asset.add(output_image)
link_html = naas.asset.add(output_html, {"inline":True})
#-> Uncomment the line below to remove your assets
# naas.asset.delete(output_image)
# naas.asset.delete(output_html)
###Output
_____no_output_____
###Markdown
Plotly - Create Bubblechart **Tags:** plotly chart bubblechart dataviz snippet **Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/) Input Import libraries
###Code
import naas
import plotly.express as px
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
title = "Life Expectancy vs GDP per Capita GDP, 2007"
# Output paths
output_image = f"{title}.png"
output_html = f"{title}.html"
###Output
_____no_output_____
###Markdown
Get data
###Code
df = px.data.gapminder()
df
###Output
_____no_output_____
###Markdown
Model Create Bubblechart
###Code
fig = px.scatter(
df.query("year==2007"),
x="gdpPercap",
y="lifeExp",
size="pop",
color="continent",
hover_name="country",
log_x=True,
size_max=60
)
fig.update_layout(
plot_bgcolor="#ffffff",
margin=dict(l=0, r=0, t=50, b=50),
width=1200,
height=800,
showlegend=False,
xaxis_nticks=36,
title= title,
xaxis=dict(
title='GDP per capita (dollars)',
gridcolor='white',
type='log',
gridwidth=2,
),
yaxis=dict(
title='Life Expectancy (years)',
gridcolor='white',
gridwidth=2,
))
config = {'displayModeBar': False}
fig.show(config=config)
###Output
_____no_output_____
###Markdown
Output Export in PNG and HTML
###Code
fig.write_image(output_image, width=1200)
fig.write_html(output_html)
###Output
_____no_output_____
###Markdown
Generate shareable assets
###Code
link_image = naas.asset.add(output_image)
link_html = naas.asset.add(output_html, {"inline":True})
#-> Uncomment the line below to remove your assets
# naas.asset.delete(output_image)
# naas.asset.delete(output_html)
###Output
_____no_output_____
###Markdown
Plotly - Create Bubblechart **Tags:** plotly chart bubblechart dataviz snippet operations image html **Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/) Input Import libraries
###Code
import naas
import plotly.express as px
import pandas as pd
###Output
_____no_output_____
###Markdown
Variables
###Code
title = "Life Expectancy vs GDP per Capita GDP, 2007"
# Output paths
output_image = f"{title}.png"
output_html = f"{title}.html"
###Output
_____no_output_____
###Markdown
Get data
###Code
df = px.data.gapminder()
df
###Output
_____no_output_____
###Markdown
Model Create Bubblechart
###Code
fig = px.scatter(
df.query("year==2007"),
x="gdpPercap",
y="lifeExp",
size="pop",
color="continent",
hover_name="country",
log_x=True,
size_max=60
)
fig.update_layout(
plot_bgcolor="#ffffff",
margin=dict(l=0, r=0, t=50, b=50),
width=1200,
height=800,
showlegend=False,
xaxis_nticks=36,
title= title,
xaxis=dict(
title='GDP per capita (dollars)',
gridcolor='white',
type='log',
gridwidth=2,
),
yaxis=dict(
title='Life Expectancy (years)',
gridcolor='white',
gridwidth=2,
))
config = {'displayModeBar': False}
fig.show(config=config)
###Output
_____no_output_____
###Markdown
Output Export in PNG and HTML
###Code
fig.write_image(output_image, width=1200)
fig.write_html(output_html)
###Output
_____no_output_____
###Markdown
Generate shareable assets
###Code
link_image = naas.asset.add(output_image)
link_html = naas.asset.add(output_html, {"inline":True})
#-> Uncomment the line below to remove your assets
# naas.asset.delete(output_image)
# naas.asset.delete(output_html)
###Output
_____no_output_____ |
notebooks/em/TDEM_ElectricDipole_Wholespace.ipynb | ###Markdown
Electrical Dipole in a Whole-space (time domain) PurposeBy using an analytic solution electromagnetic (EM) fields from electrical dipole in a Whole-space, we present some fundamentals of EM responses in the context of crosswell EM survey. Set upFor time domain EM method using galvanic source, we inject step-off currents to the earth through electrodes. Crosswell EM geometryHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction.
###Code
ax = plotObj3D()
###Output
_____no_output_____
###Markdown
Backgrounds When using a crosswell electromagnetic (EM) survey, we inject step-off currents into the earth using (+) and (-) current electrodes, and measure voltages between potential electrodes in the off-time, when DC effects have disappeared. A common goal here is imaging the conductivity structure of the earth by interpreting the measured voltages. However, to accomplish that task well, we first need to understand the physical behavior of EM responses for the given survey set-up. Assuming the length of the current electrodes is small enough, the source can be treated as an electrical dipole (ED). For a crosswell set-up, suppose we have a vertical electric dipole (VED) source in a homogeneous earth with step-off currents; then we have an analytic solution for the EM fields in the time domain (WH1988). The solution for arbitrary EM fields, $\mathbf{f}$, will be a function of $$ \mathbf{f} (x, y, z; \sigma, t),$$ where $\sigma$ is the conductivity of the homogeneous earth (S/m), and $t$ is the time after current switch-off (s). Here $\mathbf{f}$ can be the electric field ($\mathbf{e}$), the magnetic field ($\mathbf{h}$), or the current density ($\mathbf{j}$). Now, you will explore how EM responses behave as a function of space, $\sigma$, and $t$ for the given crosswell EM set-up. Geometry appHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction. Parameters- plane: Choose either "XZ" or "YZ" plane- offset: Offset from a source plane (m)- nRx: The number of receivers in the Rx holeChosen geometric parameters will be used in the Electric Dipole widget below.
###Code
Q0 = InteractivePlanes(planevalue="XZ", offsetvalue=0.)
Q0
###Output
_____no_output_____
###Markdown
Electric Dipole appExplore behavior of EM fields, $\mathbf{f} (x, y, z; \sigma, t)$ on 2D plane chosen in the above app. And also at the receiver locations. Parameters:- Field: Type of EM fields ("E": electric field, "H": magnetic field, "J": current density)- AmpDir: Type of the vectoral EM fields None: $f_x$ or $f_y$ or $f_z$ Amp: $\mathbf{f} \cdot \mathbf{f} = |\mathbf{f}|^2$ Dir: A vectoral EM fields, $\mathbf{f}$ - Comp.: Direction of $\mathbf{F}$ at Rx locations - $t$: time after current switch-off - $\sigma$: Conductivity of homogeneous earth (S/m)- Offset: Offset from a source plane- Scale: Choose "log" or "linear" scale - Slider: When it is checked, it activates "flog" and "siglog" sliders above. - TimeLog: A float slider for log10 time (only activated when slider is checked) - SigLog: A float slider for log10 conductivity (only activated when slider is checked)
###Code
dwidget = DipoleWidgetTD()
Q1 = dwidget.InteractiveDipoleBH(nRx=Q0.kwargs["nRx"], plane=Q0.kwargs["Plane"], offset_plane=Q0.kwargs["Offset"])
Q1
###Output
_____no_output_____
###Markdown
Profile appHere we focus on data, which can be measured at the receiver locations. We limit our attention to three different profiles shown in the **Geometry** app: Rxhole (red), Txhole (black), TxProfile (blue). Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - ComplexNumber: Type of complex data ("Re", "Im", "Amp", "Phase")- $t_1$: Time (sec)- $t_2$: Time (sec)- $t_3$: Time (sec)- Profile: Type of profile line ("Rxhole", "Txhole", "TxProfile")- Scale: Choose "log" or "linear" scale - Rx: choice of Rx point for the following Sounding app
###Code
Q2 = InteractiveDipoleProfileTD(dwidget, Q1.kwargs["Sigma"], Q1.kwargs["Field"], Q1.kwargs["Component"], Q1.kwargs["Scale"])
Q2
###Output
_____no_output_____
###Markdown
Sounding app Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - $\sigma$: Conductivity of homogeneous earth (S/m)- Scale: Choose "log" or "linear" scale
###Code
InteractiveDipoleDecay(dwidget, dwidget.dataview.xyz_line[Q2.kwargs["irx"],:], Q1.kwargs["Field"], Q1.kwargs["Component"])
###Output
_____no_output_____
###Markdown
Electrical Dipole in a Whole-space (time domain) PurposeBy using an analytic solution electromagnetic (EM) fields from electrical dipole in a Whole-space, we present some fundamentals of EM responses in the context of crosswell EM survey. Set upFor time domain EM method using galvanic source, we inject step-off currents to the earth through electrodes. Crosswell EM geometryHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction.
###Code
ax = plotObj3D()
###Output
_____no_output_____
###Markdown
Backgrounds When using crosswell electromagnetic (EM) survey, we inject step-off currents to the earth using (+) and (-) current electrodes, and measure voltages between potential electrodes in the off-time, when DC effects are disappeared. A common goal here is imaging conductivity structure of the earth by interpreting measured voltages. However, to accomplish that task well, we first need to understand physical behavior of EM responses for the given survey set-up. Assuming length of the current elecrodes are small enough, this can be assumed as electrical dipole (ED). For a croswell set-up, let we have a vertical magnetic dipole (VMD) source in a homogeneous earth with step-off currents, then we can have analytic solution of EM fields in time domain (WH1988). Solution of of arbitrary EM fields, $\mathbf{f}$, will be a function of $$ \mathbf{f} (x, y, z; \sigma, t),$$ where $\sigma$ is conductivity of homogenous earth (S/m), and $f$ is transmitting frequency (Hz). Here $\mathbf{f}$ can be electic ($\mathbf{e}$) or magnetic field ($\mathbf{h}$), or current density ($\mathbf{j}$). Now, you will explore how EM responses behaves as a function of space, $\sigma$, and $t$ for the given crosswell EM set-up . Geometry appHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction. Parameters- plane: Choose either "XZ" or "YZ" plane- offset: Offset from a source plane (m)- nRx: The number of receivers in the Rx holeChosen geometric parameters will be used in the Electric Dipole widget below.
###Code
Q0 = InteractivePlanes(planevalue="XZ", offsetvalue=0.)
display(Q0)
###Output
_____no_output_____
###Markdown
Electric Dipole appExplore behavior of EM fields, $\mathbf{f} (x, y, z; \sigma, t)$ on 2D plane chosen in the above app. And also at the receiver locations. Parameters:- Field: Type of EM fields ("E": electric field, "H": magnetic field, "J": current density)- AmpDir: Type of the vectoral EM fields None: $f_x$ or $f_y$ or $f_z$ Amp: $\mathbf{f} \cdot \mathbf{f} = |\mathbf{f}|^2$ Dir: A vectoral EM fields, $\mathbf{f}$ - Comp.: Direction of $\mathbf{F}$ at Rx locations - $t$: time after current switch-off - $\sigma$: Conductivity of homogeneous earth (S/m)- Offset: Offset from a source plane- Scale: Choose "log" or "linear" scale - Slider: When it is checked, it activates "flog" and "siglog" sliders above. - TimeLog: A float slider for log10 time (only activated when slider is checked) - SigLog: A float slider for log10 conductivity (only activated when slider is checked)
###Code
dwidget = DipoleWidgetTD()
Q1 = dwidget.InteractiveDipoleBH(nRx=Q0.kwargs["nRx"], plane=Q0.kwargs["Plane"], offset_plane=Q0.kwargs["Offset"])
display(Q1)
###Output
_____no_output_____
###Markdown
Proflie appHere we focuson data, which can be measured at receiver locations. We limit our attention to three different profile shown in **Geometry** app: Rxhole (red), Txhole (black), TxProfile (blue). Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - ComplexNumber: Type of complex data ("Re", "Im", "Amp", "Phase")- $t_1$: Time (sec)- $t_2$: Time (sec)- $t_3$: Time (sec)- Profile: Type of profile line ("Rxhole", "Txhole", "TxProfile")- Scale: Choose "log" or "linear" scale - Rx: choice of Rx point for the following Sounding app
###Code
Q2 = InteractiveDipoleProfileTD(dwidget, Q1.kwargs["Sigma"], Q1.kwargs["Field"], Q1.kwargs["Component"], Q1.kwargs["Scale"])
display(Q2)
###Output
_____no_output_____
###Markdown
Sounding app Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - $\sigma$: Conductivity of homogeneous earth (S/m)- Scale: Choose "log" or "linear" scale
###Code
app = InteractiveDipoleDecay(dwidget, dwidget.dataview.xyz_line[Q2.kwargs["irx"],:], Q1.kwargs["Field"], Q1.kwargs["Component"])
display(app)
###Output
_____no_output_____
###Markdown
Electrical Dipole in a Whole-space (time domain) PurposeBy using an analytic solution electromagnetic (EM) fields from electrical dipole in a Whole-space, we present some fundamentals of EM responses in the context of crosswell EM survey. Set upFor time domain EM method using galvanic source, we inject step-off currents to the earth through electrodes. Crosswell EM geometryHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction.
###Code
ax = plotObj3D()
###Output
_____no_output_____
###Markdown
Backgrounds When using crosswell electromagnetic (EM) survey, we inject step-off currents to the earth using (+) and (-) current electrodes, and measure voltages between potential electrodes in the off-time, when DC effects are disappeared. A common goal here is imaging conductivity structure of the earth by interpreting measured voltages. However, to accomplish that task well, we first need to understand physical behavior of EM responses for the given survey set-up. Assuming length of the current elecrodes are small enough, this can be assumed as electrical dipole (ED). For a croswell set-up, let we have a vertical magnetic dipole (VMD) source in a homogeneous earth with step-off currents, then we can have analytic solution of EM fields in time domain (WH1988). Solution of of arbitrary EM fields, $\mathbf{f}$, will be a function of $$ \mathbf{f} (x, y, z; \sigma, t),$$ where $\sigma$ is conductivity of homogenous earth (S/m), and $f$ is transmitting frequency (Hz). Here $\mathbf{f}$ can be electic ($\mathbf{e}$) or magnetic field ($\mathbf{h}$), or current density ($\mathbf{j}$). Now, you will explore how EM responses behaves as a function of space, $\sigma$, and $t$ for the given crosswell EM set-up . Geometry appHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction. Parameters- plane: Choose either "XZ" or "YZ" plane- offset: Offset from a source plane (m)- nRx: The number of receivers in the Rx holeChosen geometric parameters will be used in the Electric Dipole widget below.
###Code
Q0 = InteractivePlanes(planevalue="XZ", offsetvalue=0.)
display(Q0)
###Output
_____no_output_____
###Markdown
Electric Dipole appExplore behavior of EM fields, $\mathbf{f} (x, y, z; \sigma, t)$ on 2D plane chosen in the above app. And also at the receiver locations. Parameters:- Field: Type of EM fields ("E": electric field, "H": magnetic field, "J": current density)- AmpDir: Type of the vectoral EM fields None: $f_x$ or $f_y$ or $f_z$ Amp: $\mathbf{f} \cdot \mathbf{f} = |\mathbf{f}|^2$ Dir: A vectoral EM fields, $\mathbf{f}$ - Comp.: Direction of $\mathbf{F}$ at Rx locations - $t$: time after current switch-off - $\sigma$: Conductivity of homogeneous earth (S/m)- Offset: Offset from a source plane- Scale: Choose "log" or "linear" scale - Slider: When it is checked, it activates "flog" and "siglog" sliders above. - TimeLog: A float slider for log10 time (only activated when slider is checked) - SigLog: A float slider for log10 conductivity (only activated when slider is checked)
###Code
dwidget = DipoleWidgetTD()
Q1 = dwidget.InteractiveDipoleBH(nRx=Q0.kwargs["nRx"], plane=Q0.kwargs["Plane"], offset_plane=Q0.kwargs["Offset"])
display(Q1)
###Output
_____no_output_____
###Markdown
Proflie appHere we focuson data, which can be measured at receiver locations. We limit our attention to three different profile shown in **Geometry** app: Rxhole (red), Txhole (black), TxProfile (blue). Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - ComplexNumber: Type of complex data ("Re", "Im", "Amp", "Phase")- $t_1$: Time (sec)- $t_2$: Time (sec)- $t_3$: Time (sec)- Profile: Type of profile line ("Rxhole", "Txhole", "TxProfile")- Scale: Choose "log" or "linear" scale - Rx: choice of Rx point for the following Sounding app
###Code
Q2 = InteractiveDipoleProfileTD(dwidget, Q1.kwargs["Sigma"], Q1.kwargs["Field"], Q1.kwargs["Component"], Q1.kwargs["Scale"])
display(Q2)
###Output
_____no_output_____
###Markdown
Sounding app Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - $\sigma$: Conductivity of homogeneous earth (S/m)- Scale: Choose "log" or "linear" scale
###Code
app = InteractiveDipoleDecay(dwidget, dwidget.dataview.xyz_line[Q2.kwargs["irx"],:], Q1.kwargs["Field"], Q1.kwargs["Component"])
display(app)
###Output
_____no_output_____
###Markdown
Electrical Dipole in a Whole-space (time domain) PurposeBy using an analytic solution electromagnetic (EM) fields from electrical dipole in a Whole-space, we present some fundamentals of EM responses in the context of crosswell EM survey. Set upFor time domain EM method using galvanic source, we inject step-off currents to the earth through electrodes. Crosswell EM geometryHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction.
###Code
ax = plotObj3D()
###Output
_____no_output_____
###Markdown
Backgrounds When using crosswell electromagnetic (EM) survey, we inject step-off currents to the earth using (+) and (-) current electrodes, and measure voltages between potential electrodes in the off-time, when DC effects are disappeared. A common goal here is imaging conductivity structure of the earth by interpreting measured voltages. However, to accomplish that task well, we first need to understand physical behavior of EM responses for the given survey set-up. Assuming length of the current elecrodes are small enough, this can be assumed as electrical dipole (ED). For a croswell set-up, let we have a vertical magnetic dipole (VMD) source in a homogeneous earth with step-off currents, then we can have analytic solution of EM fields in time domain (WH1988). Solution of of arbitrary EM fields, $\mathbf{f}$, will be a function of $$ \mathbf{f} (x, y, z; \sigma, t),$$ where $\sigma$ is conductivity of homogenous earth (S/m), and $f$ is transmitting frequency (Hz). Here $\mathbf{f}$ can be electic ($\mathbf{e}$) or magnetic field ($\mathbf{h}$), or current density ($\mathbf{j}$). Now, you will explore how EM responses behaves as a function of space, $\sigma$, and $t$ for the given crosswell EM set-up . Geometry appHere, we choose geometric parameters for a crosswell EM set-up having two boreholes for Tx and Rx. In the Tx hole, a VED source is located at (0m, 0m, 0m), and it is fixed. Horizontal location of the Rx hole is fixed to 50m apart from the source location in x-direction. Parameters- plane: Choose either "XZ" or "YZ" plane- offset: Offset from a source plane (m)- nRx: The number of receivers in the Rx holeChosen geometric parameters will be used in the Electric Dipole widget below.
###Code
Q0 = InteractivePlanes(planevalue="XZ", offsetvalue=0.)
display(Q0)
###Output
_____no_output_____
###Markdown
Electric Dipole appExplore behavior of EM fields, $\mathbf{f} (x, y, z; \sigma, t)$ on 2D plane chosen in the above app. And also at the receiver locations. Parameters:- Field: Type of EM fields ("E": electric field, "H": magnetic field, "J": current density)- AmpDir: Type of the vectoral EM fields None: $f_x$ or $f_y$ or $f_z$ Amp: $\mathbf{f} \cdot \mathbf{f} = |\mathbf{f}|^2$ Dir: A vectoral EM fields, $\mathbf{f}$ - Comp.: Direction of $\mathbf{F}$ at Rx locations - $t$: time after current switch-off - $\sigma$: Conductivity of homogeneous earth (S/m)- Offset: Offset from a source plane- Scale: Choose "log" or "linear" scale - Slider: When it is checked, it activates "flog" and "siglog" sliders above. - TimeLog: A float slider for log10 time (only activated when slider is checked) - SigLog: A float slider for log10 conductivity (only activated when slider is checked)
###Code
dwidget = DipoleWidgetTD()
Q1 = dwidget.InteractiveDipoleBH(nRx=Q0.kwargs["nRx"], plane=Q0.kwargs["Plane"], offset_plane=Q0.kwargs["Offset"])
display(Q1)
###Output
_____no_output_____
###Markdown
Proflie appHere we focuson data, which can be measured at receiver locations. We limit our attention to three different profile shown in **Geometry** app: Rxhole (red), Txhole (black), TxProfile (blue). Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - ComplexNumber: Type of complex data ("Re", "Im", "Amp", "Phase")- $t_1$: Time (sec)- $t_2$: Time (sec)- $t_3$: Time (sec)- Profile: Type of profile line ("Rxhole", "Txhole", "TxProfile")- Scale: Choose "log" or "linear" scale - Rx: choice of Rx point for the following Sounding app
###Code
Q2 = InteractiveDipoleProfileTD(dwidget, Q1.kwargs["Sigma"], Q1.kwargs["Field"], Q1.kwargs["Component"], Q1.kwargs["Scale"])
display(Q2)
###Output
_____no_output_____
###Markdown
Sounding app Parameters:- Comp.: Direction of $\mathbf{F}$ at Rx locations - $\sigma$: Conductivity of homogeneous earth (S/m)- Scale: Choose "log" or "linear" scale
###Code
app = InteractiveDipoleDecay(dwidget, dwidget.dataview.xyz_line[Q2.kwargs["irx"],:], Q1.kwargs["Field"], Q1.kwargs["Component"])
display(app)
###Output
_____no_output_____ |
docs/contents/basic/compare.ipynb | ###Markdown
Compare
###Code
molsys_A = msm.convert('pdb_id:2LAO', to_form='pdbfixer.PDBFixer')
molsys_B = msm.convert(molsys_A, to_form='molsysmt.MolSys')
molsys_C = msm.extract(molsys_B, selection='molecule_type=="protein"')
###Output
_____no_output_____
###Markdown
Compare 'all'
###Code
msm.compare(molsys_A, molsys_B, comparison='all', rule='A_eq_B')
msm.compare(molsys_C, molsys_B, comparison='all', rule='A_eq_B')
msm.compare(molsys_C, molsys_B, comparison='all', rule='A_eq_B', report =True)
###Output
_____no_output_____
###Markdown
Compare 'info'
###Code
msm.compare(molsys_A, molsys_B, comparison='info', rule='A_eq_B')
msm.compare(molsys_A, molsys_B, comparison='n_elements', rule='A_eq_B')
msm.compare(molsys_A, molsys_B, comparison='n_molecules', rule='A_eq_B')
msm.compare(molsys_A, molsys_B, comparison='n_frames', rule='A_eq_B')
msm.compare(molsys_A, molsys_B, comparison='form', rule='A_eq_B')
###Output
_____no_output_____ |
model-deployment-pipeline/functions/stream-to-features.ipynb | ###Markdown
Stream to Features -------------------------------------------------------------------- Receive a stream of events from `incoming-events-stream`, enrich specific events and update a set of aggregations on the data. The output data is stored to an aggregation table called `feature-table` and a new event that includes the calculated features is written to `serving-stream` Enrich the relevant events with socioeconomic data by looking up the enrichment table.During the feature calculation, we calculate sum, mean, count and variance for the 3 amount fields (`amount`, `bet_amount` and `win_amount` for `new_purchases`, `new_bet` and `new_win` respectively). This results with the following list of fields:- purchase_sum- purchase_mean- purchase_count- purchase_var- bet_sum- bet_mean- bet_count- bet_var- win_sum- win_mean- win_count- win_varYou can change the incoming events and the generated features by customizing the methods below. Create and Test a Local Function [Nuclio](https://nuclio.io/) is a high-performance open-source and managed serverless framework, which is available as a predefined tenant-wide platform service (`nuclio`).The demo uses Nuclio to create and deploy serverless functions.Therefore, you need to import the Nuclio package and configure Nuclio for your project.The platform's Jupyter Notebook service preinstalls the [nuclio-jupyter SDK](https://github.com/nuclio/nuclio-jupyter/blob/master/README.md) for creating and deploying Nuclio functions with Python and Jupyter Notebook.The tutorial uses the Nuclio magic commands and annotation comments of this SDK to automate function code generation.The magic commands are initialized when you import the `nuclio` package.The `%nuclio` magic commands are used to run Nuclio commands from Jupyter notebooks (`%nuclio `).You can also use `%%nuclio` at the start of a cell to identify the entire cell as containing Nuclio code.The magic commands are initialized when you import the `nuclio` package.The ` nuclio: start-code`, ` nuclio: end-code`, and ` nuclio: ignore` section-marker annotations notify Nuclio of the beginning or end of code sections.Nuclio ignores all notebook code before a ` nuclio: start-code` marker or after an ` nuclio: end-code` marker.Nuclio translates all other notebook code sections into function code, except for sections that are marked with the ` nuclio: ignore` marker. Import Nuclio The following code imports the `nuclio` Python package.
###Code
import nuclio
###Output
_____no_output_____
###Markdown
Configure Nuclio The following code uses the ` nuclio: start-code` marker to instruct Nuclio to start processing code only from this location, and then performs basic Nuclio function configuration — defining the name of the function's container image (`mlrun/ml-models`), the function type (`nuclio`), and some additional package installation commands.> **Note:** You can add code to define function dependencies and perform additional configuration after the ` nuclio: start-code` marker.
###Code
# nuclio: start-code
###Output
_____no_output_____
###Markdown
Specify function dependencies and configuration
###Code
%%nuclio config
spec.build.baseImage = "mlrun/mlrun"
kind = "nuclio"
###Output
_____no_output_____
###Markdown
Function code
###Code
import os
import json
import numpy as np
from v3io import dataplane, common
from datetime import datetime
def init_context(context):
v3io_access_key = os.getenv('V3IO_ACCESS_KEY')
container = os.getenv('CONTAINER')
feature_table_path = os.getenv('FEATURE_TABLE_PATH')
feature_list = [v.strip() for v in os.getenv('FEATURE_LIST').split(',')]
serving_events = [v.strip() for v in os.getenv('SERVING_EVENTS').split(',')]
output_stream_path = os.getenv('OUTPUT_STREAM_PATH')
partition_attr = os.getenv('PARTITION_ATTR')
enrichment_table_path = os.getenv('ENRICHMENT_TABLE_PATH')
enrichment_key = os.getenv('ENRICHMENT_KEY')
v3io_client = dataplane.Client(endpoint='http://v3io-webapi:8081', access_key=v3io_access_key)
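    # dispatch table mapping each incoming event_type to its processing function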
event_handlers = {'registration': process_registration,
'purchase': process_purchase,
'bet': process_bet,
'win': process_win}
setattr(context, 'v3io_client', v3io_client)
setattr(context, 'container', container)
setattr(context, 'feature_table_path', feature_table_path)
setattr(context, 'feature_list', feature_list)
setattr(context, 'serving_events', serving_events)
setattr(context, 'output_stream_path', output_stream_path)
setattr(context, 'partition_attr', partition_attr)
setattr(context, 'event_handlers', event_handlers)
setattr(context, 'enrichment_table_path', enrichment_table_path)
setattr(context, 'enrichment_key', enrichment_key)
pass
def handler(context, event):
if type(event.body) is dict:
event_dict = event.body
else:
event_dict = json.loads(event.body)
if is_relevant_event(context, event_dict):
event_type = get_event_type(event_dict)
context.logger.info(f'Incoming event type: {event_type}')
# python switch-case
process_func = context.event_handlers.get(event_type)
context.logger.info(f'Processing event {event_dict}')
response = process_func(context, event_dict)
context.logger.info(f'Finished processing with status: {response.status_code} - and response body: {response.body} , event: {event_dict}')
if event_type in context.serving_events and (200 <= response.status_code < 300) :
context.logger.info(f'sending event for serving')
write_to_output_stream(context, event_dict)
else:
context.logger.info(f'Not relevant event')
pass
def get_event_type(event):
return event['event_type']
def is_relevant_event(context, event):
return get_event_type(event) in context.event_handlers
def get_features(context, event):
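    # Builds the model input for a single user: returns a JSON string of the
    # form {"user_id": <id>, "instances": [[feat_1, ..., feat_n]]}, with any
    # missing feature defaulting to 0.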
user_id = event['user_id']
features_list = context.feature_list
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
raise_for_status=dataplane.RaiseForStatus.never)
feat_list = [resp.output.item.get(feat) for feat in features_list]
feat_list = [0 if v is None else v for v in feat_list]
return json.dumps({'user_id': user_id, 'instances': np.array(feat_list).reshape(1,-1).tolist()})
def write_to_output_stream(context, event):
partition_key = event.get(context.partition_attr)
data = get_features(context, event)
record = {'partition_key': str(partition_key), 'data': data }
resp = context.v3io_client.stream.put_records(container=context.container,
stream_path=context.output_stream_path,
records=[record],
raise_for_status=dataplane.RaiseForStatus.never)
context.logger.info(f'Sent features for user: {event["user_id"]} to serving stream')
context.logger.debug(f'Feature values: {data}')
def event_time_to_ts(event_time):
dt = datetime.strptime(event_time,'%Y-%m-%d %H:%M:%S.%f')
return datetime.timestamp(dt)
def get_sum_count_mean_var_expr(feature: str, current_value):
sum_str = f"SET {feature}_sum= if_not_exists({feature}_sum, 0) + {current_value};"
count_str = f"SET {feature}_count= if_not_exists({feature}_count, 0) + 1;"
delta_str = f"SET {feature}_delta= {current_value} - if_not_exists({feature}_mean, 0);"
mean_str = f"SET {feature}_mean= if_not_exists({feature}_mean, 0) + ({feature}_delta / {feature}_count);"
m2_str = f"SET {feature}_m2= if_not_exists({feature}_m2, 0) + ({feature}_delta * ({current_value} - {feature}_mean));"
var_str = f"SET {feature}_var= {feature}_m2 / (max(2, {feature}_count)-1) ;"
expression = sum_str + count_str + delta_str + mean_str + m2_str + var_str
return expression
def update_features(context, user_id, expression, condition):
return context.v3io_client.kv.update(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
condition=condition,
expression=expression,
raise_for_status=dataplane.RaiseForStatus.never)
def enrich_event(context, event):
if context.enrichment_key in event:
enrichment_key_value = event[context.enrichment_key]
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.enrichment_table_path,
key=str(enrichment_key_value),
raise_for_status=dataplane.RaiseForStatus.never)
if 200 <= resp.status_code <= 299:
enriched_event = dict(event, **resp.output.item)
context.logger.info_with('Event was enriched', enriched_event=enriched_event)
return enriched_event
else:
context.logger.debug_with("Couldn't enrich event",
enrichment_key_value=enrichment_key_value,
response_status=resp.status_code,
response_body=resp.body.decode('utf-8'))
return event
else:
return event
def process_registration(context, event):
user_id = event['user_id']
event = enrich_event(context, event)
features = {'user_id': event['user_id'],
'registration_date': event['event_time'],
'date_of_birth': event['date_of_birth'],
'socioeconomic_idx': event['socioeconomic_idx'],
'affiliate_url': event['affiliate_url'],
'label': event['label']}
response = context.v3io_client.kv.put(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
attributes=features,
raise_for_status=dataplane.RaiseForStatus.never)
return response
def process_purchase(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
purchase_amount = event['amount']
first_purchase_ts_str = f"SET first_purchase_ts=if_not_exists(first_purchase_ts, {event_ts});"
sum_count_mean_var_expr = get_sum_count_mean_var_expr('purchase', purchase_amount)
expression = first_purchase_ts_str + sum_count_mean_var_expr
condition = f"exists(registration_date) AND (NOT exists(first_purchase_ts) OR first_purchase_ts >= ({event_ts} - 86400 ))"
return update_features(context, user_id, expression, condition)
def process_bet(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
bet_amount = event['bet_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('bet', bet_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
def process_win(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
win_amount = event['win_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('win', win_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
###Output
_____no_output_____
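###Markdown
The `SET ...` expressions assembled by `get_sum_count_mean_var_expr` keep running statistics directly in the key-value store. The next cell is an added plain-Python sketch of the same update rule (a Welford-style running mean and variance), useful only for sanity-checking the arithmetic; the ` nuclio: ignore` marker keeps it out of the deployed function.
###Code
# nuclio: ignore
# Added sketch: the running-statistics update that the KV "SET ..." expressions implement.
def update_running_stats(stats, current_value):
    stats = dict(stats)  # work on a copy; the KV store updates attributes in place instead
    stats['sum'] = stats.get('sum', 0) + current_value
    stats['count'] = stats.get('count', 0) + 1
    delta = current_value - stats.get('mean', 0)
    stats['mean'] = stats.get('mean', 0) + delta / stats['count']
    stats['m2'] = stats.get('m2', 0) + delta * (current_value - stats['mean'])
    stats['var'] = stats['m2'] / (max(2, stats['count']) - 1)
    return stats

stats = {}
for amount in (3000, 150, 420):
    stats = update_running_stats(stats, amount)
print(stats)
###Output
_____no_output_____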
###Markdown
The following cell uses the ` nuclio: end-code` marker to mark the end of a Nuclio code section and instruct Nuclio to stop parsing the notebook at this point.> **IMPORTANT:** Do not remove the end-code cell.
###Code
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Set a dictionary for initializing the environment variables used by the function Test locally
###Code
import v3io.dataplane
v3io_client = v3io.dataplane.Client()
test_path = os.path.join(os.getcwd(), 'test')
# Create a test target stream
v3io_client = v3io.dataplane.Client()
container = 'users'
output_stream_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'serving-stream')
v3io_client.stream.create(container=container, stream_path=output_stream_path, shard_count=1)
# Create a test enrichment table
enrichment_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'enrichment-table')
postcode = 11012
attr = {'postcode': postcode ,'socioeconomic_idx': 3}
v3io_client.kv.put(container=container,
table_path=enrichment_table_path,
key=str(postcode),
attributes=attr)
# Create feature table
feature_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'feature-table')
feature_list = ['socioeconomic_idx','purchase_sum','purchase_mean','purchase_count',
'purchase_var','bet_sum','bet_mean','bet_count',
'bet_var','win_sum','win_mean','win_count','win_var']
envs = {'V3IO_ACCESS_KEY': os.getenv('V3IO_ACCESS_KEY'),
'FEATURE_TABLE_PATH': feature_table_path,
'SERVING_EVENTS': ",".join(['bet','win']),
'FEATURE_LIST': ",".join(feature_list),
'CONTAINER': container,
'OUTPUT_STREAM_PATH': output_stream_path,
'PARTITION_ATTR': 'user_id',
'ENRICHMENT_TABLE_PATH': enrichment_table_path,
'ENRICHMENT_KEY':"postcode"}
for key, value in envs.items():
os.environ[key] = str(value)
reg_event = nuclio.Event(body=f'{{"user_id" : 111111 ,"affiliate_url":"aa.biz", "event_type": "registration", "postcode": {postcode}, "event_time": "2020-07-20 11:00:00","date_of_birth": "1970-03-03", "label":0}}'.encode())
pur_event = nuclio.Event(body=b'{"user_id" : 111111 ,"amount": 3000, "event_type": "purchase", "event_time": "2020-07-20 11:00:00.009"}')
bet_event = nuclio.Event(body=b'{"user_id" : 111111 ,"bet_amount": 300, "event_type": "bet", "event_time": "2020-07-20 11:00:00.889"}')
init_context(context)
handler(context, reg_event)
handler(context, pur_event)
handler(context, bet_event)
# cleanup
!rm -rf {test_path}
###Output
_____no_output_____
###Markdown
Nuclio Deploy Convert code to function We use MLRun `code_to_function` to convert the Python code to a Nuclio function, and then set the relevant environment variables and the streaming trigger.
###Code
from mlrun import code_to_function
fn = code_to_function(name='features')
fn.set_envs(envs)
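# Note (added): the markdown above mentions a streaming trigger, but this notebook only
# sets environment variables. If the function should be fed directly from the incoming
# events stream, a call along the lines of the (hypothetical, version-dependent) example
# below could be added; it is commented out because the input stream path is not defined here:
# fn.add_v3io_stream_trigger(stream_path='<incoming-events-stream path>',
#                            name='events', group='features', seek_to='earliest')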
###Output
_____no_output_____
###Markdown
Deploy
###Code
fn.deploy()
###Output
_____no_output_____
###Markdown
Stream to Features -------------------------------------------------------------------- Receive a stream of events from `incoming-events-stream`, enrich specific events, and update a set of aggregations on the data. The output data is stored to an aggregation table called `feature-table`, and a new event that includes the calculated features is written to `serving-stream`. The relevant events are enriched with socioeconomic data by looking up the enrichment table. During the feature calculation, we calculate the sum, mean, count, and variance for the 3 amount fields (`amount`, `bet_amount`, and `win_amount` for `new_purchases`, `new_bet`, and `new_win`, respectively). This results in the following list of fields: purchase_sum, purchase_mean, purchase_count, purchase_var, bet_sum, bet_mean, bet_count, bet_var, win_sum, win_mean, win_count, win_var. You can change the incoming events and the generated features by customizing the methods below. Create and Test a Local Function [Nuclio](https://nuclio.io/) is a high-performance open-source and managed serverless framework, which is available as a predefined tenant-wide platform service (`nuclio`). The demo uses Nuclio to create and deploy serverless functions, so you need to import the Nuclio package and configure Nuclio for your project. The platform's Jupyter Notebook service preinstalls the [nuclio-jupyter SDK](https://github.com/nuclio/nuclio-jupyter/blob/master/README.md) for creating and deploying Nuclio functions with Python and Jupyter Notebook. The tutorial uses the Nuclio magic commands and annotation comments of this SDK to automate function code generation; the magic commands are initialized when you import the `nuclio` package. The `%nuclio` magic commands run Nuclio commands from Jupyter notebooks, and you can use `%%nuclio` at the start of a cell to identify the entire cell as containing Nuclio code. The ` nuclio: start-code`, ` nuclio: end-code`, and ` nuclio: ignore` section-marker annotations notify Nuclio of the beginning or end of code sections: Nuclio ignores all notebook code before a ` nuclio: start-code` marker or after an ` nuclio: end-code` marker, and translates all other notebook code sections into function code, except for sections that are marked with the ` nuclio: ignore` marker. Import Nuclio The following code imports the `nuclio` Python package.
###Code
import nuclio
###Output
_____no_output_____
###Markdown
Configure Nuclio The following code uses the ` nuclio: start-code` marker to instruct Nuclio to start processing code only from this location. The function's container base image (`mlrun/mlrun`) and the function type (`nuclio`) are configured later, when the code is converted with `code_to_function`.> **Note:** You can add code to define function dependencies and perform additional configuration after the ` nuclio: start-code` marker.
###Code
# nuclio: start-code
###Output
_____no_output_____
###Markdown
Specify function dependencies and configuration Function code
###Code
import os
import json
import numpy as np
from v3io import dataplane, common
from datetime import datetime
def init_context(context):
v3io_access_key = os.getenv('V3IO_ACCESS_KEY')
container = os.getenv('CONTAINER')
feature_table_path = os.getenv('FEATURE_TABLE_PATH')
feature_list = [v.strip() for v in os.getenv('FEATURE_LIST').split(',')]
serving_events = [v.strip() for v in os.getenv('SERVING_EVENTS').split(',')]
output_stream_path = os.getenv('OUTPUT_STREAM_PATH')
partition_attr = os.getenv('PARTITION_ATTR')
enrichment_table_path = os.getenv('ENRICHMENT_TABLE_PATH')
enrichment_key = os.getenv('ENRICHMENT_KEY')
v3io_client = dataplane.Client(endpoint='http://v3io-webapi:8081', access_key=v3io_access_key)
event_handlers = {'registration': process_registration,
'purchase': process_purchase,
'bet': process_bet,
'win': process_win}
setattr(context, 'v3io_client', v3io_client)
setattr(context, 'container', container)
setattr(context, 'feature_table_path', feature_table_path)
setattr(context, 'feature_list', feature_list)
setattr(context, 'serving_events', serving_events)
setattr(context, 'output_stream_path', output_stream_path)
setattr(context, 'partition_attr', partition_attr)
setattr(context, 'event_handlers', event_handlers)
setattr(context, 'enrichment_table_path', enrichment_table_path)
setattr(context, 'enrichment_key', enrichment_key)
pass
def handler(context, event):
if type(event.body) is dict:
event_dict = event.body
else:
event_dict = json.loads(event.body)
if is_relevant_event(context, event_dict):
event_type = get_event_type(event_dict)
context.logger.info(f'Incoming event type: {event_type}')
# python switch-case
process_func = context.event_handlers.get(event_type)
context.logger.info(f'Processing event {event_dict}')
response = process_func(context, event_dict)
context.logger.info(f'Finished processing with status: {response.status_code} - and response body: {response.body} , event: {event_dict}')
if event_type in context.serving_events and (200 <= response.status_code < 300) :
context.logger.info(f'sending event for serving')
write_to_output_stream(context, event_dict)
else:
context.logger.info(f'Not relevant event')
pass
def get_event_type(event):
return event['event_type']
def is_relevant_event(context, event):
return get_event_type(event) in context.event_handlers
def get_features(context, event):
user_id = event['user_id']
features_list = context.feature_list
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
raise_for_status=dataplane.RaiseForStatus.never)
feat_list = [resp.output.item.get(feat) for feat in features_list]
feat_list = [0 if v is None else v for v in feat_list]
return json.dumps({'user_id': user_id, 'instances': np.array(feat_list).reshape(1,-1).tolist()})
def write_to_output_stream(context, event):
partition_key = event.get(context.partition_attr)
data = get_features(context, event)
record = {'partition_key': str(partition_key), 'data': data }
resp = context.v3io_client.stream.put_records(container=context.container,
stream_path=context.output_stream_path,
records=[record],
raise_for_status=dataplane.RaiseForStatus.never)
context.logger.info(f'Sent features for user: {event["user_id"]} to serving stream')
context.logger.debug(f'Feature values: {data}')
def event_time_to_ts(event_time):
dt = datetime.strptime(event_time,'%Y-%m-%d %H:%M:%S.%f')
return datetime.timestamp(dt)
def get_sum_count_mean_var_expr(feature: str, current_value):
sum_str = f"SET {feature}_sum= if_not_exists({feature}_sum, 0) + {current_value};"
count_str = f"SET {feature}_count= if_not_exists({feature}_count, 0) + 1;"
delta_str = f"SET {feature}_delta= {current_value} - if_not_exists({feature}_mean, 0);"
mean_str = f"SET {feature}_mean= if_not_exists({feature}_mean, 0) + ({feature}_delta / {feature}_count);"
m2_str = f"SET {feature}_m2= if_not_exists({feature}_m2, 0) + ({feature}_delta * ({current_value} - {feature}_mean));"
var_str = f"SET {feature}_var= {feature}_m2 / (max(2, {feature}_count)-1) ;"
expression = sum_str + count_str + delta_str + mean_str + m2_str + var_str
return expression
def update_features(context, user_id, expression, condition):
return context.v3io_client.kv.update(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
condition=condition,
expression=expression,
raise_for_status=dataplane.RaiseForStatus.never)
def enrich_event(context, event):
if context.enrichment_key in event:
enrichment_key_value = event[context.enrichment_key]
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.enrichment_table_path,
key=str(enrichment_key_value),
raise_for_status=dataplane.RaiseForStatus.never)
if 200 <= resp.status_code <= 299:
enriched_event = dict(event, **resp.output.item)
context.logger.info_with('Event was enriched', enriched_event=enriched_event)
return enriched_event
else:
context.logger.debug_with("Couldn't enrich event",
enrichment_key_value=enrichment_key_value,
response_status=resp.status_code,
response_body=resp.body.decode('utf-8'))
return event
else:
return event
def process_registration(context, event):
user_id = event['user_id']
event = enrich_event(context, event)
features = {'user_id': event['user_id'],
'registration_date': event['event_time'],
'date_of_birth': event['date_of_birth'],
'socioeconomic_idx': event['socioeconomic_idx'],
'affiliate_url': event['affiliate_url'],
'label': event['label']}
response = context.v3io_client.kv.put(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
attributes=features,
raise_for_status=dataplane.RaiseForStatus.never)
return response
def process_purchase(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
purchase_amount = event['amount']
first_purchase_ts_str = f"SET first_purchase_ts=if_not_exists(first_purchase_ts, {event_ts});"
sum_count_mean_var_expr = get_sum_count_mean_var_expr('purchase', purchase_amount)
expression = first_purchase_ts_str + sum_count_mean_var_expr
condition = f"exists(registration_date) AND (NOT exists(first_purchase_ts) OR first_purchase_ts >= ({event_ts} - 86400 ))"
return update_features(context, user_id, expression, condition)
def process_bet(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
bet_amount = event['bet_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('bet', bet_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
def process_win(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
win_amount = event['win_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('win', win_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
###Output
_____no_output_____
###Markdown
The following cell uses the ` nuclio: end-code` marker to mark the end of a Nuclio code section and instruct Nuclio to stop parsing the notebook at this point.> **IMPORTANT:** Do not remove the end-code cell.
###Code
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Set a dictionary for initializing the environment variables used by the function Test locally
###Code
import v3io.dataplane
import os
v3io_client = v3io.dataplane.Client()
test_path = os.path.join(os.getcwd(), 'test')
# Create a test target stream
v3io_client = v3io.dataplane.Client()
container = 'users'
output_stream_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'serving-stream')
v3io_client.stream.create(container=container, stream_path=output_stream_path, shard_count=1)
# Create a test enrichment table
enrichment_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'enrichment-table')
postcode = 11012
attr = {'postcode': postcode ,'socioeconomic_idx': 3}
v3io_client.kv.put(container=container,
table_path=enrichment_table_path,
key=str(postcode),
attributes=attr)
# Create feature table
feature_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'feature-table')
feature_list = ['socioeconomic_idx','purchase_sum','purchase_mean','purchase_count',
'purchase_var','bet_sum','bet_mean','bet_count',
'bet_var','win_sum','win_mean','win_count','win_var']
envs = {'V3IO_ACCESS_KEY': os.getenv('V3IO_ACCESS_KEY'),
'FEATURE_TABLE_PATH': feature_table_path,
'SERVING_EVENTS': ",".join(['bet','win']),
'FEATURE_LIST': ",".join(feature_list),
'CONTAINER': container,
'OUTPUT_STREAM_PATH': output_stream_path,
'PARTITION_ATTR': 'user_id',
'ENRICHMENT_TABLE_PATH': enrichment_table_path,
'ENRICHMENT_KEY':"postcode"}
import nuclio
for key, value in envs.items():
os.environ[key] = str(value)
reg_event = nuclio.Event(body=f'{{"user_id" : 111111 ,"affiliate_url":"aa.biz", "event_type": "registration", "postcode": {postcode}, "event_time": "2020-07-20 11:00:00","date_of_birth": "1970-03-03", "label":0}}'.encode())
pur_event = nuclio.Event(body=b'{"user_id" : 111111 ,"amount": 3000, "event_type": "purchase", "event_time": "2020-07-20 11:00:00.009"}')
bet_event = nuclio.Event(body=b'{"user_id" : 111111 ,"bet_amount": 300, "event_type": "bet", "event_time": "2020-07-20 11:00:00.889"}')
init_context(context)
handler(context, reg_event)
handler(context, pur_event)
handler(context, bet_event)
# cleanup
!rm -rf {test_path}
###Output
_____no_output_____
###Markdown
Nuclio Deploy Convert code to function We use MLRun `code_to_function` to convert the Python code to a Nuclio function, and then set the relevant environment variables and the streaming trigger.
###Code
from mlrun import code_to_function
fn = code_to_function(name='features',image='mlrun/mlrun',kind='nuclio')
fn.set_envs(envs)
###Output
_____no_output_____
###Markdown
Deploy
###Code
fn.deploy()
###Output
> 2021-10-03 08:44:32,808 [info] Starting remote function deploy
2021-10-03 08:44:32 (info) Deploying function
2021-10-03 08:44:32 (info) Building
2021-10-03 08:44:32 (info) Staging files and preparing base images
2021-10-03 08:44:32 (info) Building processor image
2021-10-03 08:44:35 (info) Build complete
2021-10-03 08:44:39 (info) Function deploy complete
> 2021-10-03 08:44:39,378 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-default-features.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['default-tenant.app.dev8.lab.iguazeng.com:30995']}
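###Markdown
Once deployed, the function can be smoke-tested over HTTP. The added cell below is illustrative only: it assumes the external invocation URL printed in the deploy log above is reachable from this environment.
###Code
# Added illustration: post a test event to the deployed function.
import requests

url = 'http://default-tenant.app.dev8.lab.iguazeng.com:30995'  # from the deploy log above
event = {"user_id": 111111, "bet_amount": 300, "event_type": "bet",
         "event_time": "2020-07-20 11:01:00.000"}
resp = requests.post(url, json=event)
print(resp.status_code, resp.text)
###Output
_____no_output_____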
###Markdown
Stream to Features -------------------------------------------------------------------- Receive a stream of events from `incoming-events-stream`, enrich specific events, and update a set of aggregations on the data. The output data is stored to an aggregation table called `feature-table`, and a new event that includes the calculated features is written to `serving-stream`. The relevant events are enriched with socioeconomic data by looking up the enrichment table. During the feature calculation, we calculate the sum, mean, count, and variance for the 3 amount fields (`amount`, `bet_amount`, and `win_amount` for `new_purchases`, `new_bet`, and `new_win`, respectively). This results in the following list of fields: purchase_sum, purchase_mean, purchase_count, purchase_var, bet_sum, bet_mean, bet_count, bet_var, win_sum, win_mean, win_count, win_var. You can change the incoming events and the generated features by customizing the methods below. Create and Test a Local Function [Nuclio](https://nuclio.io/) is a high-performance open-source and managed serverless framework, which is available as a predefined tenant-wide platform service (`nuclio`). The demo uses Nuclio to create and deploy serverless functions, so you need to import the Nuclio package and configure Nuclio for your project. The platform's Jupyter Notebook service preinstalls the [nuclio-jupyter SDK](https://github.com/nuclio/nuclio-jupyter/blob/master/README.md) for creating and deploying Nuclio functions with Python and Jupyter Notebook. The tutorial uses the Nuclio magic commands and annotation comments of this SDK to automate function code generation; the magic commands are initialized when you import the `nuclio` package. The `%nuclio` magic commands run Nuclio commands from Jupyter notebooks, and you can use `%%nuclio` at the start of a cell to identify the entire cell as containing Nuclio code. The ` nuclio: start-code`, ` nuclio: end-code`, and ` nuclio: ignore` section-marker annotations notify Nuclio of the beginning or end of code sections: Nuclio ignores all notebook code before a ` nuclio: start-code` marker or after an ` nuclio: end-code` marker, and translates all other notebook code sections into function code, except for sections that are marked with the ` nuclio: ignore` marker. Import Nuclio The following code imports the `nuclio` Python package.
###Code
import nuclio
###Output
_____no_output_____
###Markdown
Configure Nuclio The following code uses the ` nuclio: start-code` marker to instruct Nuclio to start processing code only from this location. The function's container base image (`mlrun/mlrun`) and the function type (`nuclio`) are configured later, when the code is converted with `code_to_function`.> **Note:** You can add code to define function dependencies and perform additional configuration after the ` nuclio: start-code` marker.
###Code
# nuclio: start-code
###Output
_____no_output_____
###Markdown
Specify function dependencies and configuration Function code
###Code
import os
import json
import numpy as np
from v3io import dataplane, common
from datetime import datetime
def init_context(context):
v3io_access_key = os.getenv('V3IO_ACCESS_KEY')
container = os.getenv('CONTAINER')
feature_table_path = os.getenv('FEATURE_TABLE_PATH')
feature_list = [v.strip() for v in os.getenv('FEATURE_LIST').split(',')]
serving_events = [v.strip() for v in os.getenv('SERVING_EVENTS').split(',')]
output_stream_path = os.getenv('OUTPUT_STREAM_PATH')
partition_attr = os.getenv('PARTITION_ATTR')
enrichment_table_path = os.getenv('ENRICHMENT_TABLE_PATH')
enrichment_key = os.getenv('ENRICHMENT_KEY')
v3io_client = dataplane.Client(endpoint='http://v3io-webapi:8081', access_key=v3io_access_key)
event_handlers = {'registration': process_registration,
'purchase': process_purchase,
'bet': process_bet,
'win': process_win}
setattr(context, 'v3io_client', v3io_client)
setattr(context, 'container', container)
setattr(context, 'feature_table_path', feature_table_path)
setattr(context, 'feature_list', feature_list)
setattr(context, 'serving_events', serving_events)
setattr(context, 'output_stream_path', output_stream_path)
setattr(context, 'partition_attr', partition_attr)
setattr(context, 'event_handlers', event_handlers)
setattr(context, 'enrichment_table_path', enrichment_table_path)
setattr(context, 'enrichment_key', enrichment_key)
pass
def handler(context, event):
if type(event.body) is dict:
event_dict = event.body
else:
event_dict = json.loads(event.body)
if is_relevant_event(context, event_dict):
event_type = get_event_type(event_dict)
context.logger.info(f'Incoming event type: {event_type}')
# python switch-case
process_func = context.event_handlers.get(event_type)
context.logger.info(f'Processing event {event_dict}')
response = process_func(context, event_dict)
context.logger.info(f'Finished processing with status: {response.status_code} - and response body: {response.body} , event: {event_dict}')
if event_type in context.serving_events and (200 <= response.status_code < 300) :
context.logger.info(f'sending event for serving')
write_to_output_stream(context, event_dict)
else:
context.logger.info(f'Not relevant event')
pass
def get_event_type(event):
return event['event_type']
def is_relevant_event(context, event):
return get_event_type(event) in context.event_handlers
def get_features(context, event):
user_id = event['user_id']
features_list = context.feature_list
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
raise_for_status=dataplane.RaiseForStatus.never)
feat_list = [resp.output.item.get(feat) for feat in features_list]
feat_list = [0 if v is None else v for v in feat_list]
return json.dumps({'user_id': user_id, 'inputs': np.array(feat_list).reshape(1,-1).tolist()})
def write_to_output_stream(context, event):
partition_key = event.get(context.partition_attr)
data = get_features(context, event)
record = {'partition_key': str(partition_key), 'data': data }
resp = context.v3io_client.stream.put_records(container=context.container,
stream_path=context.output_stream_path,
records=[record],
raise_for_status=dataplane.RaiseForStatus.never)
context.logger.info(f'Sent features for user: {event["user_id"]} to serving stream')
context.logger.debug(f'Feature values: {data}')
def event_time_to_ts(event_time):
dt = datetime.strptime(event_time,'%Y-%m-%d %H:%M:%S.%f')
return datetime.timestamp(dt)
def get_sum_count_mean_var_expr(feature: str, current_value):
sum_str = f"SET {feature}_sum= if_not_exists({feature}_sum, 0) + {current_value};"
count_str = f"SET {feature}_count= if_not_exists({feature}_count, 0) + 1;"
delta_str = f"SET {feature}_delta= {current_value} - if_not_exists({feature}_mean, 0);"
mean_str = f"SET {feature}_mean= if_not_exists({feature}_mean, 0) + ({feature}_delta / {feature}_count);"
m2_str = f"SET {feature}_m2= if_not_exists({feature}_m2, 0) + ({feature}_delta * ({current_value} - {feature}_mean));"
var_str = f"SET {feature}_var= {feature}_m2 / (max(2, {feature}_count)-1) ;"
expression = sum_str + count_str + delta_str + mean_str + m2_str + var_str
return expression
def update_features(context, user_id, expression, condition):
return context.v3io_client.kv.update(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
condition=condition,
expression=expression,
raise_for_status=dataplane.RaiseForStatus.never)
def enrich_event(context, event):
if context.enrichment_key in event:
enrichment_key_value = event[context.enrichment_key]
resp = context.v3io_client.kv.get(container=context.container,
table_path=context.enrichment_table_path,
key=str(enrichment_key_value),
raise_for_status=dataplane.RaiseForStatus.never)
if 200 <= resp.status_code <= 299:
enriched_event = dict(event, **resp.output.item)
context.logger.info_with('Event was enriched', enriched_event=enriched_event)
return enriched_event
else:
context.logger.debug_with("Couldn't enrich event",
enrichment_key_value=enrichment_key_value,
response_status=resp.status_code,
response_body=resp.body.decode('utf-8'))
return event
else:
return event
def process_registration(context, event):
user_id = event['user_id']
event = enrich_event(context, event)
features = {'user_id': event['user_id'],
'registration_date': event['event_time'],
'date_of_birth': event['date_of_birth'],
'socioeconomic_idx': event['socioeconomic_idx'],
'affiliate_url': event['affiliate_url'],
'label': event['label']}
response = context.v3io_client.kv.put(container=context.container,
table_path=context.feature_table_path,
key=str(user_id),
attributes=features,
raise_for_status=dataplane.RaiseForStatus.never)
return response
def process_purchase(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
purchase_amount = event['amount']
first_purchase_ts_str = f"SET first_purchase_ts=if_not_exists(first_purchase_ts, {event_ts});"
sum_count_mean_var_expr = get_sum_count_mean_var_expr('purchase', purchase_amount)
expression = first_purchase_ts_str + sum_count_mean_var_expr
condition = f"exists(registration_date) AND (NOT exists(first_purchase_ts) OR first_purchase_ts >= ({event_ts} - 86400 ))"
return update_features(context, user_id, expression, condition)
def process_bet(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
bet_amount = event['bet_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('bet', bet_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
def process_win(context, event):
user_id = event['user_id']
event_time = event['event_time']
event_ts = event_time_to_ts(event_time)
win_amount = event['win_amount']
sum_count_mean_var_expr = get_sum_count_mean_var_expr('win', win_amount)
expression = sum_count_mean_var_expr
condition = f"first_purchase_ts >= ({event_ts} - 86400 )"
return update_features(context, user_id, expression, condition)
###Output
_____no_output_____
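###Markdown
To see the key-value update expression that the helper above builds, the added cell below renders one example; the ` nuclio: ignore` marker keeps it out of the deployed function code.
###Code
# nuclio: ignore
# Added usage example: render the KV update expression for a single bet of 300.
print(get_sum_count_mean_var_expr('bet', 300))
###Output
_____no_output_____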
###Markdown
The following cell uses the ` nuclio: end-code` marker to mark the end of a Nuclio code section and instruct Nuclio to stop parsing the notebook at this point.> **IMPORTANT:** Do not remove the end-code cell.
###Code
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Set a dictionary for initializing the environment variables used by the function Test locally
###Code
import v3io.dataplane
import os
v3io_client = v3io.dataplane.Client()
test_path = os.path.join(os.getcwd(), 'test')
# Create a test target stream
v3io_client = v3io.dataplane.Client()
container = 'users'
output_stream_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'serving-stream')
v3io_client.stream.create(container=container, stream_path=output_stream_path, shard_count=1)
# Create a test enrichment table
enrichment_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'enrichment-table')
postcode = 11012
attr = {'postcode': postcode ,'socioeconomic_idx': 3}
v3io_client.kv.put(container=container,
table_path=enrichment_table_path,
key=str(postcode),
attributes=attr)
# Create feature table
feature_table_path = os.path.join(test_path.replace('/User', os.getenv('V3IO_USERNAME')), 'feature-table')
feature_list = ['socioeconomic_idx','purchase_sum','purchase_mean','purchase_count',
'purchase_var','bet_sum','bet_mean','bet_count',
'bet_var','win_sum','win_mean','win_count','win_var']
envs = {'V3IO_ACCESS_KEY': os.getenv('V3IO_ACCESS_KEY'),
'FEATURE_TABLE_PATH': feature_table_path,
'SERVING_EVENTS': ",".join(['bet','win']),
'FEATURE_LIST': ",".join(feature_list),
'CONTAINER': container,
'OUTPUT_STREAM_PATH': output_stream_path,
'PARTITION_ATTR': 'user_id',
'ENRICHMENT_TABLE_PATH': enrichment_table_path,
'ENRICHMENT_KEY':"postcode"}
import nuclio
for key, value in envs.items():
os.environ[key] = str(value)
reg_event = nuclio.Event(body=f'{{"user_id" : 111111 ,"affiliate_url":"aa.biz", "event_type": "registration", "postcode": {postcode}, "event_time": "2020-07-20 11:00:00","date_of_birth": "1970-03-03", "label":0}}'.encode())
pur_event = nuclio.Event(body=b'{"user_id" : 111111 ,"amount": 3000, "event_type": "purchase", "event_time": "2020-07-20 11:00:00.009"}')
bet_event = nuclio.Event(body=b'{"user_id" : 111111 ,"bet_amount": 300, "event_type": "bet", "event_time": "2020-07-20 11:00:00.889"}')
init_context(context)
handler(context, reg_event)
handler(context, pur_event)
handler(context, bet_event)
# cleanup
!rm -rf {test_path}
###Output
_____no_output_____
###Markdown
Nuclio Deploy Convert code to function We use MLRun `code_to_function` to convert the Python code to a Nuclio function, and then set the relevant environment variables and the streaming trigger.
###Code
from mlrun import code_to_function
fn = code_to_function(name='features',image='mlrun/mlrun',kind='nuclio')
fn.set_envs(envs)
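# Note (added): the markdown above mentions a streaming trigger, but this notebook only
# sets environment variables. If the function should be fed directly from the incoming
# events stream, a call along the lines of the (hypothetical, version-dependent) example
# below could be added; it is commented out because the input stream path is not defined here:
# fn.add_v3io_stream_trigger(stream_path='<incoming-events-stream path>',
#                            name='events', group='features', seek_to='earliest')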
###Output
_____no_output_____
###Markdown
Deploy
###Code
fn.deploy()
###Output
> 2021-10-03 08:44:32,808 [info] Starting remote function deploy
2021-10-03 08:44:32 (info) Deploying function
2021-10-03 08:44:32 (info) Building
2021-10-03 08:44:32 (info) Staging files and preparing base images
2021-10-03 08:44:32 (info) Building processor image
2021-10-03 08:44:35 (info) Build complete
2021-10-03 08:44:39 (info) Function deploy complete
> 2021-10-03 08:44:39,378 [info] successfully deployed function: {'internal_invocation_urls': ['nuclio-default-features.default-tenant.svc.cluster.local:8080'], 'external_invocation_urls': ['default-tenant.app.dev8.lab.iguazeng.com:30995']}
Deep Learning/Assignment 4 - Python Advanced.ipynb | ###Markdown
Assignment 4 Questions Problem Statement Q.1 Given a sequence of n values x1, x2, ..., xn and a window size k>0, the k-th moving average ofthe given sequence is defined as follows:The moving average sequence has n-k+1 elements as shown below.The moving averages with k=4 of a ten-value sequence (n=10) is shown below i 1 2 3 4 5 6 7 8 9 10 ===== == == == == == == == == == == Input 10 20 30 40 50 60 70 80 90 100 y1 25 = (10+20+30+40)/4 y2 35 = (20+30+40+50)/4 y3 45 = (30+40+50+60)/4 y4 55 = (40+50+60+70)/4 y5 65 = (50+60+70+80)/4 y6 75 = (60+70+80+90)/4 y7 85 = (70+80+90+100)/4Thus, the moving average sequence has n-k+1=10-4+1=7 values.Write a function to find moving average in an array over a window:Test it over [3, 5, 7, 2, 8, 10, 11, 65, 72, 81, 99, 100, 150] and window of 3.
###Code
import numpy as np
from numpy import convolve
import matplotlib.pyplot as plt
def movingaverage (values, window):
weights = np.repeat(1.0, window)/window
sma = np.convolve(values, weights, 'valid')
return sma
x = [1,2,3,4,5,6,7,8,9,10]
y = [3, 5, 7, 2, 8, 10, 11, 65, 72, 81, 99, 100, 150]
x_MA = movingaverage(x,3)
print(x_MA)
y_MA = movingaverage(y,3)
print (y_MA)
import numpy as np
mylist = [3, 5, 7, 2, 8, 10, 11, 65, 72, 81, 99, 100, 150]
N = 3
mean = [np.mean(mylist[x:x+N]) for x in range(len(mylist)-N+1)]
print(mean)
import numpy as np
def movingavg(custom_list,N):
MA = [np.mean(mylist[x:x+N]) for x in range(len(mylist)-N+1)]
print(f'The moving avg of given list for a window {N} is: {MA}')
my_list = [3, 5, 7, 2, 8, 10, 11, 65, 72, 81, 99, 100, 150]
movingavg(my_list,3)
###Output
The moving avg of given list for a window 3 is: [5.0, 4.666666666666667, 5.666666666666667, 6.666666666666667, 9.666666666666666, 28.666666666666668, 49.333333333333336, 72.66666666666667, 84.0, 93.33333333333333, 116.33333333333333]
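###Markdown
For reference, the same 3-value moving average can be computed with a pandas rolling window. The added cell below is illustrative only and imports pandas itself, since it is not imported earlier in this notebook.
###Code
# Added illustration: moving average via pandas' rolling window.
import pandas as pd
mylist = [3, 5, 7, 2, 8, 10, 11, 65, 72, 81, 99, 100, 150]
print(pd.Series(mylist).rolling(window=3).mean().dropna().tolist())
###Output
_____no_output_____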
###Markdown
Q.2 How to count the distance to the previous zero. For each value, count the difference back to the previous zero (or the start of the Series, whichever is closer) and create a new column 'Y'. Consider a DataFrame df where there is an integer column 'X': import pandas as pd; df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
###Code
import pandas as pd
df = pd.DataFrame({'X': [7, 2, 0, 3, 4, 2, 5, 0, 3, 4]})
# https://pandas.pydata.org/pandas-docs/stable/user_guide/cookbook.html
# https://stackoverflow.com/questions/30730981/how-to-count-distance-to-the-previous-zero-in-pandas-series
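# Added explanation: (df['X'] != 0).cumsum() labels each run of values that follows a zero;
# comparing it with its shifted self marks where a new run starts, and the grouped cumsum
# then counts how many steps we are past the most recent zero (or the start of the Series).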
x = (df['X'] != 0).cumsum()
y = x != x.shift()
df['Y'] = y.groupby((y != y.shift()).cumsum()).cumsum()
df
###Output
_____no_output_____
###Markdown
Q.3 Create a DatetimeIndex that contains each business day of 2015 and use it to index a Series of random numbers.
###Code
import numpy as np
np.busday_count('2015-01-01', '2015-12-31')
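# Added note: np.busday_count excludes the end date, so this returns 260 even though
# 2015-01-01 through 2015-12-31 inclusive spans 261 business days; hence periods=261 below.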
import pandas as pd
dates = pd.date_range('1/1/2015', periods=261, freq='B')
df = pd.DataFrame(np.random.randn(261,1), index=dates)
df.head(10)
###Output
_____no_output_____ |
winner-predictor.ipynb | ###Markdown
We attempt to create a predictor that can predict the outcome of a match given previous wins (2017 IPL Matches), using the dataset matches.csv (636 rows). We create a basic predictor that uses a neural network to predict the match outcome. The model will be exported to Django, where a web interface will interact with it.
###Code
import pandas as pd
import numpy as np
import keras
from keras.utils import to_categorical, plot_model
from keras.preprocessing.text import Tokenizer
from keras import Sequential
from keras import losses
from keras.layers import Dense
from sklearn.model_selection import train_test_split, cross_val_score, KFold
from sklearn.metrics import accuracy_score, log_loss, recall_score
import matplotlib.pyplot as plt
# Loading dataset
data = pd.read_csv("matches.csv")
data.head()
# Removing unwanted columns
data.drop(columns=['venue', 'player_of_match', 'dl_applied','umpire1','umpire2','umpire3','date','city','season','id'], inplace=True)
# Label encoding toss_decision
data.toss_decision = data.toss_decision.map({'bat':1, 'field':0})
# Encoding result
data.result = data.result.map({'normal':1, 'tie':2, 'no result':0})
r = len(data.team2.unique())
teams = data.team1.unique()
mapping = {}
for i in range(14): # There are 14 teams.
mapping[teams[i]] = i
data.toss_winner = data.toss_winner.map(mapping)
# Encoding team data in numeric form
data.team1 = data.team1.map(mapping)
data.team2 = data.team2.map(mapping)
mapping # A value is repeated
data.winner = data.winner.map(mapping)
# Removing NA Fields
data.dropna(axis=0,inplace=True)
data.winner = data.winner.astype(int)
data.head()
len(data)
###Output
_____no_output_____
###Markdown
While we think that win_by_runs and win_by_wickets are important metrics for this dataset, we are dropping them to keep the user-facing inputs simple.
###Code
data.drop(columns=["win_by_runs", "win_by_wickets"], axis=1, inplace=True)
data.head()
data.drop(columns=["toss_decision", "result"], inplace=True)
data.head()
labels = data.winner.values
features = data.drop(columns=["winner"], axis=1).values
labels_copy = data.winner.values
features_copy = data.drop(columns=["winner"], axis=1).values
features.shape
# We have three input dim
labels.shape
# As there is no activation function that can predict 'winner' directly, we one-hot encode it.
labels = to_categorical(labels)
labels
labels.shape
# Now we will use a softmax which will provide probs for 14 different classes aka teams.
features_train, features_test, labels_train,labels_test = train_test_split(features, labels, shuffle=True, random_state=42)
features_copy_train, features_copy_test, labels_copy_train,labels_copy_test = train_test_split(features, labels, shuffle=True, random_state=42)
len(features_train)
len(features_test)
# Creating model
model = Sequential()
model.add(Dense(100, activation="relu", input_dim=features.shape[1]))
model.add(Dense(75, activation="relu"))
model.add(Dense(75, activation="relu"))
model.add(Dense(labels.shape[1], activation="softmax"))
model.compile(optimizer=keras.optimizers.Adam(lr=0.001), loss=losses.categorical_crossentropy, metrics=["accuracy"])
model.summary()
# Let us train the model
history = model.fit(features_train, labels_train, epochs=1000, validation_data=(features_test, labels_test), batch_size=25)
pred = model.predict(features_test)
e = model.evaluate(features_test, labels_test)
print ("Loss: ", e[0])
print ("Accuracy on the test set: ", e[1])
pred
# Now we will use np.argmax to get the index of the class with the highest probability
pred = np.argmax(pred, axis=1)
pred
decoded_labels_test = np.argmax(labels_test, axis=1)
decoded_labels_test
# Checking accuracy (Using a 3 hidden layer neural network)
print ("The accuracy of the mode is : ", accuracy_score(decoded_labels_test, pred))
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
clf = SVC()
clf.fit(features_copy_train, labels_copy_train)
clf.score(features_copy_test, labels_copy_test)
labels_copy_train
clf2 = RandomForestClassifier(n_estimators=100)
clf2.fit(features_copy_train, labels_copy_train)
clf2.score(features_copy_test, labels_copy_test)
###Output
_____no_output_____
###Markdown
On average, we are getting an accuracy of 50-ish %.
###Code
# Saving keras model
model.save("match-predictor.h5")
plot_model(model, to_file="predictor-arch.png",show_layer_names=False, show_shapes=True)
features_train.shape
from keras.models import load_model
model.save("test.json")
x = load_model("test.json")
x.predict(np.array([ [3,0, 1]])) # Match against Pune and Sunrisers
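# Added illustration (commented out, not run): map the argmax of the softmax output
# back to a team name using the encoding dictionary built earlier.
# inv_mapping = {v: k for k, v in mapping.items()}
# predicted = x.predict(np.array([[3, 0, 1]]))
# print(inv_mapping[int(np.argmax(predicted, axis=1)[0])])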
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
pca = PCA()
import matplotlib.pyplot as plt
pca.fit(StandardScaler().fit_transform(features))
plt.plot(np.cumsum(pca.explained_variance_ratio_))
###Output
_____no_output_____ |
Day_5_NLP/Exercises/Exercise 1 - Text Classification.ipynb | ###Markdown
Classification of Job Posts Reading the data
###Code
import pandas as pd
# Helper method: reading data (only column "job_description" is kept)
# param list: a list of labels, only rows with those labels are read in
# param name: label that is added to all rows in the data frame
def read_all(list, name, path):
posts = pd.DataFrame([], columns = ["job_description", "label"])
jobs = pd.read_csv(path)
for item in list:
selected = jobs[jobs.category == item]
selected["label"] = name
selected = selected.loc[:,["job_description", "label"]]
posts = posts.append(selected)
return posts
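# Added note: DataFrame.append (used in the loop above) was removed in pandas 2.x;
# posts = pd.concat([posts, selected]) is the forward-compatible equivalent.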
path = "./Data/Data Scientist Job Postings/job_posts_jobspikr.csv"
class1 = read_all(["Engineering-or-architecture"], "architecture-and-engineering", path)
class2 = read_all(["business and financial operations"], "business-and-financial", path)
allData = pd.concat([class1, class2])
allData.head()
###Output
_____no_output_____ |
extras/svm.ipynb | ###Markdown
Maximal margin classifier
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.svm import SVC
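# Added assumption: X and y are not defined anywhere in this notebook as given, so a
# hypothetical two-class toy dataset is created here to make the cells below runnable.
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)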
clf = SVC(kernel='linear', C=1E10)
clf.fit(X, y)
clf2 = SVC(kernel='linear', C=0.1)
clf2.fit(X, y)
clf3 = SVC()
clf3.fit(X, y)
def plot_svc_decision_function(model, ax=None, plot_support=True):
# source: Python Data Science Handbook by Jake VanderPlas
"""Plot the decision function for a 2D SVC"""
if ax is None:
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
P = model.decision_function(xy).reshape(X.shape)
# plot decision boundary and margins
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, linewidth=1, facecolors='none');
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf);
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf2);
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf3);
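# Added note: clf3 was fit with SVC() defaults (RBF kernel, C=1.0), so unlike the two
# linear-kernel models above its decision boundary and margins are nonlinear.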
###Output
_____no_output_____ |
docs/source/emat.analysis/optimization.ipynb | ###Markdown
Optimization Tools Typically, transportation policy planning models will be used totry to find policies that provide the "best" outcomes. In a traditionalanalytical environment, that typically means using models to find optimal outcomes for performance measures.Transportation models as used in the TMIP-EMAT frawework are generally characterized by two important features: they are subject to significant exogenous uncertainties about the futurestate of the world, and they include numerous performance measuresfor which decision makers would like to see good outcomes. Therefore,optimization tools applied to these models should be flexible to consider *multiple objectives*,as well as be *robust* against uncertainty. Multi-Objective Optimization With exploratory modeling, optimization is also often undertaken as a [multi-objective optimization](https://en.wikipedia.org/wiki/Multi-objective_optimization) exercise, where multipleand possibly conflicting performance measures need to be addressed simultaneously. A road capacity expansion project is a good example of a multi-objective optimization problem. In such a situation, we want to expand the capacity of a roadway, both minimizingthe costs and maximizing the travel time benefits. A smaller expansion project will cost lessbut also provide lesser benefits. Funding it with variable rate debt might decrease expected future costs but doing so entails more risk than fixed-rate debt. One approach to managing a multi-objective optimization problem is to distill it into a single objective problem, by assigning relative weights to the various objectives. For a variety of reasons, this can be difficult to accomplish in public policy environments that are common in transportation planning. Multiple stakeholders may have differentpriorities and may not be able to agree on a relative weighting structure. Certain small improvements in a performance measure may be valued very differently if they tip the measure over a regulated threshold (e.g. to attain a particular mandated level of emissions or air quality).Instead of trying to simplify a multi-objective into a simple-objective one,an alternate approach is to preserve the multi-objective nature of the problemand find a set or spectrum of different solutions, each of which solves the problemat a different weighting of the various objectives. Decision makers can thenreview the various different solutions, and make judgements about the varioustrade-offs implicit in choosing one path over another.Within a set of solutions for this kind of problem, each individual solution is "[Pareto optimal](https://en.wikipedia.org/wiki/Pareto_efficiency)", such thatno individual objective can be improved without degrading at least one otherobjective by some amount. Thus, each of these solutions might be the "best"policy to adopt, and exactly which is the best is left as a subjective judgementto decision makers, instead of being a concretely objective evaluation based on mathematics alone. Robust Optimization Robust optimization is a variant of the more traditional optimizationproblem, where we will try to find policies that yield *good* outcomesacross a range of possible futures, instead of trying to find a policythat delivers the *best* outcome for a particular future.To conceptualize this, let us consider a decision where there are fourpossible policies to choose among, a single exogenous uncertainty thatwill impact the future, and a single performance measure that we wouldlike to mimimize. 
We have a model that can forecast the performance measure, conditional on the chosen policy and the future value of the exogenous uncertainty, and which gives us forecasts as shown below.
###Code
import numpy
from matplotlib import pyplot as plt
x = numpy.linspace(0,1)
y1 = x**3 + numpy.cos((x-3)*23)*.01
y2 = x**3 + .1 + numpy.sin((x-3)*23)*.01
y3 = 1.3*(1-x)**17 + numpy.cos((x-3)*23)*.01 + .1
y4 = numpy.sin((x-3)*23)*.01+0.16 + .1*(x-0.5)
linestyles = [
dict(ls='-', c='blue'),
dict(ls=':', c='orange'),
dict(ls='-.', c='green'),
dict(ls='--', c='red'),
]
fig, ax = plt.subplots(1,1)
ax.plot(x, y1, **linestyles[0], label="Policy 1")
ax.plot(x, y2, **linestyles[1], label="Policy 2")
ax.plot(x, y3, **linestyles[2], label="Policy 3")
ax.plot(x, y4, **linestyles[3], label="Policy 4")
ax.set_ylabel("Performance Measure\n← Lower is Better ←")
ax.set_xlabel("Exogenous Uncertainty")
ax.legend()
ax.set_title("Example Simple Forecasting Experiment")
plt.savefig("robust_example.png")
plt.show()
###Output
_____no_output_____
###Markdown
.. image:: robust_example.png
###Markdown
In a naive optimization approach, if we want to minimize the performance
measure, we can do so by selecting Policy 1 and setting the exogenous
uncertainty to 0.1. Of course, in application we are able to select
Policy 1, but we are unable to actually control the exogenous uncertainty
(hence, "exogenous") and we may very well end up with a very bad result
on the right side of the figure.
We can see from the figure that, depending on the ultimate value for
the exogenous uncertainty, either Policy 1 or Policy 3 might yield the
best possible value of the performance measure. However, both of these
policies come with substantial risks as well -- in each Policy there are
some futures where the results are optimal, but there are also some
futures where the results are exceptionally poor.
In contrast with these optimal policies, Policy 4 may be considered a
"robust" solution. Although there is no value of the exogenous
uncertainty where Policy 4 yields the best possible outcome, there is
also no future where Policy 4 yields a very poor outcome. Instead, across
all futures it is always generating a "pretty good" outcome.
Different expectations for the future may lead to different policy
choices. If the decision maker feels that low values of the exogenous
uncertainty are much more likely than high values, Policy 1 might be
the best policy to choose. If high values of the exogenous uncertainty
are expected, then Policy 3 might be the best choice. If there is not
much agreement on the probable future values of the exogenous uncertainty,
or if decision makers want to adopt a risk-averse stance, then Policy 4
might be the best choice.
The remaining policy shown in the figure, Policy 2, is the lone holdout
in this example -- there is no set of expectations about the future, or
attitudes toward risk, that can make this policy the best choice. This
is because, no matter what the future value of the exogenous uncertainty
may be, the performance measure has a better outcome from Policy 1 than
from Policy 2. In this circumstance, we can say that Policy 2 is
"dominated" and should never be chosen by decision makers.
### Robustness Functions
To perform robust optimization in EMAT, we need a core model (or meta-model)
with defined input and output parameters, as well as a set of functions called
"robustness measures" that define what a "robust" measure represents.
As noted above, different expectations about future states of the world
can lead to different rankings of policy options. In addition, different
attitudes towards risk can result in different robustness measures that are
derived from the same underlying modeled performance measures.
For example, consider the *Example Simple Forecasting Experiment* shown above.
In this example, we could compute a robustness measure for each policy where
we calculate the maximum value of the performance measure across any possible
value of the exogenous uncertainty. This value is shown in the first column
of robustness measures in the figure below, and under this measure Policy 4 is
far and away the best choice.
###Code
fig, ax = plt.subplots(
    1, 6,
    sharey=True,
    gridspec_kw=dict(width_ratios=[7,1,1,1,1,1], wspace=0.05,),
    figsize=[8, 4.],
)
ax[0].plot(x, y1, **linestyles[0], label="Policy 1")
ax[0].plot(x, y2, **linestyles[1], label="Policy 2")
ax[0].plot(x, y3, **linestyles[2], label="Policy 3")
ax[0].plot(x, y4, **linestyles[3], label="Policy 4")
ax[1].plot([0,1], [y1.max()]*2, **linestyles[0], lw=2)
ax[1].plot([0,1], [y2.max()]*2, **linestyles[1], lw=2)
ax[1].plot([0,1], [y3.max()]*2, **linestyles[2], lw=2)
ax[1].plot([0,1], [y4.max()]*2, **linestyles[3], lw=2)
ax[2].plot([0,1], [y1.mean()]*2, **linestyles[0], lw=2)
ax[2].plot([0,1], [y2.mean()]*2, **linestyles[1], lw=2)
ax[2].plot([0,1], [y3.mean()]*2, **linestyles[2], lw=2)
ax[2].plot([0,1], [y4.mean()]*2, **linestyles[3], lw=2)
ax[3].plot([0,1], [numpy.median(y1)]*2, **linestyles[0], lw=2)
ax[3].plot([0,1], [numpy.median(y2)]*2, **linestyles[1], lw=2)
ax[3].plot([0,1], [numpy.median(y3)]*2, **linestyles[2], lw=2)
ax[3].plot([0,1], [numpy.median(y4)]*2, **linestyles[3], lw=2)
ax[4].plot([0,1], [numpy.percentile(y1, 90)]*2, **linestyles[0], lw=2)
ax[4].plot([0,1], [numpy.percentile(y2, 90)]*2, **linestyles[1], lw=2)
ax[4].plot([0,1], [numpy.percentile(y3, 90)]*2, **linestyles[2], lw=2)
ax[4].plot([0,1], [numpy.percentile(y4, 90)]*2, **linestyles[3], lw=2)
ax[5].plot([0,1], [y1.min()]*2, **linestyles[0], lw=2)
ax[5].plot([0,1], [y2.min()]*2, **linestyles[1], lw=2)
ax[5].plot([0,1], [y3.min()]*2, **linestyles[2], lw=2)
ax[5].plot([0,1], [y4.min()]*2, **linestyles[3], lw=2)
ax[0].set_ylabel("Performance Measure\n← Lower is Better ←")
ax[0].set_xlabel("Exogenous Uncertainty")
ax[0].legend()
ax[0].set_title("Example Simple Forecasting Experiment")
ax[3].set_title("Robustness Measures")
ax[1].set_xlabel("Max\nPM")
ax[2].set_xlabel("Mean\nPM")
ax[3].set_xlabel("Median\nPM")
ax[4].set_xlabel("90%ile\nPM")
ax[5].set_xlabel("Min\nPM")
ax[-1].yaxis.set_label_position("right")
for a in [1,2,3,4,5]:
    ax[a].set_xticks([])
    ax[a].set_yticks([])
plt.savefig("robust_measures.png")
plt.show()
###Output
_____no_output_____
###Markdown
.. image:: robust_measures.png
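###Markdown
In code, a robustness function is simply an aggregation applied to a performance measure across many sampled futures. The cell below is an added, illustrative sketch only: it assumes that `emat.Measure` accepts `variable_name` and `function` arguments for defining robustness measures (mirroring the EMA Workbench `ScalarOutcome` interface) and that `cost_of_capacity_expansion` is a measure in the Road Test scope; check your TMIP-EMAT version for the exact signature.
###Code
# Illustrative sketch of robustness measures (assumptions noted in the text above).
import functools
import numpy
from emat import Measure

minimum_net_benefit = Measure(
    'Minimum Net Benefits',
    kind=Measure.MAXIMIZE,
    variable_name='net_benefits',
    function=numpy.min,          # risk-averse: worst case across futures
)
expected_net_benefit = Measure(
    'Mean Net Benefits',
    kind=Measure.MAXIMIZE,
    variable_name='net_benefits',
    function=numpy.mean,         # risk-neutral: average across futures
)
pct90_cost = Measure(
    '90%ile Capacity Expansion Cost',
    kind=Measure.MINIMIZE,
    variable_name='cost_of_capacity_expansion',
    function=functools.partial(numpy.percentile, q=90),  # high-end cost outcome
)
###Output
_____no_output_____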
###Markdown
The "maximum performance measure result" robustness function is a veryrisk averse approach, as no consideration is given to the shape or distributionof performance measure values other than the maximum. Consider these samepolicies shown if Policy 4 is not available. In this case, the next best policy under this robustness function is Policy 1, as it has the next lowestmaximum value. However, when we look ata comparison between Policies 1 and 3 in aggregate, we might easily concludethat Policy 3 is a better choice overall: it is better than Policy 1 on average,as judged by the mean, median, and 90th percentile measures. The only reason Policy 3 appears worse than Policy 1 on the initial robustness function is that it has an especially poor outcome at one extreme end of the uncertainty distribution. Depending on our attitude towards risk on this performance measure,we may want to consider using some of these alternative robustness functions.An additional consideration here is that the various robustness measuresin this example case are all unweighted measures: they implicitly assume a uniform probability distribution for the entire range of possible values forthe exogenous uncertainty. If we are able to develop a probability distributionon our expectations for the future values of the exogenous uncertainties,we can use that probability distribution to weight the robustness functionsappropriately, creating more meaningful values, particularly for the non-extremevalue robustness functions (i.e., everything except the min and max). Mechanics of Using Optimization Policy Optimization: Search over Levers The simplest optimization tool available for TMIP-EMAT users is a *search over policy levers*, which represents multi-objectiveoptimization, manipulating policy lever values to find a Paretooptimal set of solutions, holding the exogenous uncertaintiesfixed at a particular value for each uncertainty (typically at the default values). This is often a useful first stepin exploratory analysis, even if your ultimate goal is to eventually undertake a robust optimization analysis. This lesscomplex optimization can give insights into tradeoffs betweenperformance measures and reasonable combinations of policy levers.To demonstrate a search over levers, we'll use the Road Test example model.
###Code
import emat.examples
scope, db, model = emat.examples.road_test()
###Output
_____no_output_____
###Markdown
The scope defined for a model in TMIP-EMAT will already provide information about the preferred directionality of performance measures (i.e., 'maximize' when larger values are better, 'minimize' when smaller values are better, or 'info' when we do not have a preference for bigger or smaller values but just want to be tracking the measure). We can see these preferences for any particular performance measure by inspecting the scope definition file, or by using the `info` method of `emat.Measure` instances.
###Code
scope['net_benefits'].info()
###Output
net_benefits:
shortname: Net Benefits
kind: maximize
###Markdown
To conduct an optimization search over levers, we'll use the `optimize` method of the TMIP-EMAT model class, setting the `searchover` argument to `'levers'`. In this example, we will set the number of function evaluations (`nfe`) to 10,000, although for other models you may need more or fewer to achieve a good, well-converged result. In a Jupyter notebook environment, we can monitor convergence visually in real time in the figures that will appear automatically.
###Code
result = model.optimize(
nfe=10_000,
searchover='levers',
check_extremes=1,
cache_file='./optimization_cache/road_test_search_over_levers.gz',
)
###Output
_____no_output_____
###Markdown
The `optimize` method returns an `OptimizationResult` object, which contains the resulting solutions, as well as some information about how they were derived. We can review the raw values of the solutions as a pandas DataFrame, or see the scenario values used to generate these solutions.
###Code
result.result.head()
result.scenario
###Output
_____no_output_____
###Markdown
We can visualize the set of solutions using a [parallel coordinates](https://en.wikipedia.org/wiki/Parallel_coordinates) plot. This figure is composed of a number of vertical axes, one for each column of data in the results DataFrame. Each row of the DataFrame is represented by a chord that connects across each of the vertical axes. By default, the axes representing performance measures to be minimized are inverted in the parallel coordinates, such that moving up along any performance measure axis results in a "better" outcome for that performance measure.

In the figure below, we can quickly see that nearly all of the Pareto-optimal policy solutions for our reference scenario share an amortization period of 50 years, and all share a debt type of 'Paygo'. By contrast, the set of solutions includes multiple different values for the expand capacity lever, ranging from 0 to 100. These different values offer possible tradeoffs among the performance measures: lower levels of capacity expansion (shown in yellow) will maximize net benefits and minimize the cost of the project, but they will also fail to provide much travel time savings. Conversely, larger levels of capacity expansion will provide better travel time savings, but will not perform as well on costs. It is left up to the analysts and decision makers to judge what tradeoffs to make between these conflicting goals.
###Code
q=result.par_coords()
q
###Output
_____no_output_____
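###Markdown
As a small aside, because `result.result` is an ordinary pandas DataFrame, standard pandas summaries can be used to inspect the solution set directly. The sketch below reports the minimum and maximum of each numeric column, which is also the range displayed on each vertical axis of the figure above (a point discussed further below). This is illustrative only.
###Code
# numeric range spanned by the Pareto-optimal solution set;
# these min/max values match the extents of the parallel coordinates axes
result.result.describe().loc[['min', 'max']]
###Output
_____no_output_____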
###Markdown
As noted above, nearly all, but not exactly all, of the identified Pareto-optimal solutions in this figure share an amortization period of 50 years. We can review a table of a subset of particular solutions using the `query` method of a pandas DataFrame. In this example, we may want to see a table of the instances where the amortization period is not 50 years.
###Code
result.result.query("amortization_period != 50")
###Output
_____no_output_____
###Markdown
The first row in this table shows a particular edge case: when the capacity expansion is exactly zero, all of the remaining policy levers have no effect -- the details of debt financing are irrelevant when there is no debt at all, and thus no values of the other levers result in a Pareto-optimal solution that would dominate any other such solution. The other solution shown is a different edge case, with a capacity expansion at the maximum (100). Here, the numerical difference between an amortization period of 49 years and 50 years may be too small for the algorithm to catch (i.e., it may be swallowed by a rounding error somewhere inside the calculations). In each case, an analyst with domain knowledge, who understands the underlying system being modeled, will be able to bring a more nuanced understanding of the results than can be achieved merely by applying the mathematical algorithms in TMIP-EMAT, and correctly infer that the 50-year amortization is always an optimal solution, and that the outlier solutions are not important.

Lastly, a note on the interpretation of parallel coordinates plots: for numerical parameters and measures, the range of values shown on each vertical axis is determined not by the full range of possible values; instead, the axis only displays the range of values included in the solutions being visualized. For example, the `amortization_period` axis only shows values between 47 and 50, even though the actual range of values is defined in the scope to be between 15 and 50 years. Similarly, the range of values for `net_benefits` is between 0.08 and -194.18. Because the solutions being displayed are optimal, the top value of 0.08 is (barring numerical problems) the best value of `net_benefits` that might be obtained, but the bottom value of -194.18 is by no means the worst possible `net_benefits` outcome that could arise from various different policies. Instead, this bottom value is not a worst outcome but *also an optimal value*, except it is conditional on also achieving some particular other desirable outcome. In the example shown, this other desirable outcome is a high level of travel time savings. It is left entirely up to the analyst and policy makers to judge whether this outcome is "bad" or not, relative to the other possible outcomes.

Worst Case Discovery: Search over Uncertainties

We can apply the same multiobjective optimization tool in reverse, to study the worst-case outcomes from any particular set of policy lever settings. To do so, we switch the `searchover` argument from `'levers'` to `'uncertainties'`, and set `reverse_targets` to True, which tells the optimization engine to search for the worst outcomes instead of the best ones. We'll often also want to override the reference policy values with selected particular values, although it's possible to omit this reference argument to search for the worst-case scenarios under the default policy lever settings.
###Code
worst = model.optimize(
nfe=10_000,
searchover='uncertainties',
reverse_targets = True,
check_extremes=1,
cache_file='./optimization_cache/road_test_search_over_uncs.gz',
reference={
'expand_capacity': 100.0,
'amortization_period': 50,
'debt_type': 'PayGo',
'interest_rate_lock': False,
}
)
worst.par_coords()
###Output
_____no_output_____
###Markdown
Using Robust Optimization

As discussed above, implementing robust optimization requires the definition of relevant robustness functions. Because the functional form of these functions can be so many different things depending on the particular application, TMIP-EMAT does not implement a mechanism to generate them automatically. Instead, it is left to the analyst to develop a set of robustness functions that are appropriate for each application.

Each robustness function represents an aggregate performance measure, calculated based on the results from a group of individual runs of the underlying core model (or meta-model). These runs are conducted with a common set of policy lever settings and a variety of different exogenous uncertainty scenarios. This allows us to create aggregate measures that encapsulate information from the distribution of possible outcomes, instead of for just one particular future scenario.

A robust measure is created in TMIP-EMAT using the same `Measure` class used for performance measures that are direct model outputs. Like any other measure, a robust measure has a `name` and `kind` (minimize, maximize, or info). The names used for robust measures must be unique new names that are not otherwise used in the model's scope, so you cannot use the same name as an existing performance measure. Instead, the names can (and usually should) be descriptive variants of the existing performance measures. For example, if an existing performance measure is `'net_benefits'`, you can name a robust measure `'min_net_benefits'`.

In addition to the `name` and `kind`, robust measures have two important additional attributes: a `variable_name`, which names the underlying performance measure upon which this robust measure is based, and a `function` that describes how to aggregate the results. The function should be a callable that accepts an array of performance measure values as its single argument and returns a single numeric value, which is the robust measure. For example, the code below will create a robust measure that represents the minimum net benefit across all exogenous uncertainty scenarios.
###Code
from emat import Measure
minimum_net_benefit = Measure(
name='Minimum Net Benefits',
kind=Measure.MAXIMIZE,
variable_name='net_benefits',
function=min,
)
###Output
_____no_output_____
###Markdown
As suggested earlier, this measure might be too sensitive to outliers in the set of exogenous uncertainty scenarios. We can address this by creating a different robust measure, based on the same underlying performance measure, but which is based on the mean instead of the minimum value.
###Code
expected_net_benefit = Measure(
name='Mean Net Benefits',
kind=Measure.MAXIMIZE,
variable_name='net_benefits',
function=numpy.mean,
)
###Output
_____no_output_____
###Markdown
Or we can adopt an intermediate approach, focusing on the 5th percentile instead of the minimum, which avoids being overly sensitive to the most extreme tail ends of the distribution but maintains a fairly risk-averse robustness approach.

Note that normally the `numpy.percentile` function requires two arguments instead of one: the array of values and the target percentile value. Since the `function` of the robust measure needs to accept only a single argument, we can inject the `q=5` argument here using [functools.partial](https://docs.python.org/3/library/functools.html#functools.partial).
###Code
import functools
pct5_net_benefit = Measure(
'5%ile Net Benefits',
kind = Measure.MAXIMIZE,
variable_name = 'net_benefits',
function = functools.partial(numpy.percentile, q=5),
)
###Output
_____no_output_____
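###Markdown
As an aside, `functools.partial` is a convenience rather than a requirement here: the `function` attribute simply needs to be a callable that accepts a single array argument. The sketch below defines an equivalent measure using a small named function instead, which some analysts may find easier to read; the measure name is chosen arbitrarily to avoid clashing with the one defined above.
###Code
# equivalent to functools.partial(numpy.percentile, q=5)
def percentile_5(values):
    """Return the 5th percentile of an array of performance measure values."""
    return numpy.percentile(values, 5)

pct5_net_benefit_alt = Measure(
    '5%ile Net Benefits (alt)',
    kind=Measure.MAXIMIZE,
    variable_name='net_benefits',
    function=percentile_5,
)
###Output
_____no_output_____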
###Markdown
We can also capture robustness measures that are not statistical versions of the performance measure (ones that can be contrasted directly with the performance measure outputs, like the mean or median), but rather more abstract measures, such as the percentage of scenarios where the performance measure meets some target value. For example, we can compute the percentage of scenarios for the road test example where the net benefits are negative. To do so, we will use the `percentileofscore` function from the `scipy.stats` package. For this function, set the `kind` argument to `'strict'` to count only strictly negative results -- not scenarios where the net benefits are exactly zero -- or to `'weak'` to count all non-positive results.
###Code
from scipy.stats import percentileofscore
neg_net_benefit = Measure(
'Possibility of Negative Net Benefits',
kind = Measure.MINIMIZE,
variable_name = 'net_benefits',
function = functools.partial(percentileofscore, score=0, kind='strict'),
)
###Output
_____no_output_____
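###Markdown
To make the `'strict'` versus `'weak'` distinction concrete, here is a small check on a toy array that is not part of the Road Test model: with two of five values strictly below zero and one value exactly zero, `'strict'` reports 40 percent while `'weak'` reports 60 percent.
###Code
toy_net_benefits = [-2.0, -1.0, 0.0, 1.0, 2.0]
print(percentileofscore(toy_net_benefits, score=0, kind='strict'))  # 40.0 -> strictly negative outcomes only
print(percentileofscore(toy_net_benefits, score=0, kind='weak'))    # 60.0 -> all non-positive outcomes
###Output
_____no_output_____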
###Markdown
We can of course also create robust measures based on other performance measures in the core model. For example, in the Road Test model the total cost of the capacity expansion is subject to some uncertainty, and we may want to make policy choices not just to maximize net benefits but also to keep costs in check.
###Code
pct95_cost = Measure(
'95%ile Capacity Expansion Cost',
kind = Measure.MINIMIZE,
variable_name = 'cost_of_capacity_expansion',
function = functools.partial(numpy.percentile, q = 95),
)
###Output
_____no_output_____
###Markdown
We may also be interested in finding policies that will maximize the expected time savings. Although we can easily conceptualize that these two goals are in opposition (increasing time savings pretty obviously goes hand in hand with increasing cost), we will be able to use the results of this robust optimization to visualize the tradeoffs and try to find an appropriate balance.
###Code
expected_time_savings = Measure(
'Expected Time Savings',
kind = Measure.MAXIMIZE,
variable_name = 'time_savings',
function = numpy.mean,
)
from emat.util.distributed import get_client
robust_result = model.robust_optimize(
robustness_functions=[
expected_net_benefit,
pct5_net_benefit,
neg_net_benefit,
pct95_cost,
expected_time_savings,
],
scenarios=250,
nfe=25_000,
check_extremes=1,
evaluator=get_client(),
cache_file='./optimization_cache/road_test_robust_search.gz',
)
robust_result.par_coords()
###Output
_____no_output_____
###Markdown
Constraints

The robust optimization process can be constrained to only include solutions that satisfy certain requirements. These constraints can be based on the policy lever parameters that are contained in the core model, the aggregate performance measures identified in the list of robustness functions, or some combination of levers and aggregate measures. Importantly, the constraints *cannot* be imposed on the exogenous uncertainties, nor directly on the output measures from the core models (or the equivalent meta-model). This is because the robust version of the model aggregates a number of individual core model runs, and effectively hides these two components from the optimization engine.

One way to work around this limitation, at least on the output measures, is to write robustness functions that transmit the relevant output measures through the aggregation process. For example, to constrain the robust search only to instances where a particular output measure is always positive, write a robustness function that takes the *minimum* value of the targeted performance measure, and write a constraint that ensures that this minimum value is always positive (a short sketch of this pattern appears below). This approach should be used with caution, however, as it may severely limit the search space.

For the road test example, we can define some constraints to limit the search to solutions within a restricted portion of the search space. To do so, we will use the `Constraint` class.
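Here is the sketch of the minimum-value workaround mentioned above. It is illustrative only and not part of the original Road Test walkthrough; the measure name used below is an assumption chosen so that it does not clash with anything already in the scope.
###Code
# Sketch of the workaround: carry an output measure through aggregation as a
# minimum value, then require that minimum to be positive. The robust measure
# would also need to be included in the robustness_functions list passed to
# robust_optimize, alongside a Constraint like the one below.
from emat import Measure, Constraint

min_time_savings = Measure(
    'Minimum Time Savings',          # assumed unique name, not already in the scope
    kind=Measure.INFO,
    variable_name='time_savings',    # Road Test output measure used earlier
    function=min,
)

c_time_savings_positive = Constraint(
    "Time Savings Always Positive",
    outcome_names="Minimum Time Savings",
    function=Constraint.must_be_greater_than(0),
)
###Output
_____no_output_____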
###Code
from emat import Constraint
###Output
_____no_output_____
###Markdown
Each `Constraint` needs to have a unique name (i.e., not the same as anything else in the scope or any robust measure). Each `Constraint` is also defined by one or more `parameter_names` and/or `outcome_names`, plus a `function` that will be used to determine whether the constraint is violated. The `function` should accept positional values for each of the `parameter_names` and `outcome_names`, in order, and return 0 if the constraint is not violated, and a positive number if it is violated.

Two convenient class methods are provided within the `Constraint` class: `must_be_less_than` and `must_be_greater_than`, which can simplify the creation and legibility of simple constraints on a single parameter or outcome. Each takes a single argument, the threshold of the constraint.
###Code
c_min_expansion = Constraint(
"Minimum Capacity Expansion",
parameter_names="expand_capacity",
function=Constraint.must_be_greater_than(10),
)
c_positive_mean_net_benefit = Constraint(
"Minimum Net Benefit",
outcome_names = "Mean Net Benefits",
function = Constraint.must_be_greater_than(0),
)
###Output
_____no_output_____
###Markdown
It is also possible to impose constraints based on a combination of inputs and outputs. For example, suppose that the total funds available for pay-as-you-go financing are only 3,000. We may thus want to restrict the robust search to only solutions that are almost certainly within the available funds at 99% confidence (a robustness measure that is an output we can construct), but only if the Paygo financing option is used (a model input). This kind of constraint can be created by giving both `parameter_names` and `outcome_names`, and writing a constraint function that takes two arguments.
###Code
pct99_present_cost = Measure(
'99%ile Present Cost',
kind=Measure.INFO,
variable_name='present_cost_expansion',
function=functools.partial(numpy.percentile, q=99),
)
c_max_paygo = Constraint(
"Maximum Paygo",
parameter_names='debt_type',
outcome_names='99%ile Present Cost',
function=lambda i,j: max(0, j-3000) if i=='Paygo' else 0,
)
###Output
_____no_output_____
###Markdown
The constraints are then passed to the `robust_optimize` method in addition to the other arguments.
###Code
robust_constrained = model.robust_optimize(
robustness_functions=[
expected_net_benefit,
pct5_net_benefit,
neg_net_benefit,
pct95_cost,
expected_time_savings,
pct99_present_cost,
],
constraints = [
c_min_expansion,
c_positive_mean_net_benefit,
c_max_paygo,
],
scenarios=250,
nfe=10_000,
check_extremes=1,
evaluator=get_client(),
cache_file='./optimization_cache/road_test_robust_search_constrained.gz',
)
robust_constrained.par_coords()
###Output
_____no_output_____
###Markdown
notebooks/tutorial1.ipynb | ###Markdown
###Markdown
1 - Quick start
###Code
import aseg_gdf2
from pathlib import Path

# Assumed setup: `repo` points at a local checkout of the aseg_gdf2 repository,
# which contains the example datasets read below. Adjust the path as needed.
repo = Path(".")
###Output
_____no_output_____
###Markdown
Read in a simple example GDF2 file
###Code
gdf = aseg_gdf2.read(repo / r"tests/example_datasets/3bcfc711/GA1286_Waveforms")
gdf
###Output
_____no_output_____
###Markdown
How big is the data table? `aseg_gdf2` doesn't know initially, because it is generally "lazy", only calculating things or retrieving data as requested. This is intended to allow working on very large files. You can find out the size of the data table file by accessing the `nrecords` attribute:
###Code
gdf.nrecords
gdf
gdf.field_names()
###Output
_____no_output_____
###Markdown
You can iterate over rows in the data table:
###Code
i = 0
for row in gdf.iterrows():
print(row)
i += 1
if i > 5:
break
###Output
{'Index': 0, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.0052, 'Tx_Current': 0.00176}
{'Index': 1, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.0104, 'Tx_Current': 0.00176}
{'Index': 2, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.0156, 'Tx_Current': 0.00176}
{'Index': 3, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.0208, 'Tx_Current': 0.00176}
{'Index': 4, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.026, 'Tx_Current': 0.00176}
{'Index': 5, 'FLTNUM': 1.0, 'Rx_Voltage': -0.0, 'Flight': 1, 'Time': 0.0312, 'Tx_Current': 0.00176}
###Markdown
You can also get the data table as a pandas.DataFrame:
###Code
df = gdf.df()
df.head()
print(gdf.df().head())
###Output
FLTNUM Rx_Voltage Flight Time Tx_Current
0 1.0 -0.0 1 0.0052 0.00176
1 1.0 -0.0 1 0.0104 0.00176
2 1.0 -0.0 1 0.0156 0.00176
3 1.0 -0.0 1 0.0208 0.00176
4 1.0 -0.0 1 0.0260 0.00176
###Markdown
(If the file is too big to fit in memory, you can also use dask in exactly the same way -- see [Example 3](3%20-%20Use%20dask%20to%20read%20a%20DAT%20file%20too%20big%20to%20fit%20in%20memory.ipynb)) The metadata from the definition file is there too:
###Code
gdf.record_types
###Output
_____no_output_____
###Markdown
You can also get this metadata as a DataFrame:
###Code
gdf.record_types.df()
###Output
_____no_output_____
###Markdown
Get the data just for one field/column as either a pandas Series, or an ndarray:
###Code
gdf.get_field_data('Time')
###Output
_____no_output_____
###Markdown
What about fields which are 2D arrays? Some GDF2 data files have fields with more than one value per row/record. Let's load a different example where this is the case.
###Code
gdf = aseg_gdf2.read(repo / r"tests/example_datasets/9a13704a/Mugrave_WB_MGA52.dfn")
gdf.record_types.df()
print(gdf.record_types.df()[["name", "unit", "format", "cols"]])
###Output
name unit format cols
0 RT A4 1
1 COMMENTS A76 1
0 GA_Project I10 1
1 Job_No I10 1
2 Fiducial F15.2 1
3 DATETIME days F18.10 1
4 LINE I10 1
5 Easting m F12.2 1
6 NORTH m F15.2 1
7 DTM_AHD F10.2 1
8 RESI1 F10.3 1
9 HEIGHT m F10.2 1
10 INVHEI m F10.2 1
11 DOI m F10.2 1
12 Elev m 30F12.2 30
13 Con mS/m 30F15.5 30
14 Con_doi mS/m 30F15.5 30
15 RUnc 30F12.3 30
###Markdown
See those last four fields? They have 30 columns each and are therefore each 2D arrays. They are still normal GDF fields:
###Code
gdf.field_names()
###Output
_____no_output_____
###Markdown
But we can see their representation in the data table file, as 30 separate columns each, by explicitly requesting a listing of the column names:
###Code
gdf.column_names()
###Output
_____no_output_____
###Markdown
These are represented as you'd expect in the data table's DataFrame object:
###Code
gdf.df().head()
###Output
_____no_output_____
###Markdown
We can get the data in exactly the same way as a normal "column" field.
###Code
gdf.get_field_data("Elev")
###Output
_____no_output_____
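###Markdown
As an aside that is not part of the original tutorial: if, as suggested above, `get_field_data` returns the 2D field as a numpy array with one row per record and one column per channel, then ordinary numpy operations apply directly. The exact return type is an assumption here.
###Code
elev = gdf.get_field_data("Elev")
print(elev.shape)               # expected to be (number of records, 30)
print(elev.mean(axis=1)[:5])    # mean elevation per record, first five records
###Output
_____no_output_____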
###Markdown
We can also get a combination of ordinary column fields and 2D fields:
###Code
data = gdf.get_fields_data(["Easting", "NORTH", "Elev"])
data
###Output
_____no_output_____
###Markdown
OpenPIV tutorial 1

In this tutorial we read a pair of images using `imread`, compare them visually, and process them using OpenPIV. Here we import the basic functions and methods directly.
###Code
from openpiv import tools, pyprocess, validation, filters, scaling
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import imageio
frame_a = tools.imread( '../test1/exp1_001_a.bmp' )
frame_b = tools.imread( '../test1/exp1_001_b.bmp' )
fig,ax = plt.subplots(1,2,figsize=(12,10))
ax[0].imshow(frame_a,cmap=plt.cm.gray)
ax[1].imshow(frame_b,cmap=plt.cm.gray)
winsize = 32 # pixels, interrogation window size in frame A
searchsize = 38 # pixels, search in image B
overlap = 12 # pixels, overlap between adjacent interrogation windows (12 of 32 pixels, i.e. 37.5%)
dt = 0.02 # sec, time interval between pulses
u0, v0, sig2noise = pyprocess.extended_search_area_piv(frame_a.astype(np.int32),
frame_b.astype(np.int32),
window_size=winsize,
overlap=overlap,
dt=dt,
search_area_size=searchsize,
sig2noise_method='peak2peak')
x, y = pyprocess.get_coordinates( image_size=frame_a.shape,
search_area_size=searchsize,
overlap=overlap )
u1, v1, mask = validation.sig2noise_val( u0, v0,
sig2noise,
threshold = 1.05 )
# if you need more detailed look, first create a histogram of sig2noise
# plt.hist(sig2noise.flatten())
# to see where is a reasonable limit
# filter out outliers that are very different from the
# neighbours
u2, v2 = filters.replace_outliers( u1, v1,
method='localmean',
max_iter=3,
kernel_size=3)
# convert x,y to mm
# convert u,v to mm/sec
x, y, u3, v3 = scaling.uniform(x, y, u2, v2,
scaling_factor = 96.52 ) # 96.52 microns/pixel
# 0,0 shall be bottom left, positive rotation rate is counterclockwise
x, y, u3, v3 = tools.transform_coordinates(x, y, u3, v3)
#save in the simple ASCII table format
tools.save(x, y, u3, v3, mask, 'exp1_001.txt' )
fig, ax = plt.subplots(figsize=(8,8))
tools.display_vector_field('exp1_001.txt',
ax=ax, scaling_factor=96.52,
scale=50, # scale defines here the arrow length
width=0.0035, # width is the thickness of the arrow
on_img=True, # overlay on the image
image_name='../test1/exp1_001_a.bmp');
###Output
_____no_output_____
###Markdown
One could also use some shortcuts
###Code
from openpiv import piv
piv.simple_piv(frame_a, frame_b);
piv.piv_example();
###Output
_____no_output_____ |
climate_hissey_submission.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
previous_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
previous_date
# Calculate the date 1 year ago from the last data point in the database & display
year_start_date = dt.date(2017, 8, 23) - dt.timedelta(days=365)
year_start_date
# Perform a query to retrieve the data and precipitation scores
rainfall_query = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date > year_start_date).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
rainfall_df = pd.DataFrame(rainfall_query, columns=['Date', 'Precipitation'])
rainfall_df_final = (rainfall_df.dropna()).sort_values(by="Date")
# Sort the dataframe by date
rainfall_df_final.head()
# Use Pandas Plotting with Matplotlib to plot the data
rainfall_graph = rainfall_df_final.plot(x="Date", y="Precipitation", kind="line", figsize=(20,10), fontsize=20, linewidth=7, color="blue", rot=45)
plt.legend(prop={"size":30}, loc="best")
plt.xlabel("Date", fontsize=23, labelpad=10)
plt.ylabel("Precipitation Measurement", fontsize=23, labelpad=10)
plt.title("Hawaiin Rainfall Measurements\n Date Range: Aug. 24, 2016 to Aug. 23, 2017", fontsize=30, pad=15)
plt.tight_layout()
plt.savefig("Images/Hawaii_Precipitation_Measurements.png")
plt.show()
plt.close()
# Display the row's columns and data in dictionary format
rainfall_dict = session.query(Measurement).first()
rainfall_dict.__dict__
# Use Pandas to calculate the summary statistics for the precipitation data
rainfall_df_final.describe()
# Design a query to show how many stations are available in this dataset?
all_stations_total = session.query(Station).group_by(Station.station).count()
all_stations_total
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
station_activity = session.query(Measurement.station, func.count(Measurement.station)).\
group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).all()
station_activity
# Display as a clean DF
station_activity_df = pd.DataFrame(station_activity, columns=["Station","Observation Count"])
station_activity_df
# Display data for station with most activity
station_most_activity = station_activity_df.iat[0,0]
station_observation_count = station_activity_df.iat[0,1]
print(f"The station with the most activity is: {station_most_activity}, with {station_observation_count} observations.")
# Full station info (most activity)
most_active_info = session.query(Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation, func.count(Measurement.station)).\
filter(Measurement.station == Station.station).group_by(Measurement.station).\
order_by(func.count(Measurement.station).desc()).first()
print("Station with Most Activity - - Information:")
most_active_info
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature most active station?
temp_data = session.query(func.min(Measurement.tobs),
func.max(Measurement.tobs),
func.avg(Measurement.tobs)
).filter(Measurement.station == station_most_activity).all()
print("Lowest temperature, Highest temperature, and Average Temperature (respectively):")
temp_data
# Convert to DF
temp_data_df = pd.DataFrame(temp_data, index=[f"Station: {station_most_activity}"],
columns=["Min Temp (F)", "Max Temp (F)", "Average Temp (F)"])
temp_data_df
# Query temperature data
temperature_query = session.query(Measurement.station, Measurement.tobs).\
    filter(Measurement.date > ann_start_date).all()
temperature_query[:10]
# Convert to DF
temperature_query_df = pd.DataFrame(temperature_query, columns=["Station","Observed Temperature"])
temperature_query_df.head()
# Re-index & use value counts to get station with most observations
station_temperature_df = pd.DataFrame(temperature_query_df["Station"].value_counts())
station_temperature_df
# Choose the station with the highest number of temperature observations.
greatest_observations = station_temperature_df.index[0]
print(f"The station with the highest number of temperature observations is: {greatest_observations}.")
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
USC00519397_annual = session.query(Measurement.station, Measurement.tobs).\
    filter(Measurement.date > ann_start_date).\
filter(Measurement.station == greatest_observations).all()
USC00519397_annual[:10]
# Convert to DF
USC00519397_annual_df = pd.DataFrame(USC00519397_annual, columns=["Station", "Temperature"])
USC00519397_annual_df_final = (USC00519397_annual_df.dropna()).set_index("Station")
USC00519397_annual_df_final.head()
FOT_graph = USC00519397_annual_df_final.hist(bins=13, figsize=(8,6))
plt.title("Frequency of Observed Temperatures (F)", pad=50)
plt.suptitle("Station: USC00519397\nDate Range: Aug. 24, 2016 to Aug. 23, 2017", y=0.92)
plt.xlabel("Temperature (F)")
plt.ylabel("Frequency")
plt.tight_layout()
plt.savefig("Images/Temperature_Observation Frequency.png")
plt.show()
plt.close()
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVE, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
# Reuse the existing `calc_temps` function for the trip dates, shifted to the previous year's matching dates
trip_temps = calc_temps("2016-12-01", "2016-12-12")
# function usage example
print (trip_temps)
# Convert to DF
trip_temps_df = pd.DataFrame(trip_temps, index = ["Hawaii Temperature (F)"], columns=["Lowest", "Average", "Highest"])
trip_temps_df
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
Trip_Avg_Temp_Graph = trip_temps_df.plot.bar(y='Average',
yerr=(trip_temps_df["Highest"] - trip_temps_df["Lowest"]),
title="Trip Avg Temp",
color='coral', alpha=0.5, figsize=(5,6), rot=0)
plt.ylabel("Tempeature (F)")
plt.tight_layout()
plt.savefig("Images/Average_Temperature_Hawaii.png")
plt.show()
plt.close()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
# Define start and end date for Hawaii trip
start_date = "2017-12-01"
end_date = "2017-12-12"
# Establish date range
date_range = pd.date_range(start_date, end_date)
# Convert the date range into a list
date_list = []
for date in date_range:
date_list.append(date.strftime("%Y-%m-%d"))
date_list
# Correct formatting sans year
dates_reformat = [date[5:] for date in date_list]
dates_reformat
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
per_station_rain = session.query(Station.station, Station.name, Station.latitude, Station.longitude, Station.elevation, func.sum(Measurement.prcp)).\
    filter(Measurement.station == Station.station).\
    filter(Measurement.date >= "2016-12-01").\
    filter(Measurement.date <= "2016-12-12").\
    group_by(Station.station).\
    order_by(func.sum(Measurement.prcp).desc()).all()
# Convert to DF
per_station_rain_df = pd.DataFrame(per_station_rain, columns=["Station ID", "Station Info",
"Latitude", "Longitude", "Elevation",
"Precip. Total"])
print("Total Rain by Station from 2016-12-01 to 2016-12-12:")
per_station_rain_df
###Output
Total Rain by Station from 2016-12-01 to 2016-12-12:
|
docs/_downloads/d9398fce39ca80dc4bb8b8ea55b575a8/nn_tutorial.ipynb | ###Markdown
`torch.nn` 이 *실제로* 무엇인가요?=====================================저자: Jeremy Howard, `fast.ai `_.도움: Rachel Thomas, Francisco Ingham.번역: `남상호 `_ 이 튜토리얼을 스크립트가 아닌 노트북으로 실행하기를 권장합니다. 노트북 (.ipynb) 파일을 다운 받으시려면,페이지 상단에 있는 링크를 클릭해주세요.PyTorch 는 여러분이 신경망(neural network)를 생성하고 학습시키는 것을 도와주기 위해서`torch.nn `_ ,`torch.optim `_ ,`Dataset `_ ,그리고 `DataLoader `_와 같은 잘 디자인된 모듈과 클래스들을 제공합니다.이들의 성능을 최대한 활용하고 여러분의 문제에 맞게 커스터마이즈하기 위해서,정확히 이들이 어떤 작업을 수행하는지 이해할 필요가 있습니다.이해를 증진하기 위해서, 우리는 먼저 이들 모델들로 부터 아무 피쳐도 사용하지 않고MNIST 데이터셋에 대해 기초적인 신경망을 학습시킬 것입니다;우리는 처음에는 가장 기초적인 PyTorch 텐서(tensor) 기능만을 사용할 것입니다.그리고나서 우리는 점차적으로 ``torch.nn``, ``torch.optim``, ``Dataset``, 또는``DataLoader`` 로부터 한번에 하나씩 피쳐를 추가하면서, 정확히 각 부분이 어떤 일을 하는지 그리고이것이 어떻게 코드를 더 간결하고 유연하게 만드는지 보여줄 것입니다.**이 튜토리얼은 여러분이 이미 PyTorch를 설치하였고, 그리고 텐서 연산의 기초에 대해 익숙하다고 가정합니다.**(만약 여러분이 Numpy 배열(array) 연산에 익숙하다면, 여기에서 사용되는 PyTorch 텐서 연산도거의 동일하다는 것을 알게 될 것입니다).MNIST 데이터 준비-------------------우리는 손으로 쓴 숫자(0에서 9 사이)의 흑백 이미지로 구성된 클래식`MNIST `_ 데이터셋을 사용할 것 입니다.우리는 경로 설정을 담당하는 (Python3 표준 라이브러리의 일부인)`pathlib `_ 을 사용할 것이고,`requests `_ 를 이용하여데이터셋을 다운로드 할 것입니다. 우리는 모듈을 사용할 때만 임포트(import) 할 것이므로,여러분은 매 포인트마다 정확히 어떤 것이 사용되는지 확인할 수 있습니다.
###Code
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "https://github.com/pytorch/tutorials/raw/master/_static/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
###Output
_____no_output_____
###Markdown
이 데이터셋은 numpy 배열 포맷이고, 데이터를 직렬화하기 위한python 전용 포맷 pickle 을 이용하여 저장되어 있습니다.
###Code
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
###Output
_____no_output_____
###Markdown
각 이미지는 28 x 28 형태 이고, 784 (=28x28) 크기를 가진 하나의 행으로 저장되어 있습니다.하나를 살펴 봅시다; 먼저 우리는 이 이미지를 2d로 재구성해야 합니다.
###Code
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
###Output
_____no_output_____
###Markdown
PyTorch는 numpy 배열 보다는 ``torch.tensor`` 를 사용하므로, 우리는 데이터를 변환해야 합니다.
###Code
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
###Output
_____no_output_____
###Markdown
torch.nn 없이 밑바닥부터 신경망 만들기---------------------------------------------PyTorch 텐서 연산만으로 첫 모델을 만들어봅시다.여러분이 신경망의 기초에 대해서 이미 익숙하다고 가정합니다.(만약 익숙하지 않다면 `course.fast.ai `_ 에서 학습할 수 있습니다).PyTorch는 랜덤 또는 0으로만 이루어진 텐서를 생성하는 메서드를 제공하고,우리는 간단한 선형 모델의 가중치(weights)와 절편(bias)을 생성하기 위해서 이것을 사용할 것입니다.이들은 일반적인 텐서에 매우 특별한 한 가지가 추가된 것입니다: 우리는 PyTorch에게 이들이기울기(gradient)가 필요하다고 알려줍니다.이를 통해 PyTorch는 텐서에 행해지는 모든 연산을 기록하게 하고,따라서 *자동적으로* 역전파(back-propagation) 동안에 기울기를 계산할 수 있습니다!가중치에 대해서는 ``requires_grad`` 를 초기화(initialization) **다음에** 설정합니다,왜냐하면 우리는 해당 단계가 기울기에 포함되는 것을 원치 않기 때문입니다.(PyTorch에서 ``_`` 다음에 오는 메서드 이름은 연산이 인플레이스(in-place)로 수행되는 것을 의미합니다.)Note`Xavier initialisation `_ 기법을 이용하여 가중치를 초기화 합니다. (1/sqrt(n)을 곱해주는 것을 통해서 초기화).
###Code
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
PyTorch의 기울기를 자동으로 계산해주는 기능 덕분에, Python 표준 함수(또는 호출 가능한 객체)를 모델로 사용할 수 있습니다!그러므로 간단한 선형 모델을 만들기 위해서 단순한 행렬 곱셈과 브로드캐스트(broadcast)덧셈을 사용하여 보겠습니다. 또한, 우리는 활성화 함수(activation function)가 필요하므로,`log_softmax` 를 구현하고 사용할 것입니다.PyTorch에서 많은 사전 구현된 손실 함수(loss function), 활성화 함수들이 제공되지만,일반적인 python을 사용하여 자신만의 함수를 쉽게 작성할 수 있음을 기억해주세요.PyTorch는 심지어 여러분의 함수를 위해서 빠른 GPU 또는 벡터화된 CPU 코드를 만들어줄 것입니다.
###Code
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
위에서, ``@`` 기호는 행렬 곱셈(matrix multiplication) 연산을 나타냅니다.우리는 하나의 배치(batch) 데이터(이 경우에는 64개의 이미지들)에 대하여 함수를 호출할 것입니다.이것은 하나의 *포워드 전달(forward pass)* 입니다. 이 단계에서 우리는 무작위(random) 가중치로시작했기 때문에 우리의 예측이 무작위 예측보다 전혀 나은 점이 없을 것입니다.
###Code
bs = 64 # 배치 크기
xb = x_train[0:bs] # x로부터 미니배치(mini-batch) 추출
preds = model(xb) # 예측
preds[0], preds.shape
print(preds[0], preds.shape)
###Output
_____no_output_____
###Markdown
여러분이 보시듯이, ``preds`` 텐서(tensor)는 텐서 값 외에도, 또한기울기 함수(gradient function)를 담고 있습니다.우리는 나중에 이것을 역전파(backpropagation)를 위해 사용할 것입니다.이제 손실함수(loss function)로 사용하기 위한 음의 로그 우도(negative log-likelihood)를구현합시다. (다시 말하지만, 우리는 표준 Python을 사용할 수 있습니다.):
###Code
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
###Markdown
우리의 무작위 모델에 대한 손실을 점검해봅시다, 그럼으로써 우리는 나중에 역전파 이후에 개선이 있는지확인할 수 있습니다.
###Code
yb = y_train[0:bs]
print(loss_func(preds, yb))
###Output
_____no_output_____
###Markdown
또한, 우리 모델의 정확도(accuracy)를 계산하기 위한 함수를 구현합시다.매 예측마다, 만약 가장 큰 값의 인덱스가 목표값(target value)과 동일하다면,그 예측은 올바른 것입니다.
###Code
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
###Output
_____no_output_____
###Markdown
우리의 무작위 모델의 정확도를 점검해 봅시다, 그럼으로써 손실이 개선됨에 따라서 정확도가 개선되는지확인할 수 있습니다.
###Code
print(accuracy(preds, yb))
###Output
_____no_output_____
###Markdown
이제 우리는 훈련 루프(training loop)를 실행할 수 있습니다. 매 반복마다, 우리는 다음을 수행할 것입니다:- 데이터의 미니배치를 선택 (``bs`` 크기)- 모델을 이용하여 예측 수행- 손실 계산- ``loss.backward()`` 를 이용하여 모델의 기울기 업데이트, 이 경우에는, ``weights`` 와 ``bias``.이제 우리는 이 기울기들을 이용하여 가중치와 절편을 업데이트 합니다.우리는 이것을 ``torch.no_grad()`` 컨텍스트 매니져(context manager) 내에서 실행합니다,왜냐하면 이러한 실행이 다음 기울기의 계산에 기록되지 않기를 원하기 때문입니다.PyTorch의 자동 기울기(Autograd)가 어떻게 연산을 기록하는지`여기 `_ 에서 더 알아볼 수 있습니다.우리는 그러고나서 기울기를 0으로 설정합니다, 그럼으로써 다음 루프(loop)에 준비하게 됩니다.그렇지 않으면, 우리의 기울기들은 일어난 모든 연산의 누적 집계를 기록하게 되버립니다.(즉, ``loss.backward()`` 가 이미 저장된 것을 대체하기보단, 기존 값에 기울기를 *더하게* 됩니다)... tip:: 여러분들은 PyTorch 코드에 대하여 표준 python 디버거(debugger)를 사용할 수 있으므로, 매 단계마다 다양한 변수 값을 점검할 수 있습니다. 아래에서 ``set_trace()`` 를 주석 해제하여 사용해보세요.
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # 학습률(learning rate)
epochs = 2 # 훈련에 사용할 에폭(epoch) 수
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
###Markdown
이제 다 됐습니다: 우리는 제일 간단한 신경망(neural network)의 모든 것을 밑바닥부터 생성하고훈련하였습니다! (이번에는 은닉층(hidden layer)이 없기 때문에,로지스틱 회귀(logistic regression)입니다).이제 손실과 정확도를 이전 값들과 비교하면서 확인해봅시다.우리는 손실은 감소하고, 정확도는 증가하기를 기대할 것이고, 그들은 아래와 같습니다.
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
torch.nn.functional 사용하기------------------------------이제 우리는 코드를 리팩토링(refactoring) 하겠습니다, 그럼으로써 이전과 동일하지만,PyTorch의 ``nn`` 클래스의 장점을 활용하여 더 간결하고 유연하게 만들 것입니다.지금부터 매 단계에서, 우리는 코드를 더 짧고, 이해하기 쉽고, 유연하게 만들어야 합니다.처음이면서 우리의 코드를 짧게 만들기 가장 쉬운 단계는 직접 작성한 활성화, 손실 함수를``torch.nn.functional`` 의 함수로 대체하는 것입니다(관례에 따라, 일반적으로 ``F`` 네임스페이스(namespace)를 통해 임포트(import) 합니다).이 모듈에는 ``torch.nn`` 라이브러리의 모든 함수가 포함되어 있습니다(라이브러리의 다른 부분에는 클래스가 포함되어 있습니다.)다양한 손실 및 활성화 함수 뿐만 아니라, 풀링(pooling) 함수와 같이 신경망을 만드는데편리한 몇 가지 함수도 여기에서 찾을 수 있습니다.(컨볼루션(convolution) 연산, 선형(linear) 레이어, 등을 수행하는 함수도 있지만,앞으로 보시겠지만 일반적으로 라이브러리의 다른 부분을 사용하여 더 잘 처리 할 수 있습니다.)만약 여러분들이 음의 로그 우도 손실과 로그 소프트맥스 (log softmax) 활성화 함수를 사용하는 경우,Pytorch는 이 둘을 결합하는 단일 함수인 ``F.cross_entropy`` 를 제공합니다.따라서 모델에서 활성화 함수를 제거할 수도 있습니다.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
###Output
_____no_output_____
###Markdown
더이상 ``model`` 함수에서 ``log_softmax`` 를 호출하지 않고 있습니다. 손실과 정확도가 이전과 동일한지 확인해봅시다:
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
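###Markdown
(참고) ``F.cross_entropy`` 가 정말로 위에서 직접 구현한 ``log_softmax`` 와 ``nll`` 의 조합과 같은 값을 주는지 간단히 비교해 볼 수 있습니다. 아래는 앞에서 정의한 ``log_softmax`` 와 ``nll`` 함수가 아직 메모리에 남아 있다고 가정한 간단한 확인용 스케치입니다.
###Code
# F.cross_entropy 와 (log_softmax + nll) 조합이 같은 손실 값을 주는지 확인하는 스케치입니다.
logits = model(xb)                       # 모델은 이제 로짓(logit)을 반환합니다
combined = F.cross_entropy(logits, yb)   # 두 단계를 하나로 합친 PyTorch 함수
manual = nll(log_softmax(logits), yb)    # 위에서 직접 구현한 함수들의 조합
print(combined, manual)                  # 부동소수점 오차 범위 내에서 동일해야 합니다
###Output
_____no_output_____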
###Markdown
nn.Module 을 이용하여 리팩토링 하기--------------------------------------다음으로, 더 명확하고 간결한 훈련 루프를 위해 ``nn.Module`` 및 ``nn.Parameter`` 를 사용합니다.우리는 ``nn.Module`` (자체가 클래스이고 상태를 추척할 수 있는) 하위 클래스(subclass)를 만듭니다.이 경우에는, 포워드(forward) 단계에 대한 가중치, 절편, 그리고 메소드(method) 등을 유지하는클래스를 만들고자 합니다.``nn.Module`` 은 우리가 사용할 몇 가지 속성(attribute)과 메소드를 (``.parameters()`` 와``.zero_grad()`` 같은) 가지고 있습니다.Note``nn.Module`` (대문자 M) 은 PyTorch 의 특정 개념이고, 우리는 이 클래스를 많이 사용할 것입니다. ``nn.Module`` 를 Python 의 코드를 임포트하기 위한 코드 파일인 `module `_ (소문자 ``m``) 의 개념과 헷갈리지 말아주세요.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
함수를 사용하는 대신에 이제는 오브젝트(object) 를 사용하기 때문에,먼저 모델을 인스턴스화(instantiate) 해야 합니다:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
###Markdown
이제 우리는 이전과 동일한 방식으로 손실을 계산할 수 있습니다.여기서 ``nn.Module`` 오브젝트들은 마치 함수처럼 사용됩니다 (즉, 이들은 *호출가능* 합니다),그러나 배후에서 Pytorch 는 우리의 ``forward`` 메소드를 자동으로 호출합니다.
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
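###Markdown
(참고) ``nn.Parameter`` 로 정의한 가중치와 절편이 ``nn.Module`` 에 자동으로 등록되었는지는 ``named_parameters()`` 로 확인해 볼 수 있습니다. 아래는 간단한 확인용 스케치입니다.
###Code
# 모델에 등록된 파라미터들의 이름, 크기, 기울기 필요 여부를 출력하는 스케치입니다.
for name, p in model.named_parameters():
    print(name, p.shape, p.requires_grad)
###Output
_____no_output_____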
###Markdown
이전에는 훈련 루프를 위해 이름 별로 각 매개변수(parameter)의 값을 업데이트하고 다음과 같이각 매개 변수에 대한 기울기들을 개별적으로 수동으로 0으로 제거해야 했습니다::: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()이제 우리는 model.parameters() 및 model.zero_grad() (모두``nn.Module`` 에 대해 PyTorch에 의해 정의됨)를 활용하여 이러한 단계를 더 간결하게만들고, 특히 더 복잡한 모델에 대해서 일부 매개변수를 잊어 버리는 오류를 덜 발생시킬 수 있습니다::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()이제 이것을 나중에 다시 실행할 수 있도록 ``fit`` 함수로 작은 훈련 루프를 감쌀 것입니다.
###Code
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
손실이 줄어들었는지 다시 한번 확인합시다:
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
nn.Linear 를 이용하여 리팩토링 하기------------------------------------계속해서 코드를 리팩토링 합니다. ``self.weights`` 및 ``self.bias`` 를 수동으로 정의 및초기화하고, ``xb @ self.weights + self.bias`` 를 계산하는 대신에,위의 모든 것을 해줄 Pytorch 클래스인`nn.Linear `_ 를 선형레이어로 사용합니다.Pytorch 에는 다양한 유형의 코드를 크게 단순화 할 수 있는 미리 정의된 레이어가 있고 이는 또한종종 기존 코드보다 속도를 빠르게 합니다.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
###Output
_____no_output_____
###Markdown
이전과 같은 방식으로 모델을 인스턴스화하고 손실을 계산합니다:
###Code
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
우리는 여전히 이전과 동일한 ``fit`` 메소드를 사용할 수 있습니다.
###Code
fit()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
optim 을 이용하여 리팩토링 하기---------------------------------Pytorch에는 다양한 최적화(optimization) 알고리즘을 가진 패키지인 ``torch.optim`` 도 있습니다.각 매개변수를 수동으로 업데이트 하는 대신, 옵티마이저(optimizer)의 ``step`` 메소드를 사용하여업데이트를 진행할 수 있습니다.이렇게 하면 이전에 수동으로 코딩한 최적화 단계를 대체할 수 있습니다::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()대신에 이렇게 말이죠::: opt.step() opt.zero_grad()(``optim.zero_grad()`` 는 기울기를 0으로 재설정 해줍니다. 다음 미니 배치에 대한기울기를 계산하기 전에 호출해야 합니다.)
###Code
from torch import optim
###Output
_____no_output_____
###Markdown
나중에 다시 사용할 수 있도록 모델과 옵티마이져를 만드는 작은 함수를 정의합니다.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Dataset 을 이용하여 리팩토링하기----------------------------------PyTorch 에는 추상 Dataset 클래스가 있습니다. Dataset 은``__len__`` 함수 (Python의 표준 ``len`` 함수에 의해 호출됨) 및``__getitem__`` 함수를 가진 어떤 것이라도 될 수 있으며, 이 함수들을 인덱싱(indexing)하기위한 방법으로 사용합니다.`이 튜토리얼 `_은 ``Dataset`` 의 하위 클래스로써, 사용자 지정 ``FacialLandmarkDataset`` 클래스를 만드는좋은 예를 제시합니다.PyTorch 의 `TensorDataset `_은 텐서를 감싸는(wrapping) Dataset 입니다.길이와 인덱싱 방식을 정의함으로써 텐서의 첫 번째 차원을 따라 반복, 인덱싱 및 슬라이스(slice)하는 방법도 제공합니다.이렇게하면 훈련 할 때 동일한 라인에서 독립(independent) 변수와 종속(dependent) 변수에 쉽게 액세스 할 수 있습니다.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
``x_train`` 및 ``y_train`` 모두 하나의 ``TensorDataset`` 에 합쳐질 수 있습니다,따라서 반복시키고 슬라이스 하기 편리합니다.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
###Markdown
이전에는 x 및 y 값의 미니 배치를 별도로 반복해야했습니다::: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]이제 이 두 단계를 함께 수행 할 수 있습니다::: xb,yb = train_ds[i*bs : i*bs+bs]
###Code
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
DataLoader 를 이용하여 리팩토링하기-----------------------------------Pytorch 의 ``DataLoader`` 는 배치 관리를 담당합니다.여러분들은 모든 ``Dataset`` 으로부터 ``DataLoader`` 를 생성할 수 있습니다.``DataLoader`` 는 배치들에 대해서 반복하기 쉽게 만들어줍니다.``train_ds[i*bs : i*bs+bs]`` 를 사용하는 대신,DataLoader 는 매 미니배치를 자동적으로 제공합니다.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
###Markdown
이전에는 루프가 다음과 같이 배치 (xb, yb)를 반복했습니다::: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb)이제 (xb, yb)가 DataLoader 에서 자동으로 로드되므로 루프가 훨씬 깨끗해졌습니다::: for xb,yb in train_dl: pred = model(xb)
###Code
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Pytorch의 nn.Module, nn.Parameter, Dataset 및 DataLoader 덕분에 이제 훈련 루프가훨씬 더 작아지고 이해하기 쉬워졌습니다.이제 실제로 효과적인 모델을 만드는 데 필요한 기본 기능을 추가해 보겠습니다.검증(validation) 추가하기---------------------------섹션 1에서, 우리는 훈련 데이터에 사용하기 위해 합리적인 훈련 루프를 설정하려고했습니다.실전에서, 여러분들은 과적합(overfitting)을 확인하기 위해서 **항상**`검증 데이터셋(validation set) `_ 이있어야 합니다.훈련 데이터를 섞는(shuffling) 것은 배치와 과적합 사이의 상관관계를 방지하기 위해`중요합니다. `_반면에, 검증 손실(validation loss)은 검증 데이터셋을 섞든 안섞든 동일합니다.데이터를 섞는 것은 추가 시간이 걸리므로, 검증 데이터를 섞는 것은 의미가 없습니다.검증 데이터셋에 대한 배치 크기는 학습 데이터셋 배치 크기의 2배를 사용할 것입니다.이는 검증 데이터셋에 대해서는 역전파(backpropagation)가 필요하지 않으므로 메모리를덜 사용하기 때문입니다 (기울기를 저장할 필요가 없음).더 큰 배치 크기를 사용하여 손실을 더 빨리 계산하기 위해 이렇게 합니다.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
###Output
_____no_output_____
###Markdown
각 에폭이 끝날 때 검증 손실을 계산하고 프린트 할 것입니다.(훈련 전에 항상 ``model.train()`` 을 호출하고, 추론(inference) 전에 ``model.eval()``을 호출합니다, 이는 ``nn.BatchNorm2d`` 및 ``nn.Dropout`` 과 같은 레이어에서이러한 다른 단계(훈련, 추론) 에 대한 적절한 동작이 일어나게 하기 위함입니다.)
###Code
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
###Output
_____no_output_____
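###Markdown
(참고) 검증 손실과 함께 검증 정확도도 보고 싶다면, 앞에서 정의한 ``accuracy`` 함수를 평가 루프에 그대로 재사용할 수 있습니다. 아래는 배치별 정확도를 단순 평균하는 간단한 스케치입니다 (마지막 배치가 더 작을 수 있다는 점은 무시합니다).
###Code
# 검증 데이터셋에 대한 배치별 정확도의 평균을 계산하는 스케치입니다.
model.eval()
with torch.no_grad():
    valid_acc = sum(accuracy(model(xb), yb) for xb, yb in valid_dl) / len(valid_dl)
print(valid_acc)
###Output
_____no_output_____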
###Markdown
fit() 와 get_data() 생성하기----------------------------------이제 우리는 우리만의 작은 리팩토링을 수행할 것입니다.훈련 데이터셋과 검증 데이터셋 모두에 대한 손실을 계산하는 유사한 프로세스를 두 번 거치므로,이를 하나의 배치에 대한 손실을 계산하는 자체 함수 ``loss_batch`` 로 만들어보겠습니다.훈련 데이터셋에 대한 옵티마이저를 전달하고 이를 사용하여 역전파를 수행합니다.검증 데이터셋의 경우 옵티마이저를 전달하지 않으므로 메소드가 역전파를 수행하지 않습니다.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
###Markdown
``fit`` 은 모델을 훈련하고 각 에폭에 대한 훈련 및 검증 손실을 계산하는 작업을 수행합니다.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
``get_data`` 는 학습 및 검증 데이터셋에 대한 dataloader 를 출력합니다.
###Code
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
###Output
_____no_output_____
###Markdown
이제 dataloader를 가져오고 모델을 훈련하는 전체 프로세스를 3 줄의 코드로 실행할 수 있습니다:
###Code
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
이러한 기본 3줄의 코드를 사용하여 다양한 모델을 훈련할 수 있습니다.컨볼루션 신경망(CNN)을 훈련하는 데 사용할 수 있는지 살펴 보겠습니다!CNN 으로 넘어가기--------------------이제 3개의 컨볼루션 레이어로 신경망을 구축할 것입니다.이전 섹션의 어떤 함수도 모델의 형식에 대해 가정하지 않기 때문에,별도의 수정없이 CNN을 학습하는 데 사용할 수 있습니다.Pytorch 의 사전정의된`Conv2d `_ 클래스를컨볼루션 레이어로 사용합니다. 3개의 컨볼루션 레이어로 CNN을 정의합니다.각 컨볼루션 뒤에는 ReLU가 있습니다. 마지막으로 평균 풀링(average pooling)을 수행합니다.(``view`` 는 PyTorch의 numpy ``reshape`` 버전입니다.)
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
###Output
_____no_output_____
###Markdown
`모멘텀(Momentum) `_ 은이전 업데이트도 고려하고 일반적으로 더 빠른 훈련으로 이어지는 확률적 경사하강법(stochastic gradient descent)의 변형입니다.
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
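###Markdown
(참고) 이 CNN이 가진 학습 가능한 파라미터 개수가 궁금하다면 아래처럼 세어볼 수 있습니다. 간단한 확인용 스케치입니다.
###Code
# 학습 가능한(requires_grad=True) 파라미터의 전체 개수를 세는 스케치입니다.
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_params)
###Output
_____no_output_____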
###Markdown
nn.Sequential------------------------``torch.nn`` 에는 코드를 간단히 사용할 수 있는 또 다른 편리한 클래스인`Sequential `_이 있습니다..``Sequential`` 객체는 그 안에 포함된 각 모듈을 순차적으로 실행합니다.이것은 우리의 신경망을 작성하는 더 간단한 방법입니다.이를 활용하려면 주어진 함수에서 **사용자정의 레이어(custom layer)** 를 쉽게정의할 수 있어야 합니다.예를 들어, PyTorch에는 `view` 레이어가 없으므로 우리의 신경망 용으로 만들어야 합니다.``Lambda`` 는 ``Sequential`` 로 신경망을 정의할 때 사용할 수 있는 레이어를 생성할 것입니다.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
``Sequential`` 로 생성된 모델은 간단하게 아래와 같습니다:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
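###Markdown
(참고) ``nn.Sequential`` 로 만든 모델의 레이어 구성은 모델 객체를 그대로 출력해서 간단히 살펴볼 수 있습니다.
###Code
# Sequential 모델에 포함된 레이어들을 순서대로 출력하는 스케치입니다.
print(model)
###Output
_____no_output_____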
###Markdown
DataLoader 감싸기-----------------------------우리의 CNN은 상당히 간결하지만, MNIST에서만 작동합니다, 왜냐하면: - 입력이 28\*28의 긴 벡터라고 가정합니다. - 최종적으로 CNN 그리드 크기는 4\*4 라고 가정합니다. (이것은 우리가 사용한 평균 풀링 커널 크기 때문입니다.)이 두 가지 가정을 제거하여 모델이 모든 2d 단일 채널(channel) 이미지에서 작동하도록 하겠습니다.먼저 초기 Lambda 레이어를 제거하고 데이터 전처리를 제네레이터(generator)로 이동시킬 수 있습니다:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
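###Markdown
(참고) ``WrappedDataLoader`` 가 기대한 형태([배치 크기, 1, 28, 28])로 배치를 돌려주는지 한 배치만 꺼내어 확인해 볼 수 있습니다. 간단한 확인용 스케치입니다.
###Code
# 전처리가 적용된 첫 번째 배치의 형태(shape)를 확인하는 스케치입니다.
xb_check, yb_check = next(iter(train_dl))
print(xb_check.shape, yb_check.shape)
###Output
_____no_output_____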
###Markdown
다음으로 ``nn.AvgPool2d`` 를 ``nn.AdaptiveAvgPool2d`` 로 대체하여 우리가 가진*입력* 텐서가 아니라 원하는 *출력* 텐서의 크기를 정의할 수 있습니다.결과적으로 우리 모델은 모든 크기의 입력과 함께 작동합니다.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
한번 실행해 봅시다:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
GPU 사용하기---------------만약 여러분들이 운이 좋아서 CUDA 지원 GPU (대부분의 클라우드 제공 업체에서시간당 약 $0.50 에 이용할 수 있습니다) 를 사용할 수 있다면, 코드 실행 속도를 높일 수 있습니다.먼저 GPU가 Pytorch에서 작동하는지 확인합니다:
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
그리고 이에 대한 디바이스 오브젝트를 생성합니다:
###Code
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
###Output
_____no_output_____
###Markdown
GPU로 배치를 옮기도록 ``preprocess`` 를 업데이트 합시다:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
마지막으로 모델을 GPU로 이동시킬 수 있습니다.
###Code
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
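###Markdown
(참고) 모델의 파라미터가 실제로 의도한 디바이스(GPU 또는 CPU)로 옮겨졌는지 다음과 같이 확인해 볼 수 있습니다. 간단한 확인용 스케치입니다.
###Code
# 첫 번째 파라미터가 올라가 있는 디바이스를 출력하는 스케치입니다.
print(next(model.parameters()).device)
###Output
_____no_output_____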
###Markdown
이제 더 빨리 실행됩니다:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
`torch.nn` 이 *실제로* 무엇인가요?===============================저자: Jeremy Howard, `fast.ai `_.도움: Rachel Thomas, Francisco Ingham.번역: `남상호 `_ 이 튜토리얼을 스크립트가 아닌 노트북으로 실행하기를 권장합니다. 노트북 (.ipynb) 파일을 다운 받으시려면,페이지 상단에 있는 링크를 클릭해주세요.PyTorch 는 여러분이 신경망(neural network)를 생성하고 학습시키는 것을 도와주기 위해서`torch.nn `_ ,`torch.optim `_ ,`Dataset `_ ,그리고 `DataLoader `_와 같은 잘 디자인된 모듈과 클래스들을 제공합니다.이들의 성능을 최대한 활용하고 여러분의 문제에 맞게 커스터마이즈하기 위해서,정확히 이들이 어떤 작업을 수행하는지 이해할 필요가 있습니다.이해를 증진하기 위해서, 우리는 먼저 이들 모델들로 부터 아무 피쳐도 사용하지 않고MNIST 데이터셋에 대해 기초적인 신경망을 학습시킬 것입니다;우리는 처음에는 가장 기초적인 PyTorch 텐서(tensor) 기능만을 사용할 것입니다.그리고나서 우리는 점차적으로 ``torch.nn``, ``torch.optim``, ``Dataset``, 또는``DataLoader`` 로부터 한번에 하나씩 피쳐를 추가하면서, 정확히 각 부분이 어떤 일을 하는지 그리고이것이 어떻게 코드를 더 정확하고 유연하게 만드는지 보여줄 것입니다.**이 튜토리얼은 여러분이 이미 PyTorch를 설치하였고, 그리고 텐서 연산의 기초에 대해 익숙하다고 가정합니다.**(만약 여러분이 Numpy 배열(array) 연산에 익숙하다면, 여기에서 사용되는 PyTorch 텐서 연산도거의 동일하다는 것을 알게 될 것입니다).MNIST 데이터 준비----------------우리는 손으로 쓴 숫자(0에서 9 사이)의 흑백 이미지로 구성된 클래식`MNIST `_ 데이터셋을 사용할 것 입니다.우리는 경로 설정을 담당하는 (Python3 표준 라이브러리의 일부인)`pathlib `_ 을 사용할 것이고,`requests `_ 를 이용하여데이터셋을 다운로드 할 것입니다. 우리는 모듈을 사용할 때만 임포트(import) 할 것이므로,여러분은 매 포인트마다 정확히 어떤 것이 사용되는지 확인할 수 있습니다.
###Code
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
###Output
_____no_output_____
###Markdown
이 데이터셋은 numpy 배열 포맷이고, 데이터를 직렬화하기 위한python 전용 포맷 pickle 을 이용하여 저장되어 있습니다.
###Code
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
###Output
_____no_output_____
###Markdown
각 이미지는 28 x 28 형태 이고, 784 (=28x28) 크기를 가진 하나의 행으로 저장되어 있습니다.하나를 살펴 봅시다; 먼저 우리는 이 이미지를 2d로 재구성해야 합니다.
###Code
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
###Output
_____no_output_____
###Markdown
PyTorch는 numpy 배열 보다는 ``torch.tensor`` 를 사용하므로, 우리는 데이터를 변환해야 합니다.
###Code
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
###Output
_____no_output_____
###Markdown
torch.nn 없이 밑바닥부터 신경망 만들기---------------------------------------------PyTorch 텐서 연산만으로 첫 모델을 만들어봅시다.여러분이 신경망의 기초에 대해서 이미 익숙하다고 가정합니다.(만약 익숙하지 않다면 `course.fast.ai `_ 에서 학습할 수 있습니다).PyTorch는 랜덤 또는 0으로만 이루어진 텐서를 생성하는 메서드를 제공하고,우리는 간단한 선형 모델의 가중치(weights)와 절편(bias)을 생성하기 위해서 이것을 사용할 것입니다.이들은 일반적인 텐서에 매우 특별한 한 가지가 추가된 것입니다: 우리는 PyTorch에게 이들이기울기(gradient)가 필요하다고 알려줍니다.이를 통해 PyTorch는 텐서에 행해지는 모든 연산을 기록하게 하고,따라서 *자동적으로* 역전파(back-propagation) 동안에 기울기를 계산할 수 있습니다!가중치에 대해서는 ``requires_grad`` 를 초기화(initialization) **다음에** 설정합니다,왜냐하면 우리는 해당 단계가 기울기에 포함되는 것을 원치 않기 때문입니다.(PyTorch에서 ``_`` 다음에 오는 메서드 이름은 연산이 인플레이스(in-place)로 수행되는 것을 의미합니다.)Note`Xavier initialisation `_ 기법을 이용하여 가중치를 초기화 합니다. (1/sqrt(n)을 곱해주는 것을 통해서 초기화).
###Code
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
PyTorch의 기울기를 자동으로 계산해주는 기능 덕분에, Python 표준 함수(또는 호출 가능한 객체)를 모델로 사용할 수 있습니다!그러므로 간단한 선형 모델을 만들기 위해서 단순한 행렬 곱셈과 브로드캐스트(broadcast)덧셈을 사용하여 보겠습니다. 또한, 우리는 활성화 함수(activation function)가 필요하므로,`log_softmax` 를 구현하고 사용할 것입니다.PyTorch에서 많은 사전 구현된 손실 함수(loss function), 활성화 함수들이 제공되지만,일반적인 python을 사용하여 자신만의 함수를 쉽게 작성할 수 있음을 기억해주세요.PyTorch는 심지어 여러분의 함수를 위해서 빠른 GPU 또는 벡터화된 CPU 코드를 만들어줄 것입니다.
###Code
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
위에서, ``@`` 기호는 점곱(dot product) 연산을 나타냅니다.우리는 하나의 배치(batch) 데이터(이 경우에는 64개의 이미지들)에 대하여 함수를 호출할 것입니다.이것은 하나의 *포워드 전달(forward pass)* 입니다. 이 단계에서 우리는 무작위(random) 가중치로시작했기 때문에 우리의 예측이 무작위 예측보다 전혀 나은 점이 없을 것입니다.
###Code
bs = 64 # 배치 사이즈
xb = x_train[0:bs] # x로부터 미니배치(mini-batch) 추출
preds = model(xb) # 예측
preds[0], preds.shape
print(preds[0], preds.shape)
###Output
_____no_output_____
###Markdown
여러분이 보시듯이, ``preds`` 텐서(tensor)는 텐서 값 외에도, 또한기울기 함수(gradient function)를 담고 있습니다.우리는 나중에 이것을 역전파(backpropagation)를 위해 사용할 것입니다.이제 손실함수(loss function)로 사용하기 위한 음의 로그 우도(negative log-likelihood)를구현합시다. (다시 말하지만, 우리는 표준 Python을 사용할 수 있습니다.):
###Code
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
###Markdown
우리의 무작위 모델에 대한 손실을 점검해봅시다, 그럼으로써 우리는 나중에 역전파 이후에 개선이 있는지확인할 수 있습니다.
###Code
yb = y_train[0:bs]
print(loss_func(preds, yb))
###Output
_____no_output_____
###Markdown
또한, 우리 모델의 정확도(accuracy)를 계산하기 위한 함수를 구현합시다.매 예측마다, 만약 가장 큰 값의 인덱스가 목표값(target value)과 동일하다면,그 예측은 올바른 것입니다.
###Code
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
###Output
_____no_output_____
###Markdown
우리의 무작위 모델의 정확도를 점검해 봅시다, 그럼으로써 손실이 개선됨에 따라서 정확도가 개선되는지확인할 수 있습니다.
###Code
print(accuracy(preds, yb))
###Output
_____no_output_____
###Markdown
이제 우리는 훈련 루프(training loop)를 실행할 수 있습니다. 매 반복마다, 우리는 다음을 수행할 것입니다:- 데이터의 미니배치를 선택 (``bs`` 사이즈)- 모델을 이용하여 예측 수행- 손실 계산- ``loss.backward()`` 를 이용하여 모델의 기울기 업데이트, 이 경우에는, ``weights`` 와 ``bias``.이제 우리는 이 기울기들을 이용하여 가중치와 절편을 업데이트 합니다.우리는 이것을 ``torch.no_grad()`` 컨텍스트 매니져(context manager) 내에서 실행합니다,왜냐하면 이러한 실행이 다음 기울기의 계산에 기록되지 않기를 원하기 때문입니다.PyTorch의 자동 기울기(Autograd)가 어떻게 연산을 기록하는지`여기 `_ 에서 더 알아볼 수 있습니다.우리는 그러고나서 기울기를 0으로 설정합니다, 그럼으로써 다음 루프(loop)에 준비하게 됩니다.그렇지 않으면, 우리의 기울기들은 일어난 모든 연산의 누적 집계를 기록하게 되버립니다.(즉, ``loss.backward()`` 가 이미 저장된 것을 대체하기보단, 기존 값에 기울기를 *더하게* 됩니다)... tip:: 여러분들은 PyTorch 코드에 대하여 표준 python 디버거(debugger)를 사용할 수 있으므로, 매 단계마다 다양한 변수 값을 점검할 수 있습니다. 아래에서 ``set_trace()`` 를 주석 해제하여 사용해보세요.
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # 학습률(learning rate)
epochs = 2 # 훈련에 사용할 에포크(epoch) 수
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
###Markdown
이제 다 됐습니다: 우리는 제일 간단한 신경망(neural network)의 모든 것을 밑바닥부터 생성하고훈련하였습니다! (이번에는 은닉층(hidden layer)이 없기 때문에,로지스틱 회귀(logistic regression)입니다).이제 손실과 정확도를 이전 값들과 비교하면서 확인해봅시다.우리는 손실은 감소하고, 정확도는 증가하기를 기대할 것이고, 그들은 아래와 같습니다.
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
torch.nn.functional 사용하기------------------------------이제 우리는 코드를 리팩토링(refactoring) 하겠습니다, 그럼으로써 이전과 동일하지만,PyTorch의 ``nn`` 클래스의 장점을 활용하여 더 간결하고 유연하게 만들 것입니다.지금부터 매 단계에서, 우리는 코드를 더 짧고, 이해하기 쉽고, 유연하게 만들어야 합니다.처음이면서 우리의 코드를 짧게 만들기 가장 쉬운 단계는 직접 작성한 활성화, 손실 함수를``torch.nn.functional`` 의 함수로 대체하는 것입니다(관례에 따라, 일반적으로 ``F`` 네임스페이스(namespace)를 통해 임포트(import) 합니다).이 모듈에는 ``torch.nn`` 라이브러리의 모든 함수가 포함되어 있습니다(라이브러리의 다른 부분에는 클래스가 포함되어 있습니다.)다양한 손실 및 활성화 함수 뿐만 아니라, 풀링(pooling) 함수와 같이 신경망을 만드는데편리한 몇 가지 함수도 여기에서 찾을 수 있습니다.(컨볼루션(convolution) 연산, 선형(linear) 레이어, 등을 수행하는 함수도 있지만,앞으로 보시겠지만 일반적으로 라이브러리의 다른 부분을 사용하여 더 잘 처리 할 수 있습니다.)만약 여러분들이 음의 로그 우도 손실과 로그 소프트맥스 (log softmax) 활성화 함수를 사용하는 경우,Pytorch는 이 둘을 결합하는 단일 함수인 ``F.cross_entropy`` 를 제공합니다.따라서 모델에서 활성화 함수를 제거할 수도 있습니다.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
###Output
_____no_output_____
###Markdown
더이상 ``model`` 함수에서 ``log_softmax`` 를 호출하지 않고 있습니다.손실과 정확도과 이전과 동일한지 확인해봅시다:
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
nn.Module 을 이용하여 리팩토링 하기------------------------------다음으로, 더 명확하고 간결한 훈련 루프를 위해 ``nn.Module`` 및 ``nn.Parameter`` 를 사용합니다.우리는 ``nn.Module`` (자체가 클래스이고 상태를 추척할 수 있는) 하위 클래스(subclass)를 만듭니다.이 경우에는, 포워드(forward) 단계에 대한 가중치, 절편, 그리고 메서드(method) 등을 유지하는클래스를 만들고자 합니다.``nn.Module`` 은 우리가 사용할 몇 가지 속성(attribute)과 메서드를 (``.parameters()`` 와``.zero_grad()`` 같은) 가지고 있습니다.Note``nn.Module`` (대문자 M) 은 PyTorch 의 특정 개념이고, 우리는 이 클래스를 많이 사용할 것입니다. ``nn.Module`` 를 Python 의 코드를 임포트하기 위한 코드 파일인 `module `_ (소문자 ``m``) 의 개념과 헷갈리지 말아주세요.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
함수를 사용하는 대신에 이제는 오브젝트(object) 를 사용하기 때문에,먼저 모델을 인스턴스화(instantiate) 해야 합니다:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
###Markdown
이제 우리는 이전과 동일한 방식으로 손실을 계산할 수 있습니다.여기서 ``nn.Module`` 오브젝트들은 마치 함수처럼 사용됩니다 (즉, 이들은 *호출가능* 합니다),그러나 배후에서 Pytorch 는 우리의 ``forward`` 메서드를 자동으로 호출합니다.
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
이전에는 훈련 루프를 위해 이름 별로 각 매개변수(parameter)의 값을 업데이트하고 다음과 같이각 매개 변수에 대한 기울기들을 개별적으로 수동으로 0으로 제거해야 했습니다::: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()이제 우리는 model.parameters() 및 model.zero_grad() (모두``nn.Module`` 에 대해 PyTorch에 의해 정의됨)를 활용하여 이러한 단계를 더 간결하게만들고, 특히 더 복잡한 모델에 대해서 일부 매개변수를 잊어 버리는 오류를 덜 발생시킬 수 있습니다::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()이제 이것을 나중에 다시 실행할 수 있도록 ``fit`` 함수로 작은 훈련 루프를 감쌀 것입니다.
###Code
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
손실이 줄어들었는지 다시 한번 확인합시다:
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
nn.Linear 를 이용하여 리팩토링 하기------------------------------계속해서 코드를 리팩토링 합니다. ``self.weights`` 및 ``self.bias`` 를 수동으로 정의 및초기화하고, ``xb @ self.weights + self.bias`` 를 계산하는 대신에,위의 모든 것을 해줄 Pytorch 클래스인`nn.Linear `_ 를 선형레이어로 사용합니다.Pytorch 에는 다양한 유형의 코드를 크게 단순화 할 수 있는 미리 정의된 레이어가 있고 이는 또한종종 기존 코드보다 속도를 빠르게 합니다.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
###Output
_____no_output_____
###Markdown
이전과 같은 방식으로 모델을 인스턴스화하고 손실을 계산합니다:
###Code
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
우리는 여전히 이전과 동일한 ``fit`` 메서드를 사용할 수 있습니다.
###Code
fit()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
optim 을 이용하여 리팩토링 하기------------------------------Pytorch에는 다양한 최적화(optimization) 알고리즘을 가진 패키지인 ``torch.optim`` 도 있습니다.각 매개변수를 수동으로 업데이트 하는 대신, 옵티마이저(optimizer)의 ``step`` 메서드를 사용하여업데이트를 진행할 수 있습니다.이렇게 하면 이전에 수동으로 코딩한 최적화 단계를 대체할 수 있습니다::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()대신에 이렇게 말이죠::: opt.step() opt.zero_grad()(``optim.zero_grad()`` 는 기울기를 0으로 재설정 해줍니다. 다음 미니 배치에 대한기울기를 계산하기 전에 호출해야 합니다.)
###Code
from torch import optim
###Output
_____no_output_____
###Markdown
나중에 다시 사용할 수 있도록 모델과 옵티마이져를 만드는 작은 함수를 정의합니다.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Dataset 을 이용하여 리팩토링하기------------------------------PyTorch 에는 추상 Dataset 클래스가 있습니다. Dataset 은``__len__`` 함수 (Python의 표준 ``len`` 함수에 의해 호출됨) 및``__getitem__`` 함수를 가진 어떤 것이라도 될 수 있으며, 이 함수들을 인덱싱(indexing)하기위한 방법으로 사용합니다.`이 튜토리얼 `_은 ``Dataset`` 의 하위 클래스로써, 사용자 지정 ``FacialLandmarkDataset`` 클래스를 만드는좋은 예를 제시합니다.PyTorch 의 `TensorDataset `_은 텐서를 감싸는(wrapping) Dataset 입니다.길이와 인덱싱 방식을 정의함으로써 텐서의 첫 번째 차원을 따라 반복, 인덱싱 및 슬라이스(slice)하는 방법도 제공합니다.이렇게하면 훈련 할 때 동일한 라인에서 독립(independent) 변수와 종속(dependent) 변수에 쉽게 액세스 할 수 있습니다.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
``x_train`` 및 ``y_train`` 모두 하나의 ``TensorDataset`` 에 합쳐질 수 있습니다,따라서 반복시키고 슬라이스 하기 편리합니다.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
###Markdown
이전에는 x 및 y 값의 미니 배치를 별도로 반복해야했습니다::: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]이제 이 두 단계를 함께 수행 할 수 있습니다::: xb,yb = train_ds[i*bs : i*bs+bs]
###Code
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
DataLoader 를 이용하여 리팩토링하기-------------------------------Pytorch 의 ``DataLoader`` 는 배치 관리를 담당합니다.여러분들은 모든 ``Dataset`` 으로부터 ``DataLoader`` 를 생성할 수 있습니다.``DataLoader`` 는 배치들에 대해서 반복하기 쉽게 만들어줍니다.``train_ds[i*bs : i*bs+bs]`` 를 사용하는 대신,DataLoader 는 매 미니배치를 자동적으로 제공합니다.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
###Markdown
이전에는 루프가 다음과 같이 배치 (xb, yb)를 반복했습니다::: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb)이제 (xb, yb)가 DataLoader 에서 자동으로 로드되므로 루프가 훨씬 깨끗해졌습니다::: for xb,yb in train_dl: pred = model(xb)
###Code
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Pytorch의 nn.Module, nn.Parameter, Dataset 및 DataLoader 덕분에 이제 훈련 루프가훨씬 더 작아지고 이해하기 쉬워졌습니다.이제 실제로 효과적인 모델을 만드는 데 필요한 기본 기능을 추가해 보겠습니다.검증(validation) 추가하기-----------------------섹션 1에서, 우리는 훈련 데이터에 사용하기 위해 합리적인 훈련 루프를 설정하려고했습니다.실전에서, 여러분들은 과적합(overfitting)을 확인하기 위해서 **항상**`검증 데이터셋(validation set) `_ 이있어야 합니다.훈련 데이터를 섞는(shuffling) 것은 배치와 과적합 사이의 상관관계를 방지하기 위해`중요합니다. `_반면에, 검증 손실(validation loss)은 검증 데이터셋을 섞든 안섞든 동일합니다.데이터를 섞는 것은 추가 시간이 걸리므로, 검증 데이터를 섞는 것은 의미가 없습니다.검증 데이터셋에 대한 배치 사이즈는 학습 데이터셋 배치 크기의 2배를 사용할 것입니다.이는 검증 데이터셋에 대해서는 역전파(backpropagation)가 필요하지 않으므로 메모리를덜 사용하기 때문입니다 (기울기를 저장할 필요가 없음).더 큰 배치 크기를 사용하여 손실을 더 빨리 계산하기 위해 이렇게 합니다.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
###Output
_____no_output_____
###Markdown
각 에포크가 끝날 때 검증 손실을 계산하고 프린트 할 것입니다.(훈련 전에 항상 ``model.train()`` 을 호출하고, 추론(inference) 전에 ``model.eval()``을 호출합니다, 이는 ``nn.BatchNorm2d`` 및 ``nn.Dropout`` 과 같은 레이어에서이러한 다른 단계(훈련, 추론) 에 대한 적절한 동작이 일어나게 하기 위함입니다.)
###Code
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
###Output
_____no_output_____
###Markdown
fit() 와 get_data() 생성하기----------------------------------이제 우리는 우리만의 작은 리팩토링을 수행할 것입니다.훈련 데이터셋과 검증 데이터셋 모두에 대한 손실을 계산하는 유사한 프로세스를 두 번 거치므로,이를 하나의 배치에 대한 손실을 계산하는 자체 함수 ``loss_batch`` 로 만들어보겠습니다.훈련 데이터셋에 대한 옵티마이저를 전달하고 이를 사용하여 역전파를 수행합니다.검증 데이터셋의 경우 옵티마이저를 전덜하지 않으므로 메서드가 역전파를 수행하지 않습니다.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
###Markdown
``fit`` 은 모델을 훈련하고 각 에포크에 대한 훈련 및 검증 손실을 계산하는 작업을 수행합니다.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
``get_data`` 는 학습 및 검증 데이터셋에 대한 dataloader 를 출력합니다.
###Code
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
###Output
_____no_output_____
###Markdown
이제 dataloader를 가져오고 모델을 훈련하는 전체 프로세스를 3 줄의 코드로 실행할 수 있습니다:
###Code
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
이러한 기본 3줄의 코드를 사용하여 다양한 모델을 훈련할 수 있습니다.컨볼루션 신경망(CNN)을 훈련하는 데 사용할 수 있는지 살펴 보겠습니다!CNN 으로 넘어가기---------------이제 3개의 컨볼루션 레이어로 신경망을 구축할 것입니다.이전 섹션의 어떤 함수도 모델의 형식에 대해 가정하지 않기 때문에,별도의 수정없이 CNN을 학습하는 데 사용할 수 있습니다.Pytorch 의 사전정의된`Conv2d `_ 클래스를컨볼루션 레이어로 사용합니다. 3개의 컨볼루션 레이어로 CNN을 정의합니다.각 컨볼루션 뒤에는 ReLU가 있습니다. 마지막으로 평균 풀링(average pooling)을 수행합니다.(``view`` 는 PyTorch의 numpy ``reshape`` 버전입니다.)
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
###Output
_____no_output_____
###Markdown
`모멘텀(Momentum) `_ 은이전 업데이트도 고려하고 일반적으로 더 빠른 훈련으로 이어지는 확률적 경사하강법(stochastic gradient descent)의 변형입니다.
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
nn.Sequential------------------------``torch.nn`` 에는 코드를 간단히 사용할 수 있는 또 다른 편리한 클래스인`Sequential `_이 있습니다..``Sequential`` 객체는 그 안에 포함된 각 모듈을 순차적으로 실행합니다.이것은 우리의 신경망을 작성하는 더 간단한 방법입니다.이를 활용하려면 주어진 함수에서 **사용자정의 레이어(custom layer)** 를 쉽게정의할 수 있어야 합니다.예를 들어, PyTorch에는 `view` 레이어가 없으므로 우리의 신경망 용으로 만들어야 합니다.``Lambda`` 는 ``Sequential`` 로 신경망을 정의할 때 사용할 수 있는 레이어를 생성할 것입니다.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
``Sequential`` 로 생성된 모들은 간단하게 아래와 같습니다:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
DataLoader 감싸기-----------------------------우리의 CNN은 상당히 간결하지만, MNIST에서만 작동합니다, 왜냐하면: - 입력이 28\*28의 긴 벡터라고 가정합니다. - 최종적으로 CNN 그리드 크기는 4\*4 라고 가정합니다. (이것은 우리가 사용한 평균 풀링 커널 크기 때문입니다.)이 두 가지 가정을 제거하여 모델이 모든 2d 단일 채널(channel) 이미지에서 작동하도록 하겠습니다.먼저 초기 Lambda 레이어를 제거하고 데이터 전처리를 제네레이터(generator)로 이동시킬 수 있습니다:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
다음으로 ``nn.AvgPool2d`` 를 ``nn.AdaptiveAvgPool2d`` 로 대체하여 우리가 가진*입력* 텐서가 아니라 원하는 *출력* 텐서의 크기를 정의할 수 있습니다.결과적으로 우리 모델은 모든 크기의 입력과 함께 작동합니다.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
한번 실행해 봅시다:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
GPU 사용하기---------------만약 여러분들이 운이 좋아서 CUDA 지원 GPU (대부분의 클라우드 제공 업체에서시간당 약 $0.50 에 이용할 수 있습니다) 를 사용할 수 있다면, 코드 실행 속도를 높일 수 있습니다.먼저 GPU가 Pytorch에서 작동하는지 확인합니다:
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
그리고 이에 대한 디바이스 오브젝트를 생성합니다:
###Code
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
###Output
_____no_output_____
###Markdown
GPU로 배치를 옮기도록 ``preprocess`` 를 업데이트 합시다:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
마지막으로 모델을 GPU로 이동시킬 수 있습니다.
###Code
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
이제 더 빨리 실행됩니다:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
`torch.nn` 이 *실제로* 무엇인가요?===============================저자: Jeremy Howard, `fast.ai `_.도움: Rachel Thomas, Francisco Ingham.번역: `남상호 `_ 이 튜토리얼을 스크립트가 아닌 노트북으로 실행하기를 권장합니다. 노트북 (.ipynb) 파일을 다운 받으시려면,페이지 상단에 있는 링크를 클릭해주세요.PyTorch 는 여러분이 신경망(neural network)를 생성하고 학습시키는 것을 도와주기 위해서`torch.nn `_ ,`torch.optim `_ ,`Dataset `_ ,그리고 `DataLoader `_와 같은 잘 디자인된 모듈과 클래스들을 제공합니다.이들의 성능을 최대한 활용하고 여러분의 문제에 맞게 커스터마이즈하기 위해서,정확히 이들이 어떤 작업을 수행하는지 이해할 필요가 있습니다.이해를 증진하기 위해서, 우리는 먼저 이들 모델들로 부터 아무 피쳐도 사용하지 않고MNIST 데이터셋에 대해 기초적인 신경망을 학습시킬 것입니다;우리는 처음에는 가장 기초적인 PyTorch 텐서(tensor) 기능만을 사용할 것입니다.그리고나서 우리는 점차적으로 ``torch.nn``, ``torch.optim``, ``Dataset``, 또는``DataLoader`` 로부터 한번에 하나씩 피쳐를 추가하면서, 정확히 각 부분이 어떤 일을 하는지 그리고이것이 어떻게 코드를 더 정확하고 유연하게 만드는지 보여줄 것입니다.**이 튜토리얼은 여러분이 이미 PyTorch를 설치하였고, 그리고 텐서 연산의 기초에 대해 익숙하다고 가정합니다.**(만약 여러분이 Numpy 배열(array) 연산에 익숙하다면, 여기에서 사용되는 PyTorch 텐서 연산도거의 동일하다는 것을 알게 될 것입니다).MNIST 데이터 준비----------------우리는 손으로 쓴 숫자(0에서 9 사이)의 흑백 이미지로 구성된 클래식`MNIST `_ 데이터셋을 사용할 것 입니다.우리는 경로 설정을 담당하는 (Python3 표준 라이브러리의 일부인)`pathlib `_ 을 사용할 것이고,`requests `_ 를 이용하여데이터셋을 다운로드 할 것입니다. 우리는 모듈을 사용할 때만 임포트(import) 할 것이므로,여러분은 매 포인트마다 정확히 어떤 것이 사용되는지 확인할 수 있습니다.
###Code
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
###Output
_____no_output_____
###Markdown
이 데이터셋은 numpy 배열 포맷이고, 데이터를 직렬화하기 위한python 전용 포맷 pickle 을 이용하여 저장되어 있습니다.
###Code
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
###Output
_____no_output_____
###Markdown
각 이미지는 28 x 28 형태 이고, 784 (=28x28) 크기를 가진 하나의 행으로 저장되어 있습니다.하나를 살펴 봅시다; 먼저 우리는 이 이미지를 2d로 재구성해야 합니다.
###Code
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
###Output
_____no_output_____
###Markdown
PyTorch는 numpy 배열 보다는 ``torch.tensor`` 를 사용하므로, 우리는 데이터를 변환해야 합니다.
###Code
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
###Output
_____no_output_____
###Markdown
torch.nn 없이 밑바닥부터 신경망 만들기---------------------------------------------PyTorch 텐서 연산만으로 첫 모델을 만들어봅시다.여러분이 신경망의 기초에 대해서 이미 익숙하다고 가정합니다.(만약 익숙하지 않다면 `course.fast.ai `_ 에서 학습할 수 있습니다).PyTorch는 랜덤 또는 0으로만 이루어진 텐서를 생성하는 메서드를 제공하고,우리는 간단한 선형 모델의 가중치(weights)와 절편(bias)을 생성하기 위해서 이것을 사용할 것입니다.이들은 일반적인 텐서에 매우 특별한 한 가지가 추가된 것입니다: 우리는 PyTorch에게 이들이기울기(gradient)가 필요하다고 알려줍니다.이를 통해 PyTorch는 텐서에 행해지는 모든 연산을 기록하게 하고,따라서 *자동적으로* 역전파(back-propagation) 동안에 기울기를 계산할 수 있습니다!가중치에 대해서는 ``requires_grad`` 를 초기화(initialization) **다음에** 설정합니다,왜냐하면 우리는 해당 단계가 기울기에 포함되는 것을 원치 않기 때문입니다.(PyTorch에서 ``_`` 다음에 오는 메서드 이름은 연산이 인플레이스(in-place)로 수행되는 것을 의미합니다.)Note`Xavier initialisation `_ 기법을 이용하여 가중치를 초기화 합니다. (1/sqrt(n)을 곱해주는 것을 통해서 초기화).
###Code
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
PyTorch의 기울기를 자동으로 계산해주는 기능 덕분에, Python 표준 함수(또는 호출 가능한 객체)를 모델로 사용할 수 있습니다!그러므로 간단한 선형 모델을 만들기 위해서 단순한 행렬 곱셈과 브로드캐스트(broadcast)덧셈을 사용하여 보겠습니다. 또한, 우리는 활성화 함수(activation function)가 필요하므로,`log_softmax` 를 구현하고 사용할 것입니다.PyTorch에서 많은 사전 구현된 손실 함수(loss function), 활성화 함수들이 제공되지만,일반적인 python을 사용하여 자신만의 함수를 쉽게 작성할 수 있음을 기억해주세요.PyTorch는 심지어 여러분의 함수를 위해서 빠른 GPU 또는 벡터화된 CPU 코드를 만들어줄 것입니다.
###Code
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
위에서, ``@`` 기호는 점곱(dot product) 연산을 나타냅니다.우리는 하나의 배치(batch) 데이터(이 경우에는 64개의 이미지들)에 대하여 함수를 호출할 것입니다.이것은 하나의 *포워드 전달(forward pass)* 입니다. 이 단계에서 우리는 무작위(random) 가중치로시작했기 때문에 우리의 예측이 무작위 예측보다 전혀 나은 점이 없을 것입니다.
###Code
bs = 64 # 배치 사이즈
xb = x_train[0:bs] # x로부터 미니배치(mini-batch) 추출
preds = model(xb) # 예측
preds[0], preds.shape
print(preds[0], preds.shape)
###Output
_____no_output_____
###Markdown
여러분이 보시듯이, ``preds`` 텐서(tensor)는 텐서 값 외에도, 또한기울기 함수(gradient function)를 담고 있습니다.우리는 나중에 이것을 역전파(backpropagation)를 위해 사용할 것입니다.이제 손실함수(loss function)로 사용하기 위한 음의 로그 우도(negative log-likelihood)를구현합시다. (다시 말하지만, 우리는 표준 Python을 사용할 수 있습니다.):
###Code
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
###Markdown
우리의 무작위 모델에 대한 손실을 점검해봅시다, 그럼으로써 우리는 나중에 역전파 이후에 개선이 있는지확인할 수 있습니다.
###Code
yb = y_train[0:bs]
print(loss_func(preds, yb))
###Output
_____no_output_____
###Markdown
또한, 우리 모델의 정확도(accuracy)를 계산하기 위한 함수를 구현합시다.매 예측마다, 만약 가장 큰 값의 인덱스가 목표값(target value)과 동일하다면,그 예측은 올바른 것입니다.
###Code
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
###Output
_____no_output_____
###Markdown
우리의 무작위 모델의 정확도를 점검해 봅시다, 그럼으로써 손실이 개선됨에 따라서 정확도가 개선되는지확인할 수 있습니다.
###Code
print(accuracy(preds, yb))
###Output
_____no_output_____
###Markdown
이제 우리는 훈련 루프(training loop)를 실행할 수 있습니다. 매 반복마다, 우리는 다음을 수행할 것입니다:- 데이터의 미니배치를 선택 (``bs`` 사이즈)- 모델을 이용하여 예측 수행- 손실 계산- ``loss.backward()`` 를 이용하여 모델의 기울기 업데이트, 이 경우에는, ``weights`` 와 ``bias``.이제 우리는 이 기울기들을 이용하여 가중치와 절편을 업데이트 합니다.우리는 이것을 ``torch.no_grad()`` 컨텍스트 매니져(context manager) 내에서 실행합니다,왜냐하면 이러한 실행이 다음 기울기의 계산에 기록되지 않기를 원하기 때문입니다.PyTorch의 자동 기울기(Autograd)가 어떻게 연산을 기록하는지`여기 `_ 에서 더 알아볼 수 있습니다.우리는 그러고나서 기울기를 0으로 설정합니다, 그럼으로써 다음 루프(loop)에 준비하게 됩니다.그렇지 않으면, 우리의 기울기들은 일어난 모든 연산의 누적 집계를 기록하게 되버립니다.(즉, ``loss.backward()`` 가 이미 저장된 것을 대체하기보단, 기존 값에 기울기를 *더하게* 됩니다)... tip:: 여러분들은 PyTorch 코드에 대하여 표준 python 디버거(debugger)를 사용할 수 있으므로, 매 단계마다 다양한 변수 값을 점검할 수 있습니다. 아래에서 ``set_trace()`` 를 주석 해제하여 사용해보세요.
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # 학습률(learning rate)
epochs = 2 # 훈련에 사용할 에포크(epoch) 수
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
###Markdown
이제 다 됐습니다: 우리는 제일 간단한 신경망(neural network)의 모든 것을 밑바닥부터 생성하고훈련하였습니다! (이번에는 은닉층(hidden layer)이 없기 때문에,로지스틱 회귀(logistic regression)입니다).이제 손실과 정확도를 이전 값들과 비교하면서 확인해봅시다.우리는 손실은 감소하고, 정확도는 증가하기를 기대할 것이고, 그들은 아래와 같습니다.
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
torch.nn.functional 사용하기------------------------------이제 우리는 코드를 리팩토링(refactoring) 하겠습니다, 그럼으로써 이전과 동일하지만,PyTorch의 ``nn`` 클래스의 장점을 활용하여 더 간결하고 유연하게 만들 것입니다.지금부터 매 단계에서, 우리는 코드를 더 짧고, 이해하기 쉽고, 유연하게 만들어야 합니다.처음이면서 우리의 코드를 짧게 만들기 가장 쉬운 단계는 직접 작성한 활성화, 손실 함수를``torch.nn.functional`` 의 함수로 대체하는 것입니다(관례에 따라, 일반적으로 ``F`` 네임스페이스(namespace)를 통해 임포트(import) 합니다).이 모듈에는 ``torch.nn`` 라이브러리의 모든 함수가 포함되어 있습니다(라이브러리의 다른 부분에는 클래스가 포함되어 있습니다.)다양한 손실 및 활성화 함수 뿐만 아니라, 풀링(pooling) 함수와 같이 신경망을 만드는데편리한 몇 가지 함수도 여기에서 찾을 수 있습니다.(컨볼루션(convolution) 연산, 선형(linear) 레이어, 등을 수행하는 함수도 있지만,앞으로 보시겠지만 일반적으로 라이브러리의 다른 부분을 사용하여 더 잘 처리 할 수 있습니다.)만약 여러분들이 음의 로그 우도 손실과 로그 소프트맥스 (log softmax) 활성화 함수를 사용하는 경우,Pytorch는 이 둘을 결합하는 단일 함수인 ``F.cross_entropy`` 를 제공합니다.따라서 모델에서 활성화 함수를 제거할 수도 있습니다.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
###Output
_____no_output_____
###Markdown
더이상 ``model`` 함수에서 ``log_softmax`` 를 호출하지 않고 있습니다.손실과 정확도과 이전과 동일한지 확인해봅시다:
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
nn.Module 을 이용하여 리팩토링 하기------------------------------다음으로, 더 명확하고 간결한 훈련 루프를 위해 ``nn.Module`` 및 ``nn.Parameter`` 를 사용합니다.우리는 ``nn.Module`` (자체가 클래스이고 상태를 추척할 수 있는) 하위 클래스(subclass)를 만듭니다.이 경우에는, 포워드(forward) 단계에 대한 가중치, 절편, 그리고 메소드(method) 등을 유지하는클래스를 만들고자 합니다.``nn.Module`` 은 우리가 사용할 몇 가지 속성(attribute)과 메소드를 (``.parameters()`` 와``.zero_grad()`` 같은) 가지고 있습니다.Note``nn.Module`` (대문자 M) 은 PyTorch 의 특정 개념이고, 우리는 이 클래스를 많이 사용할 것입니다. ``nn.Module`` 를 Python 의 코드를 임포트하기 위한 코드 파일인 `module `_ (소문자 ``m``) 의 개념과 헷갈리지 말아주세요.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
Since we're now using an object instead of just using a function, we first have to instantiate our model:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
###Markdown
Now we can calculate the loss in the same way as before. Note that ``nn.Module`` objects are used as if they are functions (i.e. they are *callable*), but behind the scenes Pytorch will call our ``forward`` method automatically.
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Previously for our training loop we had to update the values for each parameter by name, and manually zero out the grads for each parameter separately, like this::: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()Now we can take advantage of model.parameters() and model.zero_grad() (which are both defined by PyTorch for ``nn.Module``) to make those steps more concise and less prone to the error of forgetting some of our parameters, particularly if we had a more complicated model::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()We'll wrap our little training loop in a ``fit`` function so we can run it again later.
###Code
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
Let's double-check that our loss has gone down:
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using nn.Linear-------------------------We continue to refactor our code. Instead of manually defining and initializing ``self.weights`` and ``self.bias``, and calculating ``xb @ self.weights + self.bias``, we will instead use the Pytorch class `nn.Linear `_ for a linear layer, which does all that for us. Pytorch has many types of predefined layers that can greatly simplify our code, and often makes it faster too.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
###Output
_____no_output_____
###Markdown
We instantiate our model and calculate the loss in the same way as before:
###Code
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
We are still able to use our same ``fit`` method as before.
###Code
fit()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using optim------------------------------Pytorch also has a package with various optimization algorithms, ``torch.optim``. We can use the ``step`` method from our optimizer to take an optimization step, instead of manually updating each parameter.This will let us replace our previous manually coded optimization step::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()and instead use just::: opt.step() opt.zero_grad()(``optim.zero_grad()`` resets the gradient to 0 and we need to call it before computing the gradient for the next minibatch.)
###Code
from torch import optim
###Output
_____no_output_____
###Markdown
We'll define a little function to create our model and optimizer so we can reuse it in the future.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using Dataset------------------------------PyTorch has an abstract Dataset class. A Dataset can be anything that has a ``__len__`` function (called by Python's standard ``len`` function) and a ``__getitem__`` function as a way of indexing into it.`This tutorial `_ walks through a nice example of creating a custom ``FacialLandmarkDataset`` class as a subclass of ``Dataset``.PyTorch's `TensorDataset `_ is a Dataset wrapping tensors. By defining a length and way of indexing, this also gives us a way to iterate, index, and slice along the first dimension of a tensor. This will make it easier to access both the independent and dependent variables in the same line as we train.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``, which will be easier to iterate over and slice.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
###Markdown
Previously, we had to iterate through minibatches of x and y values separately::: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]Now, we can do these two steps together::: xb,yb = train_ds[i*bs : i*bs+bs]
###Code
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using DataLoader-------------------------------Pytorch's ``DataLoader`` is responsible for managing batches. You can create a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easier to iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``, the DataLoader gives us each minibatch automatically.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
###Markdown
Previously, our loop iterated over batches (xb, yb) like this::: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb)Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader::: for xb,yb in train_dl: pred = model(xb)
###Code
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice.Add validation-----------------------In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a `validation set `_, in order to identify if you are overfitting.Shuffling the training data is `important `_ to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data.We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
###Output
_____no_output_____
###Markdown
We will calculate and print the validation loss at the end of each epoch.(Note that we always call ``model.train()`` before training, and ``model.eval()`` before inference, because these are used by layers such as ``nn.BatchNorm2d`` and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.)
###Code
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
###Output
_____no_output_____
###Markdown
Create fit() and get_data()----------------------------------We'll now do a little refactoring of our own. Since we go through a similar process twice of calculating the loss for both the training set and the validation set, let's make that into its own function, ``loss_batch``, which computes the loss for one batch.We pass an optimizer in for the training set, and use it to perform backprop. For the validation set, we don't pass an optimizer, so the method doesn't perform backprop.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
###Markdown
``fit`` runs the necessary operations to train our model and compute the training and validation losses for each epoch.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
``get_data`` returns dataloaders for the training and validation sets.
###Code
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
###Output
_____no_output_____
###Markdown
Now, our whole process of obtaining the data loaders and fitting the model can be run in 3 lines of code:
###Code
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
You can use these basic 3 lines of code to train a wide variety of models. Let's see if we can use them to train a convolutional neural network (CNN)!Switch to CNN-------------We are now going to build our neural network with three convolutional layers. Because none of the functions in the previous section assume anything about the model form, we'll be able to use them to train a CNN without any modification.We will use Pytorch's predefined `Conv2d `_ class as our convolutional layer. We define a CNN with 3 convolutional layers. Each convolution is followed by a ReLU. At the end, we perform an average pooling. (Note that ``view`` is PyTorch's version of numpy's ``reshape``.)
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
###Output
_____no_output_____
###Markdown
`Momentum `_ is a variation on stochastic gradient descent that takes previous updates into account as well and generally leads to faster training.
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
nn.Sequential------------------------``torch.nn`` has another handy class we can use to simplify our code: `Sequential `_. A ``Sequential`` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network.To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer, and we need to create one for our network. ``Lambda`` will create a layer that we can then use when defining a network with ``Sequential``.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
The model created with ``Sequential`` is simply:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Wrapping DataLoader-----------------------------Our CNN is fairly concise, but it only works with MNIST, because: - It assumes the input is a 28\*28 long vector - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)Let's get rid of these two assumptions, so our model works with any 2d single-channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, which allows us to define the size of the *output* tensor we want, rather than the *input* tensor we have. As a result, our model will work with any size input.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
Let's try it out:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Using your GPU---------------If you're lucky enough to have access to a CUDA-capable GPU (you can rent one for about $0.50/hour from most cloud providers) you can use it to speed up your code. First check that your GPU is working in Pytorch:
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
And then create a device object for it:
###Code
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
###Output
_____no_output_____
###Markdown
Let's update ``preprocess`` to move batches to the GPU:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Finally, we can move our model to the GPU.
###Code
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
You should find it runs faster now:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
What is `torch.nn` *really*?============================by Jeremy Howard, `fast.ai `_. Thanks to Rachel Thomas and Francisco Ingham. We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file,click the link at the top of the page.PyTorch provides the elegantly designed modules and classes `torch.nn `_ ,`torch.optim `_ ,`Dataset `_ ,and `DataLoader `_to help you create and train neural networks.In order to fully utilize their power and customizethem for your problem, you need to really understand exactly what they'redoing. To develop this understanding, we will first train basic neural neton the MNIST data set without using any features from these models; we willinitially only use the most basic PyTorch tensor functionality. Then, we willincrementally add one feature from ``torch.nn``, ``torch.optim``, ``Dataset``, or``DataLoader`` at a time, showing exactly what each piece does, and how itworks to make the code either more concise, or more flexible.**This tutorial assumes you already have PyTorch installed, and are familiarwith the basics of tensor operations.** (If you're familiar with Numpy arrayoperations, you'll find the PyTorch tensor operations used here nearly identical).MNIST data setup----------------We will use the classic `MNIST `_ dataset,which consists of black-and-white images of hand-drawn digits (between 0 and 9).We will use `pathlib `_for dealing with paths (part of the Python 3 standard library), and willdownload the dataset using`requests `_. We will onlyimport modules when we use them, so you can see exactly what's beingused at each point.
###Code
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
###Output
_____no_output_____
###Markdown
This dataset is in numpy array format, and has been stored using pickle,a python-specific format for serializing data.
###Code
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
###Output
_____no_output_____
###Markdown
Each image is 28 x 28, and is being stored as a flattened row of length784 (=28x28). Let's take a look at one; we need to reshape it to 2dfirst.
###Code
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
###Output
_____no_output_____
###Markdown
PyTorch uses ``torch.tensor``, rather than numpy arrays, so we need toconvert our data.
###Code
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
###Output
_____no_output_____
###Markdown
Neural net from scratch (no torch.nn)---------------------------------------------Let's first create a model using nothing but PyTorch tensor operations. We're assuming you're already familiar with the basics of neural networks. (If you're not, you can learn them at `course.fast.ai `_). PyTorch provides methods to create random or zero-filled tensors, which we will use to create our weights and bias for a simple linear model. These are just regular tensors, with one very special addition: we tell PyTorch that they require a gradient. This causes PyTorch to record all of the operations done on the tensor, so that it can calculate the gradient during back-propagation *automatically*! For the weights, we set ``requires_grad`` **after** the initialization, since we don't want that step included in the gradient. (Note that a trailing ``_`` in PyTorch signifies that the operation is performed in-place.) Note: We are initializing the weights here with `Xavier initialisation `_ (by multiplying with 1/sqrt(n)).
###Code
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
Thanks to PyTorch's ability to calculate gradients automatically, we canuse any standard Python function (or callable object) as a model! Solet's just write a plain matrix multiplication and broadcasted additionto create a simple linear model. We also need an activation function, sowe'll write `log_softmax` and use it. Remember: although PyTorchprovides lots of pre-written loss functions, activation functions, andso forth, you can easily write your own using plain python. PyTorch willeven create fast GPU or vectorized CPU code for your functionautomatically.
###Code
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
In the above, the ``@`` stands for the dot product operation. We will callour function on one batch of data (in this case, 64 images). This isone *forward pass*. Note that our predictions won't be any better thanrandom at this stage, since we start with random weights.
###Code
bs = 64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
print(preds[0], preds.shape)
###Output
_____no_output_____
###Markdown
As you see, the ``preds`` tensor contains not only the tensor values, but also agradient function. We'll use this later to do backprop.Let's implement negative log-likelihood to use as the loss function(again, we can just use standard Python):
###Code
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
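###Markdown
As a quick sanity check (added here, with illustrative names ``log_probs`` and ``targets``), the indexing inside ``nll`` simply picks out the log-probability assigned to each target class and averages the negatives.
###Code
# Illustrative only: verify nll on a tiny hand-made example.
log_probs = torch.log(torch.tensor([[0.7, 0.2, 0.1],
                                    [0.1, 0.8, 0.1]]))
targets = torch.tensor([0, 1])
# nll picks log_probs[0, 0] and log_probs[1, 1], negates them and averages:
# -(log 0.7 + log 0.8) / 2, which is about 0.29
print(nll(log_probs, targets))
###Output
_____no_output_____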
###Markdown
Let's check our loss with our random model, so we can see if we improveafter a backprop pass later.
###Code
yb = y_train[0:bs]
print(loss_func(preds, yb))
###Output
_____no_output_____
###Markdown
Let's also implement a function to calculate the accuracy of our model.For each prediction, if the index with the largest value matches thetarget value, then the prediction was correct.
###Code
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
###Output
_____no_output_____
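###Markdown
As a small aside, here is ``accuracy`` applied to a tiny made-up batch; ``out_demo`` and ``yb_demo`` are illustrative names, unrelated to the MNIST tensors above.
###Code
# Illustrative only: accuracy on a made-up batch of 2-class scores.
out_demo = torch.tensor([[0.1, 0.9],
                         [0.8, 0.2],
                         [0.3, 0.7]])
yb_demo = torch.tensor([1, 0, 0])
# argmax picks classes [1, 0, 1]; two of three match the labels -> tensor(0.6667)
print(accuracy(out_demo, yb_demo))
###Output
_____no_output_____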
###Markdown
Let's check the accuracy of our random model, so we can see if ouraccuracy improves as our loss improves.
###Code
print(accuracy(preds, yb))
###Output
_____no_output_____
###Markdown
We can now run a training loop. For each iteration, we will:- select a mini-batch of data (of size ``bs``)- use the model to make predictions- calculate the loss- ``loss.backward()`` updates the gradients of the model, in this case, ``weights`` and ``bias``.We now use these gradients to update the weights and bias. We do thiswithin the ``torch.no_grad()`` context manager, because we do not want theseactions to be recorded for our next calculation of the gradient. You can readmore about how PyTorch's Autograd records operations`here `_.We then set thegradients to zero, so that we are ready for the next loop.Otherwise, our gradients would record a running tally of all the operationsthat had happened (i.e. ``loss.backward()`` *adds* the gradients to whatever isalready stored, rather than replacing them)... tip:: You can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment ``set_trace()`` below to try it out.
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
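###Markdown
The point about gradients accumulating is easy to see in isolation; the throwaway tensor ``w_demo`` below is illustrative and unrelated to the model above.
###Code
# Illustrative only: backward() *adds* to .grad, so gradients must be zeroed
# between iterations.
w_demo = torch.ones(3, requires_grad=True)
w_demo.sum().backward()
print(w_demo.grad)   # tensor([1., 1., 1.])
w_demo.sum().backward()
print(w_demo.grad)   # tensor([2., 2., 2.]) -- accumulated, not replaced
w_demo.grad.zero_()
print(w_demo.grad)   # tensor([0., 0., 0.])
###Output
_____no_output_____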
###Markdown
That's it: we've created and trained a minimal neural network (in this case, alogistic regression, since we have no hidden layers) entirely from scratch!Let's check the loss and accuracy and compare those to what we gotearlier. We expect that the loss will have decreased and accuracy tohave increased, and they have.
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
Using torch.nn.functional------------------------------We will now refactor our code, so that it does the same thing as before, onlywe'll start taking advantage of PyTorch's ``nn`` classes to make it more conciseand flexible. At each step from here, we should be making our code one or moreof: shorter, more understandable, and/or more flexible.The first and easiest step is to make our code shorter by replacing ourhand-written activation and loss functions with those from ``torch.nn.functional``(which is generally imported into the namespace ``F`` by convention). This modulecontains all the functions in the ``torch.nn`` library (whereas other parts of thelibrary contain classes). As well as a wide range of loss and activationfunctions, you'll also find here some convenient functions for creating neuralnets, such as pooling functions. (There are also functions for doing convolutions,linear layers, etc, but as we'll see, these are usually better handled usingother parts of the library.)If you're using negative log likelihood loss and log softmax activation,then Pytorch provides a single function ``F.cross_entropy`` that combinesthe two. So we can even remove the activation function from our model.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
###Output
_____no_output_____
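###Markdown
To see that ``F.cross_entropy`` really is the fusion of the two functions we wrote by hand, we can compare it against ``nll(log_softmax(...))`` on the current batch; this is a sanity check added here, using the helpers defined earlier in the notebook.
###Code
# Illustrative only: F.cross_entropy(logits, targets) == nll(log_softmax(logits), targets)
logits = model(xb)                      # raw scores, no activation
manual = nll(log_softmax(logits), yb)   # hand-written helpers from earlier cells
fused = F.cross_entropy(logits, yb)
print(manual, fused)                    # equal up to floating-point error
###Output
_____no_output_____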
###Markdown
Note that we no longer call ``log_softmax`` in the ``model`` function. Let'sconfirm that our loss and accuracy are the same as before:
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using nn.Module-----------------------------Next up, we'll use ``nn.Module`` and ``nn.Parameter``, for a clearer and moreconcise training loop. We subclass ``nn.Module`` (which itself is a class andable to keep track of state). In this case, we want to create a class thatholds our weights, bias, and method for the forward step. ``nn.Module`` has anumber of attributes and methods (such as ``.parameters()`` and ``.zero_grad()``)which we will be using.Note``nn.Module`` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. ``nn.Module`` is not to be confused with the Python concept of a (lowercase ``m``) `module `_, which is a file of Python code that can be imported.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
Since we're now using an object instead of just using a function, wefirst have to instantiate our model:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
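###Markdown
It can be instructive to peek at what ``nn.Module`` is now tracking for us: ``.named_parameters()`` lists every ``nn.Parameter`` assigned in ``__init__`` (a small inspection step added here).
###Code
# Inspect the parameters registered on the module: 'weights' and 'bias'.
for name, p in model.named_parameters():
    print(name, p.shape, p.requires_grad)
###Output
_____no_output_____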
###Markdown
Now we can calculate the loss in the same way as before. Note that ``nn.Module`` objects are used as if they are functions (i.e. they are *callable*), but behind the scenes Pytorch will call our ``forward`` method automatically.
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Previously for our training loop we had to update the values for each parameterby name, and manually zero out the grads for each parameter separately, like this::: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()Now we can take advantage of model.parameters() and model.zero_grad() (whichare both defined by PyTorch for ``nn.Module``) to make those steps more conciseand less prone to the error of forgetting some of our parameters, particularlyif we had a more complicated model::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()We'll wrap our little training loop in a ``fit`` function so we can run itagain later.
###Code
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
Let's double-check that our loss has gone down:
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using nn.Linear-------------------------We continue to refactor our code. Instead of manually defining andinitializing ``self.weights`` and ``self.bias``, and calculating ``xb @self.weights + self.bias``, we will instead use the Pytorch class`nn.Linear `_ for alinear layer, which does all that for us. Pytorch has many types ofpredefined layers that can greatly simplify our code, and often makes itfaster too.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
###Output
_____no_output_____
###Markdown
We instantiate our model and calculate the loss in the same way as before:
###Code
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
We are still able to use our same ``fit`` method as before.
###Code
fit()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using optim------------------------------Pytorch also has a package with various optimization algorithms, ``torch.optim``.We can use the ``step`` method from our optimizer to take a forward step, insteadof manually updating each parameter.This will let us replace our previous manually coded optimization step::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()and instead use just::: opt.step() opt.zero_grad()(``optim.zero_grad()`` resets the gradient to 0 and we need to call it beforecomputing the gradient for the next minibatch.)
###Code
from torch import optim
###Output
_____no_output_____
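###Markdown
Before applying the optimizer to our model, here is the same ``step``/``zero_grad`` API on a single throwaway parameter, just to make the mechanics concrete; ``p_demo`` and ``toy_opt`` are illustrative names.
###Code
# Illustrative only: optim.SGD acting on a single tensor.
p_demo = torch.tensor([1.0, 2.0], requires_grad=True)
toy_opt = optim.SGD([p_demo], lr=0.1)
(p_demo ** 2).sum().backward()   # d/dp of sum(p^2) is 2p = [2., 4.]
toy_opt.step()                   # p <- p - 0.1 * grad
toy_opt.zero_grad()              # reset gradients before the next backward pass
print(p_demo)                    # tensor([0.8000, 1.6000], requires_grad=True)
###Output
_____no_output_____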
###Markdown
We'll define a little function to create our model and optimizer so wecan reuse it in the future.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using Dataset------------------------------PyTorch has an abstract Dataset class. A Dataset can be anything that hasa ``__len__`` function (called by Python's standard ``len`` function) anda ``__getitem__`` function as a way of indexing into it.`This tutorial `_walks through a nice example of creating a custom ``FacialLandmarkDataset`` classas a subclass of ``Dataset``.PyTorch's `TensorDataset `_is a Dataset wrapping tensors. By defining a length and way of indexing,this also gives us a way to iterate, index, and slice along the firstdimension of a tensor. This will make it easier to access both theindependent and dependent variables in the same line as we train.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``,which will be easier to iterate over and slice.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
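###Markdown
A quick look (added here) at what the wrapped dataset gives us: ``len`` and slicing both work, and indexing returns the matching ``(x, y)`` pair together; ``xb_demo``/``yb_demo`` are illustrative names.
###Code
# Illustrative only: TensorDataset supports len() and slicing along dim 0.
print(len(train_ds))              # 50000 examples in this MNIST training split
xb_demo, yb_demo = train_ds[0:3]
print(xb_demo.shape, yb_demo)     # torch.Size([3, 784]) and the first three labels
###Output
_____no_output_____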
###Markdown
Previously, we had to iterate through minibatches of x and y values separately::: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]Now, we can do these two steps together::: xb,yb = train_ds[i*bs : i*bs+bs]
###Code
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using DataLoader------------------------------Pytorch's ``DataLoader`` is responsible for managing batches. You cancreate a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easierto iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``,the DataLoader gives us each minibatch automatically.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
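###Markdown
Pulling one batch out of the loader (an added peek) shows exactly what each iteration of the training loop will receive.
###Code
# Illustrative only: inspect a single minibatch from the DataLoader.
xb_demo, yb_demo = next(iter(train_dl))
print(xb_demo.shape, yb_demo.shape)   # torch.Size([64, 784]) torch.Size([64])
###Output
_____no_output_____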
###Markdown
Previously, our loop iterated over batches (xb, yb) like this::: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb)Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader::: for xb,yb in train_dl: pred = model(xb)
###Code
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice.Add validation-----------------------In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a `validation set `_, in order to identify if you are overfitting.Shuffling the training data is `important `_ to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data.We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
###Output
_____no_output_____
###Markdown
We will calculate and print the validation loss at the end of each epoch.(Note that we always call ``model.train()`` before training, and ``model.eval()``before inference, because these are used by layers such as ``nn.BatchNorm2d``and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.)
###Code
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
###Output
_____no_output_____
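###Markdown
The ``model.train()`` / ``model.eval()`` point is easiest to see with a layer whose behaviour actually changes; the ``nn.Dropout`` below is a standalone illustration, not part of our model.
###Code
# Illustrative only: Dropout is stochastic in train mode, the identity in eval mode.
drop = nn.Dropout(p=0.5)
x_demo = torch.ones(1, 8)
drop.train()
print(drop(x_demo))   # roughly half the entries zeroed, the rest scaled by 1/(1-p) = 2
drop.eval()
print(drop(x_demo))   # all ones: dropout is disabled at inference time
###Output
_____no_output_____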
###Markdown
Create fit() and get_data()----------------------------------We'll now do a little refactoring of our own. Since we go through a similarprocess twice of calculating the loss for both the training set and thevalidation set, let's make that into its own function, ``loss_batch``, whichcomputes the loss for one batch.We pass an optimizer in for the training set, and use it to performbackprop. For the validation set, we don't pass an optimizer, so themethod doesn't perform backprop.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
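###Markdown
A quick usage check (added here): called without an optimizer, ``loss_batch`` only evaluates; passing ``opt`` would additionally run ``backward()`` and an update step. The ``xb_demo``/``yb_demo`` names are illustrative.
###Code
# Illustrative only: evaluate loss_batch on one validation batch.
xb_demo, yb_demo = next(iter(valid_dl))
print(loss_batch(model, loss_func, xb_demo, yb_demo))   # (loss_value, batch_size)
###Output
_____no_output_____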
###Markdown
``fit`` runs the necessary operations to train our model and compute thetraining and validation losses for each epoch.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
``get_data`` returns dataloaders for the training and validation sets.
###Code
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
###Output
_____no_output_____
###Markdown
Now, our whole process of obtaining the data loaders and fitting themodel can be run in 3 lines of code:
###Code
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
You can use these basic 3 lines of code to train a wide variety of models.Let's see if we can use them to train a convolutional neural network (CNN)!Switch to CNN-------------We are now going to build our neural network with three convolutional layers.Because none of the functions in the previous section assume anything aboutthe model form, we'll be able to use them to train a CNN without any modification.We will use Pytorch's predefined`Conv2d `_ classas our convolutional layer. We define a CNN with 3 convolutional layers.Each convolution is followed by a ReLU. At the end, we perform anaverage pooling. (Note that ``view`` is PyTorch's version of numpy's``reshape``)
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
###Output
_____no_output_____
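###Markdown
A shape check on a fresh instance (``cnn_check`` is an illustrative throwaway) confirms the layer arithmetic: three stride-2 convolutions take 28x28 down to 4x4, and the 4x4 average pool leaves one score per class.
###Code
# Illustrative only: 28x28 -> 14x14 -> 7x7 -> 4x4 -> avg pool -> 10 scores per image.
cnn_check = Mnist_CNN()
print(cnn_check(x_train[0:bs]).shape)   # torch.Size([64, 10])
###Output
_____no_output_____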
###Markdown
`Momentum `_ is a variation onstochastic gradient descent that takes previous updates into account as welland generally leads to faster training.
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
nn.Sequential------------------------``torch.nn`` has another handy class we can use to simplify our code: `Sequential `_. A ``Sequential`` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network.To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer, and we need to create one for our network. ``Lambda`` will create a layer that we can then use when defining a network with ``Sequential``.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
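###Markdown
As a tiny illustration (added here), ``Lambda`` turns any function into a module that can be dropped into ``nn.Sequential``; ``double`` is an illustrative name.
###Code
# Illustrative only: wrap an arbitrary function as a layer.
double = Lambda(lambda x: x * 2)
print(double(torch.tensor([1.0, 2.0, 3.0])))   # tensor([2., 4., 6.])
###Output
_____no_output_____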
###Markdown
The model created with ``Sequential`` is simply:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Wrapping DataLoader-----------------------------Our CNN is fairly concise, but it only works with MNIST, because: - It assumes the input is a 28\*28 long vector - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used)Let's get rid of these two assumptions, so our model works with any 2d single-channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
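###Markdown
Peeking at one batch (an added check, illustrative names) confirms that the wrapper now hands the model images already reshaped to ``(batch, 1, 28, 28)``.
###Code
# Illustrative only: batches now arrive preprocessed by the wrapper.
xb_demo, yb_demo = next(iter(train_dl))
print(xb_demo.shape, yb_demo.shape)   # torch.Size([64, 1, 28, 28]) torch.Size([64])
###Output
_____no_output_____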
###Markdown
Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, whichallows us to define the size of the *output* tensor we want, rather thanthe *input* tensor we have. As a result, our model will work with anysize input.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
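###Markdown
A standalone check (added here) of why this makes the model size-agnostic: ``nn.AdaptiveAvgPool2d(1)`` always produces a 1x1 spatial output, whatever the input resolution; ``pool`` is an illustrative name.
###Code
# Illustrative only: adaptive pooling fixes the *output* size, not the input size.
pool = nn.AdaptiveAvgPool2d(1)
print(pool(torch.randn(1, 10, 4, 4)).shape)   # torch.Size([1, 10, 1, 1])
print(pool(torch.randn(1, 10, 7, 7)).shape)   # torch.Size([1, 10, 1, 1])
###Output
_____no_output_____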
###Markdown
Let's try it out:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Using your GPU---------------If you're lucky enough to have access to a CUDA-capable GPU (you canrent one for about $0.50/hour from most cloud providers) you canuse it to speed up your code. First check that your GPU is working inPytorch:
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
And then create a device object for it:
###Code
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
###Output
_____no_output_____
###Markdown
Let's update ``preprocess`` to move batches to the GPU:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
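###Markdown
A quick check (added here, illustrative names) that batches now land on ``dev``; this prints a CUDA device on a GPU machine and ``cpu`` otherwise.
###Code
# Illustrative only: confirm where the preprocessed batches live.
xb_demo, yb_demo = next(iter(train_dl))
print(xb_demo.device, yb_demo.device)
###Output
_____no_output_____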
###Markdown
Finally, we can move our model to the GPU.
###Code
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
You should find it runs faster now:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
What is `torch.nn` *really*?============================by Jeremy Howard, `fast.ai `_. Thanks to Rachel Thomas and Francisco Ingham. We recommend running this tutorial as a notebook, not a script. To download the notebook (.ipynb) file,click the link at the top of the page.PyTorch provides the elegantly designed modules and classes `torch.nn `_ ,`torch.optim `_ ,`Dataset `_ ,and `DataLoader `_to help you create and train neural networks.In order to fully utilize their power and customizethem for your problem, you need to really understand exactly what they'redoing. To develop this understanding, we will first train basic neural neton the MNIST data set without using any features from these models; we willinitially only use the most basic PyTorch tensor functionality. Then, we willincrementally add one feature from ``torch.nn``, ``torch.optim``, ``Dataset``, or``DataLoader`` at a time, showing exactly what each piece does, and how itworks to make the code either more concise, or more flexible.**This tutorial assumes you already have PyTorch installed, and are familiarwith the basics of tensor operations.** (If you're familiar with Numpy arrayoperations, you'll find the PyTorch tensor operations used here nearly identical).MNIST data setup----------------We will use the classic `MNIST `_ dataset,which consists of black-and-white images of hand-drawn digits (between 0 and 9).We will use `pathlib `_for dealing with paths (part of the Python 3 standard library), and willdownload the dataset using`requests `_. We will onlyimport modules when we use them, so you can see exactly what's beingused at each point.
###Code
from pathlib import Path
import requests
DATA_PATH = Path("data")
PATH = DATA_PATH / "mnist"
PATH.mkdir(parents=True, exist_ok=True)
URL = "http://deeplearning.net/data/mnist/"
FILENAME = "mnist.pkl.gz"
if not (PATH / FILENAME).exists():
content = requests.get(URL + FILENAME).content
(PATH / FILENAME).open("wb").write(content)
###Output
_____no_output_____
###Markdown
This dataset is in numpy array format, and has been stored using pickle,a python-specific format for serializing data.
###Code
import pickle
import gzip
with gzip.open((PATH / FILENAME).as_posix(), "rb") as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding="latin-1")
###Output
_____no_output_____
###Markdown
Each image is 28 x 28, and is being stored as a flattened row of length784 (=28x28). Let's take a look at one; we need to reshape it to 2dfirst.
###Code
from matplotlib import pyplot
import numpy as np
pyplot.imshow(x_train[0].reshape((28, 28)), cmap="gray")
print(x_train.shape)
###Output
_____no_output_____
###Markdown
PyTorch uses ``torch.tensor``, rather than numpy arrays, so we need toconvert our data.
###Code
import torch
x_train, y_train, x_valid, y_valid = map(
torch.tensor, (x_train, y_train, x_valid, y_valid)
)
n, c = x_train.shape
x_train, x_train.shape, y_train.min(), y_train.max()
print(x_train, y_train)
print(x_train.shape)
print(y_train.min(), y_train.max())
###Output
_____no_output_____
###Markdown
Neural net from scratch (no torch.nn)---------------------------------------------Let's first create a model using nothing but PyTorch tensor operations. We're assumingyou're already familiar with the basics of neural networks. (If you're not, you canlearn them at `course.fast.ai `_).PyTorch provides methods to create random or zero-filled tensors, which we willuse to create our weights and bias for a simple linear model. These are just regulartensors, with one very special addition: we tell PyTorch that they require agradient. This causes PyTorch to record all of the operations done on the tensor,so that it can calculate the gradient during back-propagation *automatically*!For the weights, we set ``requires_grad`` **after** the initialization, since wedon't want that step included in the gradient. (Note that a trailling ``_`` inPyTorch signifies that the operation is performed in-place.)NoteWe are initializing the weights here with `Xavier initialisation `_ (by multiplying with 1/sqrt(n)).
###Code
import math
weights = torch.randn(784, 10) / math.sqrt(784)
weights.requires_grad_()
bias = torch.zeros(10, requires_grad=True)
###Output
_____no_output_____
###Markdown
Thanks to PyTorch's ability to calculate gradients automatically, we canuse any standard Python function (or callable object) as a model! Solet's just write a plain matrix multiplication and broadcasted additionto create a simple linear model. We also need an activation function, sowe'll write `log_softmax` and use it. Remember: although PyTorchprovides lots of pre-written loss functions, activation functions, andso forth, you can easily write your own using plain python. PyTorch willeven create fast GPU or vectorized CPU code for your functionautomatically.
###Code
def log_softmax(x):
return x - x.exp().sum(-1).log().unsqueeze(-1)
def model(xb):
return log_softmax(xb @ weights + bias)
###Output
_____no_output_____
###Markdown
In the above, the ``@`` stands for the dot product operation. We will callour function on one batch of data (in this case, 64 images). This isone *forward pass*. Note that our predictions won't be any better thanrandom at this stage, since we start with random weights.
###Code
bs = 64 # batch size
xb = x_train[0:bs] # a mini-batch from x
preds = model(xb) # predictions
preds[0], preds.shape
print(preds[0], preds.shape)
###Output
_____no_output_____
###Markdown
As you see, the ``preds`` tensor contains not only the tensor values, but also agradient function. We'll use this later to do backprop.Let's implement negative log-likelihood to use as the loss function(again, we can just use standard Python):
###Code
def nll(input, target):
return -input[range(target.shape[0]), target].mean()
loss_func = nll
###Output
_____no_output_____
###Markdown
Let's check our loss with our random model, so we can see if we improveafter a backprop pass later.
###Code
yb = y_train[0:bs]
print(loss_func(preds, yb))
###Output
_____no_output_____
###Markdown
Let's also implement a function to calculate the accuracy of our model.For each prediction, if the index with the largest value matches thetarget value, then the prediction was correct.
###Code
def accuracy(out, yb):
preds = torch.argmax(out, dim=1)
return (preds == yb).float().mean()
###Output
_____no_output_____
###Markdown
Let's check the accuracy of our random model, so we can see if ouraccuracy improves as our loss improves.
###Code
print(accuracy(preds, yb))
###Output
_____no_output_____
###Markdown
We can now run a training loop. For each iteration, we will:- select a mini-batch of data (of size ``bs``)- use the model to make predictions- calculate the loss- ``loss.backward()`` updates the gradients of the model, in this case, ``weights`` and ``bias``.We now use these gradients to update the weights and bias. We do thiswithin the ``torch.no_grad()`` context manager, because we do not want theseactions to be recorded for our next calculation of the gradient. You can readmore about how PyTorch's Autograd records operations`here `_.We then set thegradients to zero, so that we are ready for the next loop.Otherwise, our gradients would record a running tally of all the operationsthat had happened (i.e. ``loss.backward()`` *adds* the gradients to whatever isalready stored, rather than replacing them)... tip:: You can use the standard python debugger to step through PyTorch code, allowing you to check the various variable values at each step. Uncomment ``set_trace()`` below to try it out.
###Code
from IPython.core.debugger import set_trace
lr = 0.5 # learning rate
epochs = 2 # how many epochs to train for
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
# set_trace()
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
weights -= weights.grad * lr
bias -= bias.grad * lr
weights.grad.zero_()
bias.grad.zero_()
###Output
_____no_output_____
###Markdown
That's it: we've created and trained a minimal neural network (in this case, alogistic regression, since we have no hidden layers) entirely from scratch!Let's check the loss and accuracy and compare those to what we gotearlier. We expect that the loss will have decreased and accuracy tohave increased, and they have.
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
Using torch.nn.functional------------------------------We will now refactor our code, so that it does the same thing as before, onlywe'll start taking advantage of PyTorch's ``nn`` classes to make it more conciseand flexible. At each step from here, we should be making our code one or moreof: shorter, more understandable, and/or more flexible.The first and easiest step is to make our code shorter by replacing ourhand-written activation and loss functions with those from ``torch.nn.functional``(which is generally imported into the namespace ``F`` by convention). This modulecontains all the functions in the ``torch.nn`` library (whereas other parts of thelibrary contain classes). As well as a wide range of loss and activationfunctions, you'll also find here some convenient functions for creating neuralnets, such as pooling functions. (There are also functions for doing convolutions,linear layers, etc, but as we'll see, these are usually better handled usingother parts of the library.)If you're using negative log likelihood loss and log softmax activation,then Pytorch provides a single function ``F.cross_entropy`` that combinesthe two. So we can even remove the activation function from our model.
###Code
import torch.nn.functional as F
loss_func = F.cross_entropy
def model(xb):
return xb @ weights + bias
###Output
_____no_output_____
###Markdown
Note that we no longer call ``log_softmax`` in the ``model`` function. Let'sconfirm that our loss and accuracy are the same as before:
###Code
print(loss_func(model(xb), yb), accuracy(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using nn.Module-----------------------------Next up, we'll use ``nn.Module`` and ``nn.Parameter``, for a clearer and moreconcise training loop. We subclass ``nn.Module`` (which itself is a class andable to keep track of state). In this case, we want to create a class thatholds our weights, bias, and method for the forward step. ``nn.Module`` has anumber of attributes and methods (such as ``.parameters()`` and ``.zero_grad()``)which we will be using.Note``nn.Module`` (uppercase M) is a PyTorch specific concept, and is a class we'll be using a lot. ``nn.Module`` is not to be confused with the Python concept of a (lowercase ``m``) `module `_, which is a file of Python code that can be imported.
###Code
from torch import nn
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.weights = nn.Parameter(torch.randn(784, 10) / math.sqrt(784))
self.bias = nn.Parameter(torch.zeros(10))
def forward(self, xb):
return xb @ self.weights + self.bias
###Output
_____no_output_____
###Markdown
Since we're now using an object instead of just using a function, wefirst have to instantiate our model:
###Code
model = Mnist_Logistic()
###Output
_____no_output_____
###Markdown
Now we can calculate the loss in the same way as before. Note that``nn.Module`` objects are used as if they are functions (i.e they are*callable*), but behind the scenes Pytorch will call our ``forward``method automatically.
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Previously for our training loop we had to update the values for each parameterby name, and manually zero out the grads for each parameter separately, like this::: with torch.no_grad(): weights -= weights.grad * lr bias -= bias.grad * lr weights.grad.zero_() bias.grad.zero_()Now we can take advantage of model.parameters() and model.zero_grad() (whichare both defined by PyTorch for ``nn.Module``) to make those steps more conciseand less prone to the error of forgetting some of our parameters, particularlyif we had a more complicated model::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()We'll wrap our little training loop in a ``fit`` function so we can run itagain later.
###Code
def fit():
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
with torch.no_grad():
for p in model.parameters():
p -= p.grad * lr
model.zero_grad()
fit()
###Output
_____no_output_____
###Markdown
Let's double-check that our loss has gone down:
###Code
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using nn.Linear-------------------------We continue to refactor our code. Instead of manually defining andinitializing ``self.weights`` and ``self.bias``, and calculating ``xb @self.weights + self.bias``, we will instead use the Pytorch class`nn.Linear `_ for alinear layer, which does all that for us. Pytorch has many types ofpredefined layers that can greatly simplify our code, and often makes itfaster too.
###Code
class Mnist_Logistic(nn.Module):
def __init__(self):
super().__init__()
self.lin = nn.Linear(784, 10)
def forward(self, xb):
return self.lin(xb)
###Output
_____no_output_____
###Markdown
We instantiate our model and calculate the loss in the same way as before:
###Code
model = Mnist_Logistic()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
We are still able to use our same ``fit`` method as before.
###Code
fit()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using optim------------------------------Pytorch also has a package with various optimization algorithms, ``torch.optim``.We can use the ``step`` method from our optimizer to take a forward step, insteadof manually updating each parameter.This will let us replace our previous manually coded optimization step::: with torch.no_grad(): for p in model.parameters(): p -= p.grad * lr model.zero_grad()and instead use just::: opt.step() opt.zero_grad()(``optim.zero_grad()`` resets the gradient to 0 and we need to call it beforecomputing the gradient for the next minibatch.)
###Code
from torch import optim
###Output
_____no_output_____
###Markdown
We'll define a little function to create our model and optimizer so wecan reuse it in the future.
###Code
def get_model():
model = Mnist_Logistic()
return model, optim.SGD(model.parameters(), lr=lr)
model, opt = get_model()
print(loss_func(model(xb), yb))
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
start_i = i * bs
end_i = start_i + bs
xb = x_train[start_i:end_i]
yb = y_train[start_i:end_i]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using Dataset------------------------------PyTorch has an abstract Dataset class. A Dataset can be anything that hasa ``__len__`` function (called by Python's standard ``len`` function) anda ``__getitem__`` function as a way of indexing into it.`This tutorial `_walks through a nice example of creating a custom ``FacialLandmarkDataset`` classas a subclass of ``Dataset``.PyTorch's `TensorDataset `_is a Dataset wrapping tensors. By defining a length and way of indexing,this also gives us a way to iterate, index, and slice along the firstdimension of a tensor. This will make it easier to access both theindependent and dependent variables in the same line as we train.
###Code
from torch.utils.data import TensorDataset
###Output
_____no_output_____
###Markdown
Both ``x_train`` and ``y_train`` can be combined in a single ``TensorDataset``,which will be easier to iterate over and slice.
###Code
train_ds = TensorDataset(x_train, y_train)
###Output
_____no_output_____
###Markdown
Previously, we had to iterate through minibatches of x and y values separately::: xb = x_train[start_i:end_i] yb = y_train[start_i:end_i]Now, we can do these two steps together::: xb,yb = train_ds[i*bs : i*bs+bs]
###Code
model, opt = get_model()
for epoch in range(epochs):
for i in range((n - 1) // bs + 1):
xb, yb = train_ds[i * bs: i * bs + bs]
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Refactor using DataLoader------------------------------Pytorch's ``DataLoader`` is responsible for managing batches. You cancreate a ``DataLoader`` from any ``Dataset``. ``DataLoader`` makes it easierto iterate over batches. Rather than having to use ``train_ds[i*bs : i*bs+bs]``,the DataLoader gives us each minibatch automatically.
###Code
from torch.utils.data import DataLoader
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs)
###Output
_____no_output_____
###Markdown
Previously, our loop iterated over batches (xb, yb) like this::: for i in range((n-1)//bs + 1): xb,yb = train_ds[i*bs : i*bs+bs] pred = model(xb)Now, our loop is much cleaner, as (xb, yb) are loaded automatically from the data loader::: for xb,yb in train_dl: pred = model(xb)
###Code
model, opt = get_model()
for epoch in range(epochs):
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
print(loss_func(model(xb), yb))
###Output
_____no_output_____
###Markdown
Thanks to Pytorch's ``nn.Module``, ``nn.Parameter``, ``Dataset``, and ``DataLoader``, our training loop is now dramatically smaller and easier to understand. Let's now try to add the basic features necessary to create effective models in practice. Add validation ----------------------- In section 1, we were just trying to get a reasonable training loop set up for use on our training data. In reality, you **always** should also have a `validation set `_, in order to identify if you are overfitting. Shuffling the training data is `important `_ to prevent correlation between batches and overfitting. On the other hand, the validation loss will be identical whether we shuffle the validation set or not. Since shuffling takes extra time, it makes no sense to shuffle the validation data. We'll use a batch size for the validation set that is twice as large as that for the training set. This is because the validation set does not need backpropagation and thus takes less memory (it doesn't need to store the gradients). We take advantage of this to use a larger batch size and compute the loss more quickly.
###Code
train_ds = TensorDataset(x_train, y_train)
train_dl = DataLoader(train_ds, batch_size=bs, shuffle=True)
valid_ds = TensorDataset(x_valid, y_valid)
valid_dl = DataLoader(valid_ds, batch_size=bs * 2)
###Output
_____no_output_____
###Markdown
We will calculate and print the validation loss at the end of each epoch.(Note that we always call ``model.train()`` before training, and ``model.eval()``before inference, because these are used by layers such as ``nn.BatchNorm2d``and ``nn.Dropout`` to ensure appropriate behaviour for these different phases.)
###Code
model, opt = get_model()
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
pred = model(xb)
loss = loss_func(pred, yb)
loss.backward()
opt.step()
opt.zero_grad()
model.eval()
with torch.no_grad():
valid_loss = sum(loss_func(model(xb), yb) for xb, yb in valid_dl)
print(epoch, valid_loss / len(valid_dl))
###Output
_____no_output_____
###Markdown
Create fit() and get_data()----------------------------------We'll now do a little refactoring of our own. Since we go through a similarprocess twice of calculating the loss for both the training set and thevalidation set, let's make that into its own function, ``loss_batch``, whichcomputes the loss for one batch.We pass an optimizer in for the training set, and use it to performbackprop. For the validation set, we don't pass an optimizer, so themethod doesn't perform backprop.
###Code
def loss_batch(model, loss_func, xb, yb, opt=None):
loss = loss_func(model(xb), yb)
if opt is not None:
loss.backward()
opt.step()
opt.zero_grad()
return loss.item(), len(xb)
###Output
_____no_output_____
###Markdown
``fit`` runs the necessary operations to train our model and compute thetraining and validation losses for each epoch.
###Code
import numpy as np
def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
for epoch in range(epochs):
model.train()
for xb, yb in train_dl:
loss_batch(model, loss_func, xb, yb, opt)
model.eval()
with torch.no_grad():
losses, nums = zip(
*[loss_batch(model, loss_func, xb, yb) for xb, yb in valid_dl]
)
val_loss = np.sum(np.multiply(losses, nums)) / np.sum(nums)
print(epoch, val_loss)
###Output
_____no_output_____
###Markdown
``get_data`` returns dataloaders for the training and validation sets.
###Code
def get_data(train_ds, valid_ds, bs):
return (
DataLoader(train_ds, batch_size=bs, shuffle=True),
DataLoader(valid_ds, batch_size=bs * 2),
)
###Output
_____no_output_____
###Markdown
Now, our whole process of obtaining the data loaders and fitting themodel can be run in 3 lines of code:
###Code
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
model, opt = get_model()
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
You can use these basic 3 lines of code to train a wide variety of models.Let's see if we can use them to train a convolutional neural network (CNN)!Switch to CNN-------------We are now going to build our neural network with three convolutional layers.Because none of the functions in the previous section assume anything aboutthe model form, we'll be able to use them to train a CNN without any modification.We will use Pytorch's predefined`Conv2d `_ classas our convolutional layer. We define a CNN with 3 convolutional layers.Each convolution is followed by a ReLU. At the end, we perform anaverage pooling. (Note that ``view`` is PyTorch's version of numpy's``reshape``)
###Code
class Mnist_CNN(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1)
self.conv2 = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
self.conv3 = nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1)
def forward(self, xb):
xb = xb.view(-1, 1, 28, 28)
xb = F.relu(self.conv1(xb))
xb = F.relu(self.conv2(xb))
xb = F.relu(self.conv3(xb))
xb = F.avg_pool2d(xb, 4)
return xb.view(-1, xb.size(1))
lr = 0.1
###Output
_____no_output_____
###Markdown
`Momentum `_ is a variation onstochastic gradient descent that takes previous updates into account as welland generally leads to faster training.
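As a rough sketch of the idea (glossing over details of PyTorch's actual implementation such as dampening and weight decay), SGD with momentum keeps a running velocity per parameter and steps along it:
v = momentum * v + grad    # accumulate a decaying sum of past gradients
p = p - lr * v             # update along the smoothed direction
so updates that consistently point in the same direction reinforce each other instead of being recomputed from scratch for each minibatch.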
###Code
model = Mnist_CNN()
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
nn.Sequential------------------------``torch.nn`` has another handy class we can use to simplify our code: `Sequential `_. A ``Sequential`` object runs each of the modules contained within it, in a sequential manner. This is a simpler way of writing our neural network. To take advantage of this, we need to be able to easily define a **custom layer** from a given function. For instance, PyTorch doesn't have a `view` layer, and we need to create one for our network. ``Lambda`` will create a layer that we can then use when defining a network with ``Sequential``.
###Code
class Lambda(nn.Module):
def __init__(self, func):
super().__init__()
self.func = func
def forward(self, x):
return self.func(x)
def preprocess(x):
return x.view(-1, 1, 28, 28)
###Output
_____no_output_____
###Markdown
The model created with ``Sequential`` is simply:
###Code
model = nn.Sequential(
Lambda(preprocess),
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AvgPool2d(4),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Wrapping DataLoader-----------------------------Our CNN is fairly concise, but it only works with MNIST, because: - It assumes the input is a 28\*28 long vector - It assumes that the final CNN grid size is 4\*4 (since that's the average pooling kernel size we used). Let's get rid of these two assumptions, so our model works with any 2d single-channel image. First, we can remove the initial Lambda layer by moving the data preprocessing into a generator:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28), y
class WrappedDataLoader:
def __init__(self, dl, func):
self.dl = dl
self.func = func
def __len__(self):
return len(self.dl)
def __iter__(self):
batches = iter(self.dl)
for b in batches:
yield (self.func(*b))
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Next, we can replace ``nn.AvgPool2d`` with ``nn.AdaptiveAvgPool2d``, whichallows us to define the size of the *output* tensor we want, rather thanthe *input* tensor we have. As a result, our model will work with anysize input.
###Code
model = nn.Sequential(
nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.Conv2d(16, 10, kernel_size=3, stride=2, padding=1),
nn.ReLU(),
nn.AdaptiveAvgPool2d(1),
Lambda(lambda x: x.view(x.size(0), -1)),
)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
Let's try it out:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____
###Markdown
Using your GPU---------------If you're lucky enough to have access to a CUDA-capable GPU (you canrent one for about $0.50/hour from most cloud providers) you canuse it to speed up your code. First check that your GPU is working inPytorch:
###Code
print(torch.cuda.is_available())
###Output
_____no_output_____
###Markdown
And then create a device object for it:
###Code
dev = torch.device(
"cuda") if torch.cuda.is_available() else torch.device("cpu")
###Output
_____no_output_____
###Markdown
Let's update ``preprocess`` to move batches to the GPU:
###Code
def preprocess(x, y):
return x.view(-1, 1, 28, 28).to(dev), y.to(dev)
train_dl, valid_dl = get_data(train_ds, valid_ds, bs)
train_dl = WrappedDataLoader(train_dl, preprocess)
valid_dl = WrappedDataLoader(valid_dl, preprocess)
###Output
_____no_output_____
###Markdown
Finally, we can move our model to the GPU.
###Code
model.to(dev)
opt = optim.SGD(model.parameters(), lr=lr, momentum=0.9)
###Output
_____no_output_____
###Markdown
You should find it runs faster now:
###Code
fit(epochs, model, loss_func, opt, train_dl, valid_dl)
###Output
_____no_output_____ |
Exercises/Exercise-9-Program-Testing-and-Data-Validation.ipynb | ###Markdown
Exercise 9 Related Notes: - Fundamentals_6_Data_Validation_And_Program_Testing Exercise 9.1 Consider the code in the code cell below. Clearly, there is a bug in this code. Identify the bug and then use a `try-except` statement to “fix” the code so that it works as intended. (Note that you may not utilise `else` or `finally` in the `try-except` statement.)
###Code
numbers = [x for x in range(1000)]
i = 0
while True:
print(str(numbers[i]))
i += 1
numbers = [x for x in range(1000)]+[1001]
i = 0
try:
while True:
print(str(numbers[i]))
i += 1
except IndexError:
    pass  # ran past the end of the list, so just stop quietly
numbers = [x for x in range(1000)]+[1001]
i = 0
while True:
try:
print(str(numbers[i]))
i += 1
except IndexError:
print("You're printing out of the index range, brudda.")
break
###Output
0
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
68
69
70
71
72
73
74
75
76
77
78
79
80
81
82
83
84
85
86
87
88
89
90
91
92
93
94
95
96
97
98
99
100
101
102
103
104
105
106
107
108
109
110
111
112
113
114
115
116
117
118
119
120
121
122
123
124
125
126
127
128
129
130
131
132
133
134
135
136
137
138
139
140
141
142
143
144
145
146
147
148
149
150
151
152
153
154
155
156
157
158
159
160
161
162
163
164
165
166
167
168
169
170
171
172
173
174
175
176
177
178
179
180
181
182
183
184
185
186
187
188
189
190
191
192
193
194
195
196
197
198
199
200
201
202
203
204
205
206
207
208
209
210
211
212
213
214
215
216
217
218
219
220
221
222
223
224
225
226
227
228
229
230
231
232
233
234
235
236
237
238
239
240
241
242
243
244
245
246
247
248
249
250
251
252
253
254
255
256
257
258
259
260
261
262
263
264
265
266
267
268
269
270
271
272
273
274
275
276
277
278
279
280
281
282
283
284
285
286
287
288
289
290
291
292
293
294
295
296
297
298
299
300
301
302
303
304
305
306
307
308
309
310
311
312
313
314
315
316
317
318
319
320
321
322
323
324
325
326
327
328
329
330
331
332
333
334
335
336
337
338
339
340
341
342
343
344
345
346
347
348
349
350
351
352
353
354
355
356
357
358
359
360
361
362
363
364
365
366
367
368
369
370
371
372
373
374
375
376
377
378
379
380
381
382
383
384
385
386
387
388
389
390
391
392
393
394
395
396
397
398
399
400
401
402
403
404
405
406
407
408
409
410
411
412
413
414
415
416
417
418
419
420
421
422
423
424
425
426
427
428
429
430
431
432
433
434
435
436
437
438
439
440
441
442
443
444
445
446
447
448
449
450
451
452
453
454
455
456
457
458
459
460
461
462
463
464
465
466
467
468
469
470
471
472
473
474
475
476
477
478
479
480
481
482
483
484
485
486
487
488
489
490
491
492
493
494
495
496
497
498
499
500
501
502
503
504
505
506
507
508
509
510
511
512
513
514
515
516
517
518
519
520
521
522
523
524
525
526
527
528
529
530
531
532
533
534
535
536
537
538
539
540
541
542
543
544
545
546
547
548
549
550
551
552
553
554
555
556
557
558
559
560
561
562
563
564
565
566
567
568
569
570
571
572
573
574
575
576
577
578
579
580
581
582
583
584
585
586
587
588
589
590
591
592
593
594
595
596
597
598
599
600
601
602
603
604
605
606
607
608
609
610
611
612
613
614
615
616
617
618
619
620
621
622
623
624
625
626
627
628
629
630
631
632
633
634
635
636
637
638
639
640
641
642
643
644
645
646
647
648
649
650
651
652
653
654
655
656
657
658
659
660
661
662
663
664
665
666
667
668
669
670
671
672
673
674
675
676
677
678
679
680
681
682
683
684
685
686
687
688
689
690
691
692
693
694
695
696
697
698
699
700
701
702
703
704
705
706
707
708
709
710
711
712
713
714
715
716
717
718
719
720
721
722
723
724
725
726
727
728
729
730
731
732
733
734
735
736
737
738
739
740
741
742
743
744
745
746
747
748
749
750
751
752
753
754
755
756
757
758
759
760
761
762
763
764
765
766
767
768
769
770
771
772
773
774
775
776
777
778
779
780
781
782
783
784
785
786
787
788
789
790
791
792
793
794
795
796
797
798
799
800
801
802
803
804
805
806
807
808
809
810
811
812
813
814
815
816
817
818
819
820
821
822
823
824
825
826
827
828
829
830
831
832
833
834
835
836
837
838
839
840
841
842
843
844
845
846
847
848
849
850
851
852
853
854
855
856
857
858
859
860
861
862
863
864
865
866
867
868
869
870
871
872
873
874
875
876
877
878
879
880
881
882
883
884
885
886
887
888
889
890
891
892
893
894
895
896
897
898
899
900
901
902
903
904
905
906
907
908
909
910
911
912
913
914
915
916
917
918
919
920
921
922
923
924
925
926
927
928
929
930
931
932
933
934
935
936
937
938
939
940
941
942
943
944
945
946
947
948
949
950
951
952
953
954
955
956
957
958
959
960
961
962
963
964
965
966
967
968
969
970
971
972
973
974
975
976
977
978
979
980
981
982
983
984
985
986
987
988
989
990
991
992
993
994
995
996
997
998
999
1001
You're printing out of the index range, brudda.
###Markdown
Exercise 9.2 Consider the code in the code cell below. Once again, there is a problem with this code. Use the `try-except` statement to ensure the code does not cause a runtime error. (Note: you may not change the way name is generated or used.)
###Code
from random import randint
name = ""
for i in range(100):
name = name + str(randint(0, 9))
name = name + ".txt"
fileHandle = open(name)
data = f.read()
print(data)
f.close()
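# Two things go wrong above: the file named by the random 100-digit string almost
# certainly does not exist, so open() raises FileNotFoundError, and the handle is
# opened as fileHandle but then read through the undefined name f.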
#YOUR_CODE_HERE
for i in range(100):
name = str(randint(0,9)) + ".txt"
try:
fileHandle = open(name)
data = fileHandle.read()
print(data)
fileHandle.close()
except FileNotFoundError:
fileHandle = open(name, 'w')
fileHandle.close()
###Output
###Markdown
Exercise 9.3 Write code that continually asks for rational number input from the user (until the string `exit` is entered). For each value entered (if it is not the string `exit`), utilise exception handling to verify that the input is indeed a rational number before saving it to the file called `RATIONALS.TXT`. (Note that your code should open, write, and close the file with each entry. Also comment on why this may be a more prudent course of action.)
###Code
#YOUR_CODE_HERE
# Opening, writing and closing the file for each entry is the more prudent approach
# because each validated value is flushed to disk immediately, so if the program
# crashes or is interrupted, the entries already accepted are not lost.
while True:
    a = input("input rational no.")
    if a == 'exit':
        break
    try:
        a = float(a)
        f = open('rationals.txt', 'a')
        f.write(str(a) + '\n')
        f.close()
    except ValueError:
        print('bullsh*t')
###Output
input rational no.apple
bullsh*t
input rational no.asdf
bullsh*t
input rational no.math.pi
bullsh*t
###Markdown
Exercise 9.4 The file `WORDS.TXT` under the folder `resources` contains a list of single-word computing terms used in a textbook. Each entry consists of a term on one line followed by its number of occurrences on the next line. One of the file entries is:
>```python
>program 
>52
>```
This means that after a complete scan of the textbook the word `program` was found 52 times. By utilising the `try-except` statement, write program code to find and output the term with the highest number of occurrences.
###Code
#YOUR_CODE_HERE
def maxim():
with open("resources/WORDS.TXT",'r') as f:
x = [i.strip() for i in f.readlines()]
y = []
print(x)
for j in x:
try:
y.append(int(j))
except ValueError:
None
return x[x.index(str(max(y))) - 1] # takes the one before the value itself.
maxim()
###Output
['compiler', '12', 'interpreter', '9', 'procedure', '22', 'function', '29', 'statement', '24', 'program', '52', 'encapsulation', '12', 'compression', '4', 'field', '11', 'attribute', '17', 'entity', '18', 'magnetic', '9', 'optical', '14', 'barcode', '8', 'anti-virus', '7', 'boolean', '12', 'buffer', '9', 'star', '3', 'bridge', '2', 'router', '13', 'hub', '11', 'identifier', '23', 'system', '86', 'global', '19', 'local', '26', 'command', '28', 'translator', '7', 'lexical', '5', 'syntax', '29', 'infix', '14', 'postfix', '12', 'iteration', '19', 'iterative', '14', 'recursive', '19', 'infra-red', '15', 'communications', '27', 'handshaking', '11', 'protocol', '15', 'integer', '23', 'real', '19', 'string', '27', 'character', '24', 'loop', '23', 'selection', '21', 'interrupt', '15', 'Internet', '29', 'initialisation', '8', 'logic', '4', 'library', '2', 'declarative', '4', 'object-oriented', '9', 'encryption', '9', 'computer', '86', 'analysis', '6', 'design', '18', 'implementation', '6', 'testing', '28', 'test', '11', 'wireless', '4', 'network', '28', 'assignment', '13', 'declaration', '9', 'procedural', '4', 'array', '16', 'processor', '4', 'disk', '6', 'disc', '8', 'bus', '2', 'firewall', '1', 'domain', '3', 'website', '8', 'memory', '17', 'verification', '3', 'validation', '5', 'vector', '2', 'bitmap', '6', 'variable', '20']
###Markdown
Exercise 9.5 ISBN The International Standard Book Number (ISBN) is a numeric commercial book identifier which is intended to be unique. Publishers purchase ISBNs from an affiliate of the International ISBN Agency. Write code that - takes in a string input, - returns `True` if the string is either a valid ISBN-10 or a valid ISBN-13 number. Each line in the text file `ISBN_EXERCISE.TXT` under the folder `resources` contains an ISBN-10 or ISBN-13 number, which can be valid or invalid. Write a program to: - print out the valid ISBNs in the file. - print out the number of valid ISBNs in the file.
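As a quick reference for the checks involved (a worked example, not part of the original exercise): an ISBN-10 is valid when its digits, weighted 10 down to 1, sum to a multiple of 11 (the character X counts as 10), and an ISBN-13 is valid when its digits, weighted alternately 1 and 3, sum to a multiple of 10. For instance, 0-306-40615-2 gives 0·10 + 3·9 + 0·8 + 6·7 + 4·6 + 0·5 + 6·4 + 1·3 + 5·2 + 2·1 = 132 = 12·11, so it is a valid ISBN-10, while 978-0-306-40615-7 gives 9 + 21 + 8 + 0 + 3 + 0 + 6 + 12 + 0 + 18 + 1 + 15 + 7 = 100, so it is a valid ISBN-13.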
###Code
import csv
def isValid(a):
    # Keep only the digits (X counts as 10) and drop dashes and spaces.
    b = [10 if x == 'X' else int(x) for x in a if x not in '- ']
    if len(b) == 10:
        # ISBN-10: weighted sum with weights 10..1 must be divisible by 11.
        total = 0
        for i in range(len(b)):
            total += (len(b) - i) * b[i]
        return (total / 11).is_integer()
    elif len(b) == 13:
        # ISBN-13: weights alternate 1 and 3, and the sum must be divisible by 10.
        total = 0
        for i in range(len(b)):
            if i % 2 == 0:
                total += b[i]
            else:
                total += b[i] * 3
        return (total / 10).is_integer()
    else:
        return False
with open('ISBN_EXERCISE.txt') as f:
reader = csv.reader(f,delimiter = ':')
count = 0
for row in reader:
if isValid(row[1]):
count +=1
print(row)
print(f'{count} valid ISBN')
isbn_10_weight=list(range(10,0,-1))
print(isbn_10_weight)
def ISBN10_check(s):
vec=[int(i) for i in list(s)]
return (dot(isbn_10_weight,vec)%11)==0
def ISBN10_check(s):
x=list(s)
l=len(x)
tot=0
for i in range(l):
tot=tot+int(x[i])*(l-i)
return tot%11==0
def dot(vec1,vec2):
return sum([vec1[i]*vec2[i] for i in range(13)])
isbn_13_weight=[1,3]*6+[1]
def ISBN13_check(s):
vec=[int(i) for i in list(s)]
return (dot(isbn_13_weight,vec)%10)==0
with open('ISBN_EXERCISE.txt') as f:
data=f.readlines()
data=[x.split(':') for x in data]
data=[[x[0].strip(),x[1].strip().replace('-','')] for x in data]
isbn_10_count=0
isbn_13_count=0
for x in data:
if x[0]=='ISBN-10':
if len(x[1])==10 and ISBN10_check(x[1]):
print(f'{x[1]} is a valid {x[0]} number')
isbn_10_count +=1
else:
print(f'This {x[0]} number, {x[1]}, fails the length check or the check digit doesnt match.')
if x[0]=='ISBN-13':
if len(x[1])==13 and ISBN13_check(x[1]):
print(f'{x[1]} is a valid {x[0]} number')
isbn_13_count +=1
else:
print(f'This {x[0]} number, {x[1]}, fails the length check or the check digit doesnt match.')
print(f'There are {isbn_10_count} valid ISBN-10 numbers and {isbn_13_count} for ISBN-13 numbers. Thus, total is {isbn_10_count + isbn_13_count}')
def f(x):
a=input()
return 1/x
###Output
_____no_output_____ |
spreadsheet_handling.ipynb | ###Markdown
Dictionaries
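The cells below assume a `csv.DictReader` called `dictreader` (and a first row `line`) were created in an earlier cell that is not shown here. A minimal, assumed setup would look something like this, with the filename as a placeholder:
import csv

csvfile = open('shipments.csv')        # placeholder name for the shipments CSV used in this notebook
dictreader = csv.DictReader(csvfile)   # yields one dict per row, keyed by the header row
line = next(dictreader)                # grab a single row so line['SHIPMT_WGHT'] below has a value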
###Code
line['SHIPMT_WGHT']
for line in dictreader: # "loop" or a "for loop"
print('The ' + line['DEST_MA'] + ' weighs ' + line['SHIPMT_WGHT'])
###Output
The 41740 weighs 11
The 314 weighs 5134
The 556 weighs 6
The 99999 weighs 527
The 99999 weighs 1132
The 99999 weighs 13501
The 99999 weighs 4
The 99999 weighs 12826
The 99999 weighs 22
The 464 weighs 1
The 99999 weighs 3960
The 408 weighs 3
The 99999 weighs 49037
The 99999 weighs 11
The 198 weighs 30
The 99999 weighs 65994
The 99999 weighs 110
The 288 weighs 6
The 400 weighs 3
The 184 weighs 3223
The 99999 weighs 50415
The 99999 weighs 513
The 268 weighs 3
The 99999 weighs 33269
The 99999 weighs 43304
The 122 weighs 7
The 99999 weighs 32405
The 312 weighs 378
The 99999 weighs 407
The 99999 weighs 3356
The 482 weighs 1067
The 122 weighs 1026
The 148 weighs 9947
The 176 weighs 473
The 104 weighs 16
The 99999 weighs 57038
The 378 weighs 1008
The 99999 weighs 275
The 176 weighs 1159
The 545 weighs 41
The 376 weighs 4198
The 370 weighs 796
The 41700 weighs 116
The 198 weighs 5
The 99999 weighs 486
The 268 weighs 2
The 408 weighs 42
The 220 weighs 23524
The 376 weighs 23472
The 408 weighs 938
The 312 weighs 1
The 420 weighs 537
The 99999 weighs 5
The 99999 weighs 102
The 206 weighs 27366
The 99999 weighs 51141
The 99999 weighs 1
The 99999 weighs 1109
The 99999 weighs 12293
The 148 weighs 43101
The 99999 weighs 540
The 99999 weighs 24922
The 348 weighs 1742
The 99999 weighs 3
The 178 weighs 194
The 99999 weighs 4
The 416 weighs 33308
The 545 weighs 165
The 312 weighs 8869
The 348 weighs 47
The 176 weighs 45313
The 99999 weighs 45
The 184 weighs 51470
The 408 weighs 5
The 99999 weighs 9221
The 99999 weighs 362
The 408 weighs 4
The 99999 weighs 143
The 122 weighs 14688
The 273 weighs 13
The 216 weighs 650
The 99999 weighs 9525
The 482 weighs 47767
The 348 weighs 261
The 238 weighs 1371
The 122 weighs 4233
The 238 weighs 7
The 99999 weighs 577
The 400 weighs 12
The 408 weighs 1
The 206 weighs 16
The 99999 weighs 104
The 40060 weighs 48016
The 378 weighs 9
The 38060 weighs 60311
The 99999 weighs 94
The 176 weighs 4
The 212 weighs 213
The 500 weighs 1
The 450 weighs 75
The 430 weighs 38
The 206 weighs 13195
The 408 weighs 45948
The 312 weighs 28594
The 312 weighs 10
The 99999 weighs 227
The 99999 weighs 469
The 176 weighs 1
The 99999 weighs 8
The 428 weighs 232
The 38060 weighs 4
The 376 weighs 637
The 41700 weighs 28
The 408 weighs 373
The 99999 weighs 60527
The 294 weighs 5348
The 47900 weighs 41384
The 148 weighs 1
The 148 weighs 14
The 178 weighs 2
The 312 weighs 825
The 99999 weighs 9128
The 482 weighs 3448
The 99999 weighs 4
The 212 weighs 10
The 99999 weighs 1
The 99999 weighs 14062
The 206 weighs 728
The 536 weighs 16265
The 172 weighs 5
The 99999 weighs 13
The 99999 weighs 4
The 348 weighs 27
The 488 weighs 33
The 99999 weighs 52435
The 476 weighs 12
The 348 weighs 12
The 99999 weighs 36510
The 99999 weighs 2
The 104 weighs 31988
The 440 weighs 216
The 472 weighs 7
The 220 weighs 75
The 238 weighs 73
The 99999 weighs 36
The 99999 weighs 1
The 99999 weighs 28881
The 348 weighs 2
The 184 weighs 2
The 220 weighs 456
The 41740 weighs 49
The 99999 weighs 40909
The 99999 weighs 29
The 416 weighs 9
The 99999 weighs 9264
The 348 weighs 1
The 99999 weighs 9
The 408 weighs 330
The 488 weighs 1
The 430 weighs 5
The 99999 weighs 23
The 160 weighs 29
The 314 weighs 3801
The 99999 weighs 9
The 545 weighs 22
The 99999 weighs 23
The 198 weighs 110
The 99999 weighs 11
The 99999 weighs 28239
The 99999 weighs 42482
The 488 weighs 1
The 99999 weighs 499
The 99999 weighs 1
The 148 weighs 16
The 99999 weighs 1
The 536 weighs 193
The 408 weighs 2
The 428 weighs 17752
The 99999 weighs 6
The 428 weighs 13175
The 99999 weighs 118
The 99999 weighs 21
The 41700 weighs 1
The 420 weighs 11
The 99999 weighs 6880
The 16700 weighs 46764
The 408 weighs 2089
The 99999 weighs 54
The 25540 weighs 5
The 348 weighs 13
The 198 weighs 655
The 273 weighs 260
The 99999 weighs 1
The 12940 weighs 1117478
The 204 weighs 2377
The 500 weighs 1
The 488 weighs 2
The 206 weighs 21
The 184 weighs 48605
The 99999 weighs 36
The 99999 weighs 6591
The 99999 weighs 39615
The 99999 weighs 36842
The 378 weighs 3
The 99999 weighs 330
The 99999 weighs 2
The 99999 weighs 4
The 476 weighs 450
The 25540 weighs 9
The 99999 weighs 851
The 238 weighs 7039
The 376 weighs 31353
The 476 weighs 2
The 25540 weighs 3
The 268 weighs 1759
The 99999 weighs 11
The 99999 weighs 110
The 178 weighs 105
The 370 weighs 16495
The 488 weighs 60
The 500 weighs 207
The 99999 weighs 44369
The 482 weighs 836
The 268 weighs 3
The 482 weighs 3
The 99999 weighs 40327
The 378 weighs 69
The 99999 weighs 2
The 99999 weighs 2
The 148 weighs 80
The 122 weighs 4399
The 99999 weighs 5
The 99999 weighs 158
The 148 weighs 8
The 99999 weighs 3
The 99999 weighs 15
The 348 weighs 140
The 368 weighs 1866
The 99999 weighs 6
The 378 weighs 33620
The 122 weighs 10
The 370 weighs 7
The 99999 weighs 21992
The 488 weighs 1869
The 99999 weighs 36
The 99999 weighs 62
The 288 weighs 448
The 99999 weighs 167
The 99999 weighs 50868
The 348 weighs 2
The 41740 weighs 5
The 258 weighs 8569
The 99999 weighs 1
The 556 weighs 11
The 464 weighs 13
The 99999 weighs 39887
The 176 weighs 1092
The 99999 weighs 77
The 184 weighs 127
The 41740 weighs 1
The 99999 weighs 4214
The 488 weighs 2363
The 12580 weighs 2
The 488 weighs 22
The 142 weighs 418
The 45300 weighs 123
The 450 weighs 14
The 99999 weighs 18008
The 422 weighs 3678
The 428 weighs 3
The 488 weighs 2
The 538 weighs 5
The 348 weighs 46263
The 350 weighs 44
The 99999 weighs 2
The 99999 weighs 7
The 99999 weighs 110
The 440 weighs 2
The 428 weighs 364
The 99999 weighs 47433
The 348 weighs 48338
The 99999 weighs 89029
The 408 weighs 1216
The 99999 weighs 1
The 47900 weighs 1801
The 99999 weighs 36419
The 476 weighs 4
The 176 weighs 14
The 206 weighs 18
The 500 weighs 1355
The 488 weighs 45651
The 378 weighs 3672
The 206 weighs 41208
The 41740 weighs 4
The 99999 weighs 880
The 99999 weighs 38
The 99999 weighs 5
The 348 weighs 33252
The 99999 weighs 40245
The 12420 weighs 4
The 99999 weighs 26
The 41740 weighs 57
The 99999 weighs 14
The 294 weighs 55862
The 288 weighs 30
The 99999 weighs 900
The 99999 weighs 606244
The 408 weighs 42422
The 99999 weighs 9
The 300 weighs 497
The 500 weighs 1
The 500 weighs 50820
The 178 weighs 1
The 99999 weighs 360
The 99999 weighs 707
The 268 weighs 65
The 99999 weighs 482
The 428 weighs 9
The 428 weighs 8670
The 99999 weighs 2
The 99999 weighs 5
The 99999 weighs 13
The 488 weighs 55
The 408 weighs 92
The 122 weighs 247
The 408 weighs 308
The 348 weighs 2
The 348 weighs 67
The 99999 weighs 746
The 99999 weighs 778
The 408 weighs 2199
The 488 weighs 1078
The 260 weighs 3
The 99999 weighs 7
The 184 weighs 360
The 488 weighs 1
The 142 weighs 3749
The 99999 weighs 1733
The 184 weighs 20
The 99999 weighs 65426
The 408 weighs 2
The 220 weighs 41328
The 378 weighs 248
The 488 weighs 3088
The 47900 weighs 35459
The 99999 weighs 47
The 99999 weighs 2
The 41740 weighs 1515
The 482 weighs 11706
The 176 weighs 45
The 99999 weighs 9
The 99999 weighs 60
The 206 weighs 2
The 99999 weighs 23138
The 408 weighs 35171
The 99999 weighs 28328
The 12580 weighs 70
The 99999 weighs 65798
The 99999 weighs 17602
The 300 weighs 1859
The 476 weighs 1
The 428 weighs 32
The 184 weighs 30939
The 488 weighs 2
The 122 weighs 47287
The 294 weighs 6894
The 440 weighs 2
The 99999 weighs 550
The 99999 weighs 4
The 348 weighs 37819
The 428 weighs 12457
The 99999 weighs 4
The 488 weighs 60255
The 38060 weighs 106
The 176 weighs 1429
The 99999 weighs 450
The 268 weighs 9
The 99999 weighs 29848
The 212 weighs 189
The 45300 weighs 22331
The 99999 weighs 29551
The 99999 weighs 28815
The 206 weighs 48061
The 273 weighs 1
The 99999 weighs 8920
The 348 weighs 5
The 12420 weighs 243
The 370 weighs 26
The 99999 weighs 630
The 99999 weighs 15310
The 99999 weighs 56495
The 538 weighs 141
The 99999 weighs 180
The 482 weighs 30212
The 220 weighs 34
The 408 weighs 1
The 148 weighs 1
The 99999 weighs 1
The 99999 weighs 3289
The 99999 weighs 46196
The 198 weighs 17910
The 99999 weighs 40521
The 440 weighs 5278
The 184 weighs 23412
The 268 weighs 10
The 172 weighs 357
The 122 weighs 56032
The 258 weighs 1081
The 476 weighs 25375
The 176 weighs 1
The 122 weighs 32287
The 99999 weighs 4
The 99999 weighs 7
The 350 weighs 13520
The 99999 weighs 46906
The 408 weighs 1
The 500 weighs 2749
The 99999 weighs 1233
The 312 weighs 3
The 99999 weighs 28833
The 288 weighs 14
The 430 weighs 1
The 368 weighs 6303
The 47900 weighs 1536
The 99999 weighs 8830
The 184 weighs 43674
The 104 weighs 38091
The 212 weighs 328
The 99999 weighs 4343
The 420 weighs 13
The 370 weighs 2199
The 428 weighs 1354
The 99999 weighs 71
The 99999 weighs 550
The 300 weighs 15
The 273 weighs 1
The 99999 weighs 9
The 99999 weighs 1
The 99999 weighs 1979
The 99999 weighs 58406
The 99999 weighs 225
The 25540 weighs 10
The 99999 weighs 43568
The 406 weighs 2
The 268 weighs 20631
The 99999 weighs 11643
The 99999 weighs 711
The 212 weighs 41873
The 99999 weighs 117965
The 99999 weighs 11971
The 220 weighs 12
The 29340 weighs 2
The 312 weighs 1
The 99999 weighs 27015
The 288 weighs 35109
The 99999 weighs 901
The 488 weighs 90
The 122 weighs 32418
The 178 weighs 66
The 370 weighs 23
The 99999 weighs 19721
The 99999 weighs 1
The 122 weighs 3019
The 99999 weighs 2
The 430 weighs 3
The 99999 weighs 2
The 99999 weighs 14
The 408 weighs 2747
The 408 weighs 7
The 99999 weighs 65222
The 220 weighs 8160
The 176 weighs 22226
The 368 weighs 3
The 348 weighs 684
The 12420 weighs 64
The 99999 weighs 85603
The 178 weighs 27
The 488 weighs 252
The 99999 weighs 3
The 99999 weighs 216817
The 450 weighs 725
The 99999 weighs 2
The 99999 weighs 923
The 220 weighs 407
The 99999 weighs 2
The 406 weighs 30785
The 378 weighs 275
The 99999 weighs 16
The 99999 weighs 27
The 41740 weighs 16356
The 12940 weighs 1
The 206 weighs 1
The 312 weighs 765
The 99999 weighs 1
The 99999 weighs 22
The 206 weighs 11
The 370 weighs 35
The 545 weighs 4
The 406 weighs 56139
The 99999 weighs 53350
The 122 weighs 32575
The 99999 weighs 6754
The 350 weighs 40146
The 99999 weighs 1041
The 440 weighs 28853
The 176 weighs 39714
The 41700 weighs 607
The 212 weighs 26134
The 99999 weighs 55059
The 148 weighs 15
The 99999 weighs 172
The 99999 weighs 150930
The 99999 weighs 3562
The 148 weighs 1
The 99999 weighs 49296
The 378 weighs 7270
The 422 weighs 48083
The 172 weighs 29
The 99999 weighs 499
The 314 weighs 144
The 332 weighs 9
The 99999 weighs 60514
The 220 weighs 1709
The 176 weighs 15506
The 350 weighs 113
The 99999 weighs 19
The 348 weighs 156
The 378 weighs 6432
The 408 weighs 67
The 206 weighs 15769
The 536 weighs 225
The 538 weighs 2067
The 104 weighs 6
The 148 weighs 2
The 216 weighs 26387
The 99999 weighs 20241
The 46520 weighs 24188
The 408 weighs 8290
The 99999 weighs 4783
The 294 weighs 1
The 294 weighs 22588
The 99999 weighs 44415
The 99999 weighs 858
The 312 weighs 1
The 148 weighs 50
The 99999 weighs 39679
The 99999 weighs 1
The 314 weighs 115
The 416 weighs 48702
The 176 weighs 6182
The 99999 weighs 23
The 99999 weighs 10
The 176 weighs 173548
The 41700 weighs 380
The 428 weighs 10934
The 99999 weighs 27
The 488 weighs 9
The 45300 weighs 14
The 288 weighs 16700
The 99999 weighs 1
The 99999 weighs 830
The 408 weighs 25
The 99999 weighs 23
The 99999 weighs 44002
The 99999 weighs 28171
The 408 weighs 2199
The 178 weighs 4
The 99999 weighs 38986
The 400 weighs 811
The 99999 weighs 3
The 406 weighs 9
The 99999 weighs 18024
The 160 weighs 11
The 176 weighs 1039
The 428 weighs 1986
The 46520 weighs 483
The 408 weighs 58630
The 99999 weighs 1
The 40060 weighs 1
The 99999 weighs 136
The 99999 weighs 72424
The 148 weighs 385
The 476 weighs 35807
The 99999 weighs 1
The 348 weighs 57527
The 99999 weighs 1568
The 408 weighs 27
The 178 weighs 47924
The 99999 weighs 12094
The 99999 weighs 22
The 206 weighs 48375
The 99999 weighs 56027
The 99999 weighs 628
The 99999 weighs 440
The 348 weighs 1
The 40060 weighs 66
The 38060 weighs 20026
The 488 weighs 113
The 176 weighs 7282
The 440 weighs 33
The 99999 weighs 39293
The 348 weighs 14
The 99999 weighs 4
The 376 weighs 6932
The 99999 weighs 16
The 428 weighs 17532
The 99999 weighs 6171
The 99999 weighs 5
The 420 weighs 541
The 104 weighs 54
The 45300 weighs 51635
The 148 weighs 19
The 273 weighs 26238
The 99999 weighs 417
The 430 weighs 8
The 376 weighs 16535
The 99999 weighs 46175
The 29700 weighs 2767
The 408 weighs 220
The 314 weighs 1135
The 99999 weighs 1447
The 99999 weighs 573
The 428 weighs 37803
The 99999 weighs 16
The 99999 weighs 1
The 99999 weighs 3
The 99999 weighs 32421
The 99999 weighs 437
The 99999 weighs 59434
The 206 weighs 19767
The 496 weighs 32
The 99999 weighs 3391
The 428 weighs 24
The 99999 weighs 42887
The 288 weighs 165
The 184 weighs 284
The 99999 weighs 23
The 99999 weighs 14673
The 408 weighs 1826
The 348 weighs 31001
The 99999 weighs 11149
The 176 weighs 1
The 99999 weighs 896
The 406 weighs 12
The 41740 weighs 32422
The 266 weighs 110
The 408 weighs 48241
The 472 weighs 2908
The 99999 weighs 39269
The 99999 weighs 51
The 99999 weighs 48921
The 99999 weighs 3
The 176 weighs 1277
The 99999 weighs 2
The 99999 weighs 52771
The 99999 weighs 38832
The 99999 weighs 42322
The 220 weighs 1769
The 488 weighs 385
The 220 weighs 163999
The 99999 weighs 1396
The 99999 weighs 739
The 268 weighs 7476
The 99999 weighs 48903
The 99999 weighs 26
The 99999 weighs 177
The 348 weighs 5148
The 99999 weighs 99
The 500 weighs 43
The 148 weighs 673
The 184 weighs 11
The 99999 weighs 1
The 122 weighs 2
The 99999 weighs 3729
The 332 weighs 231
The 378 weighs 585
The 99999 weighs 2004
The 99999 weighs 2868
The 122 weighs 32
The 99999 weighs 381
The 408 weighs 2483
The 476 weighs 6957
The 40060 weighs 50461
The 99999 weighs 21
The 176 weighs 4307
The 428 weighs 50528
The 99999 weighs 42725
The 99999 weighs 934
The 99999 weighs 53677
The 536 weighs 2858
The 273 weighs 334
The 99999 weighs 8
The 99999 weighs 52858
The 406 weighs 4
The 378 weighs 23
The 400 weighs 497
The 99999 weighs 4
The 122 weighs 309
The 184 weighs 5
The 408 weighs 32
The 370 weighs 1
The 99999 weighs 44
The 348 weighs 2864
The 176 weighs 56073
The 380 weighs 25896
The 99999 weighs 3526
The 288 weighs 7
The 176 weighs 39255
The 314 weighs 24
The 12940 weighs 106
The 422 weighs 6620
The 38060 weighs 1047
The 378 weighs 2
The 12420 weighs 49533
The 99999 weighs 93
The 99999 weighs 246
The 350 weighs 10994
The 348 weighs 35296
The 440 weighs 1
The 99999 weighs 3
The 38060 weighs 34
The 408 weighs 1
The 406 weighs 601084
The 99999 weighs 51700
The 99999 weighs 53137
The 99999 weighs 331
The 99999 weighs 7
The 348 weighs 721
The 99999 weighs 59366
The 16700 weighs 3
The 99999 weighs 38
The 176 weighs 13735
The 408 weighs 25
The 99999 weighs 1756
The 348 weighs 27560
The 216 weighs 2
The 99999 weighs 1
The 38060 weighs 159
The 176 weighs 2
The 47900 weighs 1
The 99999 weighs 275
The 378 weighs 1544
The 178 weighs 10
The 500 weighs 4
The 440 weighs 1
The 294 weighs 41
The 99999 weighs 2085
The 408 weighs 7
The 314 weighs 378
The 99999 weighs 58398
The 273 weighs 219366
The 176 weighs 1331
The 99999 weighs 88
The 99999 weighs 40789
The 178 weighs 77
The 408 weighs 3
The 99999 weighs 24966
The 99999 weighs 3603
The 220 weighs 822
The 99999 weighs 31510
The 99999 weighs 257
The 176 weighs 26439
The 408 weighs 23301
The 408 weighs 43
The 45300 weighs 118
The 258 weighs 1539
The 416 weighs 70
The 348 weighs 2
The 99999 weighs 36963
The 348 weighs 1439
The 99999 weighs 2566
The 556 weighs 1292
The 99999 weighs 214109
The 99999 weighs 654
The 408 weighs 13
The 266 weighs 1
The 260 weighs 24
The 99999 weighs 47043
The 378 weighs 22
The 99999 weighs 241482
The 428 weighs 7
The 408 weighs 1
The 12580 weighs 209
The 176 weighs 29
The 348 weighs 1
The 220 weighs 2
The 99999 weighs 57847
The 99999 weighs 176
The 482 weighs 14821
The 348 weighs 9
The 440 weighs 5299
The 99999 weighs 1
The 476 weighs 64
The 99999 weighs 31499
The 38060 weighs 57
The 99999 weighs 1
The 488 weighs 49608
The 408 weighs 9970
The 206 weighs 171199
The 176 weighs 4323
The 99999 weighs 4159
The 420 weighs 5
The 12420 weighs 1
The 99999 weighs 73
The 99999 weighs 45
The 408 weighs 4
The 99999 weighs 42
The 99999 weighs 17
The 99999 weighs 1000
The 99999 weighs 1979
The 99999 weighs 9
The 47900 weighs 66
The 294 weighs 37
The 99999 weighs 1
The 99999 weighs 192
The 38060 weighs 9
The 99999 weighs 721
The 288 weighs 41775
The 348 weighs 33
The 500 weighs 29
The 99999 weighs 2
The 430 weighs 423
The 212 weighs 42340
The 99999 weighs 41789
The 99999 weighs 1
The 99999 weighs 3298
The 216 weighs 29
The 212 weighs 5
The 266 weighs 9
The 99999 weighs 7256
The 99999 weighs 5057
The 99999 weighs 36919
The 41740 weighs 2
The 430 weighs 68
The 198 weighs 77
The 348 weighs 435
The 176 weighs 68
The 496 weighs 540
The 99999 weighs 40400
The 370 weighs 276
The 99999 weighs 2724
The 99999 weighs 1
The 45300 weighs 17
The 370 weighs 3
The 408 weighs 1
The 206 weighs 437
The 99999 weighs 37829
The 420 weighs 3
The 148 weighs 18
The 408 weighs 1
The 122 weighs 20821
The 122 weighs 10
The 482 weighs 1
The 482 weighs 1016
The 99999 weighs 1891
The 99999 weighs 8
The 99999 weighs 49689
The 99999 weighs 15
The 99999 weighs 12236
The 41740 weighs 1045
The 378 weighs 12
The 294 weighs 81
The 99999 weighs 43691
The 176 weighs 311
The 148 weighs 5
The 41740 weighs 1816
The 408 weighs 13
The 348 weighs 21274
The 122 weighs 44697
The 370 weighs 1
The 99999 weighs 6634
The 294 weighs 3603
The 148 weighs 16134
The 99999 weighs 2565
The 99999 weighs 173494
The 99999 weighs 567
The 38060 weighs 330
The 122 weighs 1023
The 29700 weighs 31361
The 538 weighs 26
The 314 weighs 13
The 332 weighs 93677
The 500 weighs 113
The 430 weighs 3
The 206 weighs 1
The 99999 weighs 12818
The 99999 weighs 2309
The 198 weighs 4080
The 488 weighs 440
The 99999 weighs 36
The 428 weighs 1451
The 348 weighs 1
The 176 weighs 54122
The 370 weighs 1
The 198 weighs 26372
The 99999 weighs 42657
The 488 weighs 28555
The 12940 weighs 89
The 370 weighs 69
The 99999 weighs 10308
The 184 weighs 1111
The 148 weighs 236
The 428 weighs 11485
The 212 weighs 1396
The 99999 weighs 40659
The 148 weighs 11053
The 38060 weighs 49539
The 99999 weighs 2124
The 99999 weighs 5
The 47900 weighs 45
The 422 weighs 26441
The 99999 weighs 33860
The 500 weighs 19
The 408 weighs 1
The 99999 weighs 2
The 206 weighs 211
The 408 weighs 12379
The 220 weighs 7566
The 99999 weighs 13741
The 16700 weighs 6019
The 206 weighs 278
The 148 weighs 1484
The 288 weighs 39430
The 176 weighs 39228
The 12580 weighs 5
The 160 weighs 42
The 220 weighs 317
The 99999 weighs 18703
The 500 weighs 55
The 408 weighs 40352
The 148 weighs 1118
The 368 weighs 5
The 238 weighs 38
The 376 weighs 1979
The 99999 weighs 142
The 12580 weighs 1
The 428 weighs 33
The 99999 weighs 275
The 408 weighs 10
The 148 weighs 138
The 406 weighs 959
The 99999 weighs 44891
The 45300 weighs 1441
The 99999 weighs 248
The 176 weighs 17
The 348 weighs 1696
The 99999 weighs 769
The 312 weighs 3
The 99999 weighs 2858
The 198 weighs 3
The 538 weighs 23
The 99999 weighs 599
The 99999 weighs 3299
The 430 weighs 61394
The 99999 weighs 20933
The 428 weighs 55
The 376 weighs 1979
The 99999 weighs 220
The 99999 weighs 18771
The 408 weighs 5
The 216 weighs 47
The 348 weighs 3108
The 450 weighs 1030
The 99999 weighs 7
The 488 weighs 17434
The 368 weighs 3573
The 370 weighs 54
The 348 weighs 555
The 266 weighs 7
The 99999 weighs 315
The 176 weighs 2
The 99999 weighs 550
The 408 weighs 8
The 99999 weighs 94632
The 160 weighs 50061
The 99999 weighs 22724
The 482 weighs 18
The 206 weighs 4
The 482 weighs 336
The 380 weighs 504
The 488 weighs 11
The 99999 weighs 42785
The 41740 weighs 1
The 406 weighs 8372
The 408 weighs 14
The 500 weighs 14772
The 348 weighs 16
The 376 weighs 161612
The 29700 weighs 70
The 99999 weighs 2
The 99999 weighs 1
The 312 weighs 80
The 408 weighs 45
The 370 weighs 8
The 99999 weighs 25
The 500 weighs 41
The 198 weighs 55
The 99999 weighs 99571
|
boards/Pynq-Z2/mqttsn/notebooks/05_networking_acceleration.ipynb | ###Markdown
Networking Acceleration The Network IO Processor (IOP) enables raw access to the Ethernet interface from within Python. The usage is similar in many ways to sending and receiving Ethernet frames using raw sockets. In software, the code typically executes with about 10 ms between publish events. When accelerated using the PL, the code executes with about 120 us between publish events. 1. Downloading overlay Now let's download the overlay and do the necessary configurations.
###Code
from pynq_networking import MqttsnOverlay
from site import getsitepackages
import os
mqttsn_bit = os.path.join(getsitepackages()[0], 'pynq_networking',
'overlays', 'mqttsn', 'mqttsn.bit')
overlay = MqttsnOverlay(mqttsn_bit)
overlay.download()
import timeit
import logging
logging.getLogger("kamene.runtime").setLevel(logging.ERROR)
from kamene.all import *
from wurlitzer import sys_pipes
from pynq_networking.lib.network_iop import NetworkIOP
from pynq_networking.lib.slurper import PacketSlurper
from pynq_networking.lib.pynqsocket import L2PynqSocket
conf.L2PynqSocket = L2PynqSocket
###Output
_____no_output_____
###Markdown
2. Read the temperature sensor value Now let's read the temperature values from Pmod TMP2. In this example, Pmod TMP2 has to be plugged into the PMODB interface.
###Code
from pynq.lib.pmod import Pmod_TMP2
from pynq_networking.lib.accelerator import Accelerator
pmod_tmp2 = Pmod_TMP2(overlay.PMODB)
pkt_accel = Accelerator()
print("Reading TMP2 from MicroBlaze: {}".format(pmod_tmp2.read()))
with sys_pipes():
print("Reading TMP2 from SDSoC: {}".format(
pkt_accel.read_sensor(pmod_tmp2.microblaze)))
###Output
Reading TMP2 from MicroBlaze: 30.2
Reading TMP2 from SDSoC: 30.1875
###Markdown
Next we can test how fast we can read the temperature sensor.
###Code
count = 1000
start_time = timeit.default_timer()
for _ in range(count):
x = pmod_tmp2.read()
elapsed = timeit.default_timer() - start_time
print("Sensor performs " + str(count/elapsed)+" reads/second.")
print("Temperature is " + str(x) + " degrees.")
###Output
Sensor performs 1225.1861066887543 reads/second.
Temperature is 30.2 degrees.
###Markdown
3. Bring up interfaces and modules We can bring up a network interface for testing. For hardware acceleration, we need to inject the Linux kernel driver. The Python class `LinkManager` is a wrapper for the following commands:
```csh
chmod 777 ./kernel_module/*.sh
ifconfig br0:1 192.168.3.99
ifconfig br0:0 192.168.1.99
./kernel_module/link_up.sh
```
###Code
from pynq_networking import LinkManager
if_manager = LinkManager()
if_manager.if_up("br0:1", "192.168.3.99")
if_manager.if_up("br0:0", "192.168.1.99")
if_manager.kernel_up()
###Output
_____no_output_____
###Markdown
The kernel module only needs to be brought up 1 time after the board has been booted. 4. Setup brokerWe will use the IP associated with `br0:1` to set up the broker.For the network IOP, the packets have to be flushed first.
###Code
from pynq_networking import Broker
from pynq_networking import get_ip_string, get_mac_string
from pynq_networking.lib.network_iop import NetworkIOP
from pynq_networking.lib.mqttsn_hw import MQTT_Client_PL
serverIP = "192.168.3.99"
serverPort = 1884
broker_mqtt = Broker(ip_address=serverIP, mqttsn_port=serverPort)
broker_mqtt.open()
mynet = NetworkIOP()
mytmp = pmod_tmp2.microblaze
conf.L2PynqSocket().flush()
###Output
181 packets flushed
###Markdown
5. Publish events with accelerator We will now use the hardware accelerator to publish events. The Python API is shown below:
```Python
def publish_mmio(self, size, count, pl_mac_address, pl_ip_address,
                 server_ip_address, server_port_number, topic_id, qos,
                 verbose, net_iop, sensor_iop):
    """Publish data from the given temperature sensor to an MQTTSN server.

    This method will use the MMIO to control the accelerator.

    Parameters
    ----------
    size : int
        The size of frames to generate.
    count : int
        The number of publish events to complete.
    pl_mac_address : int/str
        The MAC Address of the PL accelerator (not the host MAC address).
    pl_ip_address : int/str
        The IP Address of the PL accelerator (not the host IP address).
    server_ip_address : int/str
        The IP Address of the MQTTSN server.
    server_port_number : int
        The port number of the MQTTSN server.
    topic_id : int
        The topic ID to publish on.
    qos : int
        The MQTTSN qos to use (0 means response is not required).
    verbose : int
        A non-zero value will get verbose debugging information.
    net_iop : NetworkIOP
        The network IOP object.
    sensor_iop : Pmod_TMP2
        The temperature sensor object.
    """
```
For example, we can call:
```python
from broker_client.accelerator import Accelerator

pkt_accel = Accelerator()
pkt_accel.publish_mmio(100, 5, local_mac, local_ip, serverIP, serverPort,
                       1, 0, 1, mynet, mytmp)
```
We can keep publishing events and see how fast we can publish them. The following cell uses a thin wrapper for the MQTT client. Internally, it calls the `Accelerator()` class and uses the `publish_mmio()` method. Users can also check the frame content using:
```python
frame = conf.L2PynqSocket().recv()
print(frame)
```
###Code
conf.L2PynqSocket().flush()
count = 500
with sys_pipes():
with MQTT_Client_PL(serverIP, serverPort, "client-hw") as client:
topicID = client.register("temperature")
client.publish_sw(topicID, "27.0")
conf.L2PynqSocket().flush()
start_time = timeit.default_timer()
client.publish_hw(mynet, mytmp, topicID, 0, range(count))
elapsed = timeit.default_timer() - start_time
print("HW publish speed: " + str(count/elapsed)+" packets/second.")
conf.L2PynqSocket().flush()
count = 500
with sys_pipes():
with MQTT_Client_PL(serverIP, serverPort, "client-sw") as client:
temp_topicID = client.register("temperature")
client.publish_sw(topicID, "27.0")
conf.L2PynqSocket().flush()
start_time = timeit.default_timer()
temperature = str(pmod_tmp2.read())
for i in range(count):
client.publish_sw(temp_topicID, temperature, qos=0)
elapsed = timeit.default_timer() - start_time
print("SW publish speed: " + str(count/elapsed)+" packets/second.")
###Output
195 packets flushed
MQTTSN: Ether / IP / UDP 192.168.3.99:1884 > 192.168.1.104:50000 / MQTTSN / MQTTSN_CONNACK / Padding
MQTTSN: Ether / IP / UDP 192.168.3.99:1884 > 192.168.1.104:50000 / MQTTSN / MQTTSN_REGACK / Padding
MQTTSN: Ether / IP / UDP 192.168.3.99:1884 > 192.168.1.104:50000 / MQTTSN / MQTTSN_PUBACK / Padding
3 packets flushed
SW publish speed: 46.874655421477996 packets/second.
###Markdown
7. Cleanup We can remove the kernel module and close the broker at the end.
###Code
if_manager.kernel_down()
if_manager.if_down('br0:0')
if_manager.if_down('br0:1')
broker_mqtt.close()
###Output
_____no_output_____ |
flowers_recognition/data_generator_demo.ipynb | ###Markdown
We need a bit more preparation, like normalizing the data and converting the labels to one-hot encoding.
###Code
def preprocessing(img,label):
img = cv2.resize(img,(Config.resize,Config.resize))
img = img/255
label = np_utils.to_categorical(label, Config.num_classes)
return img,label
def data_generator(samples, batch_size=32,shuffle_data=True,resize=224):
"""
Yields the next training batch.
Suppose `samples` is an array [[image1_filename,label1], [image2_filename,label2],...].
"""
num_samples = len(samples)
while True: # Loop forever so the generator never terminates
samples = shuffle(samples)
# Get index to start each batch: [0, batch_size, 2*batch_size, ..., max multiple of batch_size <= num_samples]
for offset in range(0, num_samples, batch_size):
# Get the samples you'll use in this batch
batch_samples = samples[offset:offset+batch_size]
# Initialise X_train and y_train arrays for this batch
X_train = []
y_train = []
# For each example
for batch_sample in batch_samples:
# Load image (X) and label (y)
img_name = batch_sample[0]
label = batch_sample[1]
img = cv2.imread(os.path.join(root_dir,img_name))
# apply any kind of preprocessing
img,label = preprocessing(img,label)
# Add example to arrays
X_train.append(img)
y_train.append(label)
# Make sure they're numpy arrays (as opposed to lists)
X_train = np.array(X_train)
y_train = np.array(y_train)
# The generator-y part: yield the next training batch
yield X_train, y_train
# this will create a generator object
train_datagen = data_generator(samples,batch_size=8)
x,y = next(train_datagen)
print ('x_shape: ', x.shape)
print ('labels shape: ', y.shape)
print ('labels: ', y)
###Output
x_shape: (8, 224, 224, 3)
labels shape: (8, 5)
labels: [[1. 0. 0. 0. 0.]
[0. 0. 0. 1. 0.]
[0. 0. 1. 0. 0.]
[0. 0. 0. 0. 1.]
[0. 1. 0. 0. 0.]
[0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1.]]
###Markdown
Now we have a data loader ready to serve us the data. Let's define a CNN model in Keras and train it
###Code
train_data_path = 'flowers_recognition_train.csv'
test_data_path = 'flowers_recognition_test.csv'
train_samples = load_samples(train_data_path)
test_samples = load_samples(test_data_path)
num_train_samples = len(train_samples)
num_test_samples = len(test_samples)
print ('number of train samples: ', num_train_samples)
print ('number of test samples: ', num_test_samples)
# Create generator
batch_size = Config.batch_size
train_generator = data_generator(train_samples, batch_size=32)
validation_generator = data_generator(test_samples, batch_size=32)
# import the necessary modules from the library
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten, Activation, MaxPooling2D, Dropout
input_shape = (Config.resize,Config.resize,3)
print (input_shape)
model = Sequential()
#filters,kernel_size,strides=(1, 1),padding='valid',data_format=None,dilation_rate=(1, 1),activation=None,use_bias=True,
#kernel_initializer='glorot_uniform',bias_initializer='zeros',kernel_regularizer=None,bias_regularizer=None,
#activity_regularizer=None,kernel_constraint=None,bias_constraint=None,
#pool_size=(2, 2), strides=None, padding='valid',data_format=None
model.add(Conv2D(32, (3,3),padding='same',input_shape=input_shape,name='conv2d_1'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2),name='maxpool2d_1'))
model.add(Conv2D(32, (3, 3),name='conv2d_2'))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2),name='maxpool2d_2'))
model.add(Dropout(0.5))
#model.add(Convolution2D(64, 3, 3))
#model.add(Activation('relu'))
#model.add(Convolution2D(64, 3, 3))
#model.add(Activation('relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))
#model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(Config.num_classes))
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit_generator(
train_generator,
steps_per_epoch=num_train_samples // batch_size,
epochs=Config.num_epochs,
validation_data=validation_generator,
validation_steps=num_test_samples // batch_size)
model.save_weights('first_try.h5')
###Output
Epoch 1/10
402/402 [==============================] - 1249s 3s/step - loss: 1.1912 - acc: 0.5431 - val_loss: 1.1057 - val_acc: 0.6333
Epoch 2/10
402/402 [==============================] - 1245s 3s/step - loss: 0.7368 - acc: 0.7438 - val_loss: 1.0109 - val_acc: 0.6667
Epoch 3/10
402/402 [==============================] - 10348s 26s/step - loss: 0.4385 - acc: 0.8529 - val_loss: 1.0953 - val_acc: 0.6467
Epoch 4/10
5/402 [..............................] - ETA: 21:18 - loss: 0.2822 - acc: 0.8750 |