path | concatenated_notebook |
---|---|
2-analysis-examples/5-osm-traces.ipynb | ###Markdown
OSM Traces (GPX files)[](https://mybinder.org/v2/gh/anitagraser/movingpandas-examples/main?filepath=2-analysis-examples/5-osm-traces.ipynb)This notebook illustrates the use of GPS traces shared publicly by OSM community members in GPX format. Source: https://www.openstreetmap.org/traces
###Code
import pandas as pd
import geopandas as gpd
import movingpandas as mpd
from os.path import exists
from urllib.request import urlretrieve
from shapely.geometry import Point, LineString, Polygon
from datetime import datetime, timedelta
import warnings
warnings.simplefilter("ignore")
mpd.__version__
###Output
_____no_output_____
###Markdown
Download OSM traces and generate a GeoDataFrame
###Code
def get_osm_traces(page=0, bbox='16.18,48.09,16.61,48.32'):
file = 'osm_traces.gpx'
url = f'https://api.openstreetmap.org/api/0.6/trackpoints?bbox={bbox}&page={page}'
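# Download only once: the cached file is reused even if bbox or page change, so delete it to force a refresh.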
if not exists(file):
urlretrieve(url, file)
gdf = gpd.read_file(file, layer='track_points')
# dropping empty columns
gdf.drop(columns=['ele', 'course', 'speed', 'magvar', 'geoidheight', 'name', 'cmt', 'desc',
'src', 'url', 'urlname', 'sym', 'type', 'fix', 'sat', 'hdop', 'vdop',
'pdop', 'ageofdgpsdata', 'dgpsid'], inplace=True)
gdf['t'] = pd.to_datetime(gdf['time'])
gdf.set_index('t', inplace=True)
return gdf
###Output
_____no_output_____
###Markdown
TrajectoryCollection from OSM traces GeoDataFrame
###Code
gdf = get_osm_traces()
osm_traces = mpd.TrajectoryCollection(gdf, 'track_fid')
print(f'The OSM traces download contains {len(osm_traces)} tracks')
for track in osm_traces: print(f'Track {track.id}: length={track.get_length():.0f}m')
###Output
_____no_output_____
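###Markdown
The empty `speed` column was dropped from the GPX points earlier, so there is no speed attribute to color by yet. As an optional step (assuming a MovingPandas version that provides `add_speed` on trajectory collections), speed can be derived from the point geometries and timestamps; if a later generalization step rebuilds the trajectories, the same call can simply be repeated afterwards.
###Code
# Compute speed from consecutive points; overwrite=True replaces any existing column.
# For this WGS84 data, MovingPandas reports speed in meters per second.
osm_traces.add_speed(overwrite=True)
osm_traces.trajectories[0].df.head()
###Output
_____no_output_____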
###Markdown
Generalizing and visualizingGeneralization is optional but speeds up rendering
###Code
osm_traces = mpd.MinTimeDeltaGeneralizer(osm_traces).generalize(tolerance=timedelta(minutes=1))
osm_traces.hvplot(title='OSM Traces', line_width=7, width=700, height=500)
osm_traces.get_trajectory(0).hvplot(title='Speed (m/s) along track', c='speed', cmap='RdYlBu',
line_width=7, width=700, height=500, tiles='CartoLight', colorbar=True)
###Output
_____no_output_____ |
nbs/73_callback.captum.ipynb | ###Markdown
CaptumCaptum is the Model Interpretation Library from PyTorch as available [here](https://captum.ai)To use this we need to install the package using `conda install captum -c pytorch` or `pip install captum`This is a callback to use Captum.
###Code
#export
from captum.attr import IntegratedGradients
from captum.attr import visualization as viz
from matplotlib.colors import LinearSegmentedColormap
#export
class CaptumCallback(Callback):
"Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def after_fit(self):
self.integrated_gradients = IntegratedGradients(self.model)
def visualize(self,inp_data,n_steps=200,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],outlier_perc=1):
dl = self.dls.test_dl([inp_data],with_labels=True, bs=1)
self.enc_inp,self.enc_preds= dl.one_batch()
dec_data=dl.decode((self.enc_inp,self.enc_preds))
self.dec_img,self.dec_pred=dec_data[0][0],dec_data[1][0]
self.colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
self.attributions_ig = self.integrated_gradients.attribute(self.enc_inp.to(self.dl.device), target=self.enc_preds, n_steps=n_steps)
default_cmap = LinearSegmentedColormap.from_list(cmap_name,
self.colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(self.attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(self.dec_img.numpy(), (1,2,0)),
methods=methods,
cmap=default_cmap,
show_colorbar=True,
signs=signs,
outlier_perc=outlier_perc, titles=[f'Original Image - ({self.dec_pred})', 'IG'])
from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(128))
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=CaptumCallback())
learn.fine_tune(1)
paths=list(path.iterdir())
index=random.randint(0,len(paths))
image_path=paths[index]
learn.captum.visualize(image_path,n_steps=1000)
###Output
_____no_output_____
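###Markdown
The `visualize` method above exposes Captum's rendering options as keyword arguments, so other attribution views can be requested without changing the callback. A small illustration, reusing the `learn` and `image_path` objects from the previous cells (the method and sign names are standard options of Captum's `visualize_image_attr_multiple`):
###Code
# Blend the attribution heat map over the input image and show absolute attribution strength.
learn.captum.visualize(image_path,
                       methods=['original_image', 'blended_heat_map'],
                       signs=['all', 'absolute_value'],
                       n_steps=200)
###Output
_____no_output_____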
###Markdown
CaptumCaptum is the Model Interpretation Library from PyTorch as available [here](https://captum.ai)To use this we need to install the package using `conda install captum -c pytorch` or `pip install captum`This is a callback to use Captum.
###Code
# export
# Dirty hack as json_clean doesn't support CategoryMap type
from ipykernel import jsonutil
_json_clean=jsonutil.json_clean
def json_clean(o):
o = list(o.items) if isinstance(o,CategoryMap) else o
return _json_clean(o)
jsonutil.json_clean = json_clean
#export
from captum.attr import IntegratedGradients
from captum.attr import visualization as viz
from matplotlib.colors import LinearSegmentedColormap
from captum.insights import AttributionVisualizer, Batch
from captum.insights.features import ImageFeature
#export
class IntegratedGradientsCallback(Callback):
"Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def after_fit(self):
self.integrated_gradients = IntegratedGradients(self.model)
def visualize(self, inp_data, n_steps=200, cmap_name='custom blue', colors=None, N=256,
methods=None, signs=None, outlier_perc=1):
if methods is None: methods=['original_image','heat_map']
if signs is None: signs=["all", "positive"]
dl = self.dls.test_dl(L(inp_data),with_labels=True, bs=1)
self.enc_inp,self.enc_preds= dl.one_batch()
dec_data=dl.decode((self.enc_inp,self.enc_preds))
self.dec_img,self.dec_pred=dec_data[0][0],dec_data[1][0]
self.colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
self.attributions_ig = self.integrated_gradients.attribute(self.enc_inp.to(self.dl.device), target=self.enc_preds, n_steps=n_steps)
default_cmap = LinearSegmentedColormap.from_list(cmap_name,
self.colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(self.attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(self.dec_img.numpy(), (1,2,0)),
methods=methods,
cmap=default_cmap,
show_colorbar=True,
signs=signs,
outlier_perc=outlier_perc, titles=[f'Original Image - ({self.dec_pred})', 'IG'])
from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(128))
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=IntegratedGradientsCallback())
learn.fine_tune(1)
paths=list(path.iterdir())
learn.integrated_gradients.visualize(paths,n_steps=1000)
#export
class CaptumInsightsCallback(Callback):
"Captum Insights Callback for Image Interpretation"
def __init__(self): pass
def _formatted_data_iter(self, dl, normalize_func):
dl_iter=iter(dl)
while True:
images,labels=next(dl_iter)
images=normalize_func.decode(images).to(dl.device)
yield Batch(inputs=images, labels=labels)
def visualize(self, inp_data, debug=True):
_baseline_func= lambda o: o*0
_get_vocab = lambda vocab: list(map(str,vocab)) if isinstance(vocab[0],bool) else vocab
dl = self.dls.test_dl(L(inp_data),with_labels=True, bs=4)
normalize_func= next((func for func in dl.after_batch if type(func)==Normalize),noop)
visualizer = AttributionVisualizer(
models=[self.model],
score_func=lambda o: torch.nn.functional.softmax(o, 1),
classes=_get_vocab(dl.vocab),
features=[
ImageFeature(
"Image",
baseline_transforms=[_baseline_func],
input_transforms=[normalize_func],
)
],
dataset=self._formatted_data_iter(dl,normalize_func)
)
visualizer.render(debug=debug)
from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, get_image_files(path), valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(128))
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=CaptumInsightsCallback())
learn.fine_tune(1)
paths=list(path.iterdir())
learn.captum_insights.visualize(paths)
###Output
_____no_output_____
###Markdown
CaptumCaptum is the Model Interpretation Library from PyTorch as available [here](https://captum.ai)To use this we need to install the package using `conda install captum -c pytorch` or `pip install captum`This is a callback to use Captum.
###Code
# export
# Dirty hack as json_clean doesn't support CategoryMap type
from ipykernel import jsonutil
_json_clean=jsonutil.json_clean
def json_clean(o):
o = list(o.items) if isinstance(o,CategoryMap) else o
return _json_clean(o)
jsonutil.json_clean = json_clean
#export
from captum.attr import IntegratedGradients,NoiseTunnel,GradientShap,Occlusion
from captum.attr import visualization as viz
from matplotlib.colors import LinearSegmentedColormap
from captum.insights import AttributionVisualizer, Batch
from captum.insights.features import ImageFeature
###Output
_____no_output_____
###Markdown
In all this notebook, we will use the following data:
###Code
from fastai2.vision.all import *
path = untar_data(URLs.PETS)/'images'
fnames = get_image_files(path)
def is_cat(x): return x[0].isupper()
dls = ImageDataLoaders.from_name_func(
path, fnames, valid_pct=0.2, seed=42,
label_func=is_cat, item_tfms=Resize(128))
from random import randint
###Output
_____no_output_____
###Markdown
Gradient Based Attribution Integrated Gradients Callback The Distill Article [here](https://distill.pub/2020/attribution-baselines/) provides a good overview of what baseline image to choose. We can try them one by one.
###Code
#export
class IntegratedGradientsCallback(Callback):
"Integrated Gradient Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def visualize(self,inp, baseline_type='zeros',n_steps=1000 ,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],outlier_perc=1):
tls = L([TfmdLists(inp, t) for t in L(ifnone(self.dl.tfms,[None]))])
inp_data=list(zip(*(tls[0],tls[1])))[0]
return self._visualize(inp_data,n_steps,cmap_name,colors,N,methods,signs,outlier_perc,baseline_type)
def get_baseline_img(self, img_tensor,baseline_type):
if baseline_type=='zeros': return img_tensor*0
if baseline_type=='uniform': return torch.rand(img_tensor.shape)
def _visualize(self,inp_data,n_steps=200,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],outlier_perc=1,baseline_type='zeros'):
self._integrated_gradients = self._integrated_gradients if hasattr(self,'_integrated_gradients') else IntegratedGradients(self.model)
dl = self.dls
dec_data=dl.after_item(inp_data)
dec_pred=inp_data[1]
dec_img=dec_data[0]
enc_inp,enc_preds=dl.after_batch(to_device(dl.before_batch(dec_data),dl.device))
baseline=self.get_baseline_img(enc_inp,baseline_type).to(dl.device)
colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
attributions_ig = self._integrated_gradients.attribute(enc_inp,baseline, target=enc_preds, n_steps=n_steps)
default_cmap = LinearSegmentedColormap.from_list(cmap_name,colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(dec_img.numpy(), (1,2,0)),
methods=methods,
cmap=default_cmap,
show_colorbar=True,
signs=signs,
outlier_perc=outlier_perc, titles=[f'Original Image - ({dec_pred})', 'IG'])
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=IntegratedGradientsCallback())
learn.fine_tune(1)
idx=randint(0,len(fnames))
learn.integrated_gradients.visualize(fnames[idx],baseline_type='uniform')
###Output
_____no_output_____
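###Markdown
The callback defined above also accepts the constant-black baseline discussed in the Distill article, so the two baseline choices can be compared on the same image:
###Code
# Same image as the previous cell, but with an all-zeros baseline instead of uniform noise.
learn.integrated_gradients.visualize(fnames[idx], baseline_type='zeros')
###Output
_____no_output_____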
###Markdown
Noise Tunnel
###Code
#export
class NoiseTunnelCallback(Callback):
"Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def after_fit(self):
self.integrated_gradients = IntegratedGradients(self.model)
self._noise_tunnel= NoiseTunnel(self.integrated_gradients)
def visualize(self,inp_data,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],nt_type='smoothgrad'):
dl = self.dls.test_dl(L(inp_data),with_labels=True, bs=1)
self.enc_inp,self.enc_preds= dl.one_batch()
dec_data=dl.decode((self.enc_inp,self.enc_preds))
self.dec_img,self.dec_pred=dec_data[0][0],dec_data[1][0]
self.colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
attributions_ig_nt = self._noise_tunnel.attribute(self.enc_inp.to(self.dl.device), n_samples=1, nt_type=nt_type, target=self.enc_preds)
default_cmap = LinearSegmentedColormap.from_list(cmap_name,
self.colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(self.dec_img.numpy(), (1,2,0)),
methods,signs,
cmap=default_cmap,
show_colorbar=True,titles=[f'Original Image - ({self.dec_pred})', 'Noise Tunnel'])
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=NoiseTunnelCallback())
learn.fine_tune(1)
idx=randint(0,len(fnames))
learn.noise_tunnel.visualize(fnames[idx], nt_type='smoothgrad')
###Output
_____no_output_____
###Markdown
Occlusion
###Code
#export
class OcclusionCallback(Callback):
"Captum Callback for Resnet Interpretation"
def __init__(self):
pass
def after_fit(self):
self._occlusion = Occlusion(self.model)
def _formatted_data_iter(self,dl):
normalize_func= next((func for func in dl.after_batch if type(func)==Normalize),noop)
dl_iter=iter(dl)
while True:
images,labels=next(dl_iter)
images=normalize_func.decode(images).to(dl.device)
return images,labels
def visualize(self,inp_data,cmap_name='custom blue',colors=None,N=256,methods=['original_image','heat_map'],signs=["all", "positive"],strides = (3, 4, 4), sliding_window_shapes=(3,15, 15), outlier_perc=2):
dl = self.dls.test_dl(L(inp_data),with_labels=True, bs=1)
self.dec_img,self.dec_pred=self._formatted_data_iter(dl)
attributions_occ = self._occlusion.attribute(self.dec_img,
strides = strides,
target=self.dec_pred,
sliding_window_shapes=sliding_window_shapes,
baselines=0)
self.colors = [(0, '#ffffff'),(0.25, '#000000'),(1, '#000000')] if colors is None else colors
default_cmap = LinearSegmentedColormap.from_list(cmap_name,
self.colors, N=N)
_ = viz.visualize_image_attr_multiple(np.transpose(attributions_occ.squeeze().cpu().detach().numpy(), (1,2,0)),
np.transpose(self.dec_img.squeeze().cpu().numpy(), (1,2,0)),methods,signs,
cmap=default_cmap,
show_colorbar=True,
outlier_perc=outlier_perc,titles=[f'Original Image - ({self.dec_pred.cpu().item()})', 'Occlusion']
)
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=OcclusionCallback())
learn.fine_tune(1)
idx=randint(0,len(fnames))
learn.occlusion.visualize(fnames[idx])
###Output
_____no_output_____
###Markdown
Captum Insights Callback
###Code
#export
class CaptumInsightsCallback(Callback):
"Captum Insights Callback for Image Interpretation"
def __init__(self): pass
def _formatted_data_iter(self,dl,normalize_func):
dl_iter=iter(dl)
while True:
images,labels=next(dl_iter)
images=normalize_func.decode(images).to(dl.device)
yield Batch(inputs=images, labels=labels)
def visualize(self,inp_data,debug=True):
_baseline_func= lambda o: o*0
_get_vocab = lambda vocab: list(map(str,vocab)) if isinstance(vocab[0],bool) else vocab
dl = self.dls.test_dl(L(inp_data),with_labels=True, bs=4)
normalize_func= next((func for func in dl.after_batch if type(func)==Normalize),noop)
visualizer = AttributionVisualizer(
models=[self.model],
score_func=lambda o: torch.nn.functional.softmax(o, 1),
classes=_get_vocab(dl.vocab),
features=[
ImageFeature(
"Image",
baseline_transforms=[_baseline_func],
input_transforms=[normalize_func],
)
],
dataset=self._formatted_data_iter(dl,normalize_func)
)
visualizer.render(debug=debug)
learn = cnn_learner(dls, resnet34, metrics=error_rate,cbs=CaptumInsightsCallback())
learn.fine_tune(1)
learn.captum_insights.visualize(fnames)
###Output
_____no_output_____ |
assets/posts/2020-02-10-python-barplot/.ipynb_checkpoints/Untitled-checkpoint.ipynb | ###Markdown
How to draw a bar chart (Bar Chart) with pandas, matplotlib, and seaborn. Bar charts come up constantly in visualization work, but every search turns up a different method, so this notebook collects the main options in one place.
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
import random
np.random.seed(seed=1)
group_list = ['A','B','C','D']
n_size = 20
group = [random.choice(group_list) for i in range(n_size)]
xval = np.random.poisson(lam=10,size=n_size)
label = np.random.binomial(n=1, p=0.5, size=n_size)
label = list(map(str, label))
df = pd.DataFrame({'xval':xval, 'group':group, 'label':label})
df.head()
df_by_group = df.groupby(['group'])['xval'].sum()
df_by_group_label = df.groupby(['group','label'])['xval'].sum()
df_by_group
df_by_group_label
###Output
_____no_output_____
###Markdown
1. pandas > __DataFrame.plot.bar(self, x=None, y=None, **kwargs)__ > x: xlabel or position, optional > y: ylabel or position, optional When there is a single group
###Code
df_by_group = df_by_group.reset_index()
df_by_group.plot.bar(x='group',y='xval',rot=0)
df_by_group.plot.barh(x='group',y='xval',rot=0)
###Output
_____no_output_____
###Markdown
When there are two groups - reshape the grouped summary table into a pivot table
###Code
df_by_group_label = df_by_group_label.reset_index()
df_pivot = df_by_group_label.pivot(index='group',columns='label',values='xval')
df_pivot
df_pivot.plot.bar(rot=0)
df_pivot.plot.bar(stacked=True, rot=0)
###Output
_____no_output_____
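###Markdown
The same pivoted frame can also be drawn horizontally, which reads better when group names are long:
###Code
df_pivot.plot.barh(stacked=True, rot=0)
###Output
_____no_output_____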
###Markdown
2. matplotlib > __matplotlib.pyplot.bar(x, height, width=0.8, bottom=None, *, align='center', data=None, **kwargs)__ > x : sequence of scalars > height : scalar or sequence of scalars > width : scalar or array-like, optional > bottom : scalar or array-like, optional When there is a single group
###Code
df_by_group = df.groupby(['group'])['xval'].sum()
df_by_group
label = df_by_group.index
index = np.arange(len(label)) # 0,1,2,3
plt.bar(index, df_by_group)
plt.xticks(index, label, fontsize=15) # set the group names as tick labels
plt.barh(index, df_by_group)
plt.yticks(index, label, fontsize=15)
###Output
_____no_output_____
###Markdown
When there are two groups - use the 'bottom' or 'width' option of plt.bar - a separate DataFrame has to be defined for each label of the second-level group
###Code
df_by_group_by0 = df[df['label']=='0'].groupby(['group'])['xval'].sum()
df_by_group_by1 = df[df['label']=='1'].groupby(['group'])['xval'].sum()
label = df.group.unique()
label = sorted(label)
index = np.arange(len(label))
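# Stacked bars: plot label '0' first, then stack label '1' on top of it via the bottom= argument.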
p1 = plt.bar(index,df_by_group_by0, color='red', alpha=0.5)
p2 = plt.bar(index,df_by_group_by1, color='blue', alpha=0.5,
bottom=df_by_group_by0)
plt.xticks(index,label)
plt.legend((p1[0], p2[0]), ('0', '1'), fontsize=15)
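# Grouped (side-by-side) bars: narrow each bar to width=0.4 and shift the second series by the bar width.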
p1 = plt.bar(index,df_by_group_by0, color='red', alpha=0.5,
width=0.4)
p2 = plt.bar(index+0.4,df_by_group_by1, color='blue', alpha=0.5,
width=0.4)
plt.xticks(index,label)
plt.legend((p1[0], p2[0]), ('0', '1'), fontsize=15)
###Output
_____no_output_____
###Markdown
* Sorting a list of Korean strings [reference](https://hashcode.co.kr/questions/1058/%EB%A6%AC%EC%8A%A4%ED%8A%B8%EB%A5%BC-%EC%82%AC%EC%A0%84%EC%88%9C%EC%9C%BC%EB%A1%9C-%EC%A0%95%EB%A0%AC%ED%95%98%EB%A0%A4%EA%B3%A0-%ED%95%A9%EB%8B%88%EB%8B%A4)
###Code
import locale
import functools
mylist = ["사과", "바나나", "딸기", "포도"]  # Korean strings: apple, banana, strawberry, grape
locale.setlocale(locale.LC_ALL, '') # use the system (Korean) locale for string collation
sortedByLocale = sorted(mylist, key=functools.cmp_to_key(locale.strcoll))
sortedByLocale
###Output
_____no_output_____
###Markdown
3. seaborn > __seaborn.barplot(x=None, y=None, hue=None, data=None, order=None, hue_order=None, estimator=, ci=95, n_boot=1000, units=None, seed=None, orient=None, color=None, palette=None, saturation=0.75, errcolor='.26', errwidth=None, capsize=None, dodge=True, ax=None, **kwargs)__ > x, y, hue: names of variables in data or vector data, optional > data: DataFrame, array, or list of arrays, optional > dodge: bool, optional (When hue nesting is used, whether elements should be shifted along the categorical axis.) When there is a single group
###Code
df_by_group = df.groupby(['group'])['xval'].sum().reset_index()
sns.barplot(x='group', y='xval', data=df_by_group)
###Output
_____no_output_____
###Markdown
When there are two groups
###Code
df_by_group_label = df.groupby(['group','label'])['xval'].sum().reset_index()
sns.barplot(x='group', y='xval', hue='label',data=df_by_group_label )
df_by_group_by0 = df[df['label']=='0'].groupby(['group'])['xval'].sum().reset_index()
df_by_group_by1 = df[df['label']=='1'].groupby(['group'])['xval'].sum().reset_index()
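# Overlay trick for a stacked look: draw the per-group totals first, then draw the label-'0' bars on top so the visible remainder represents label '1'.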
sns.barplot(x='group', y='xval', data=df_by_group,color="red",alpha=0.5)
sns.barplot(x='group', y='xval', data=df_by_group_by0 ,color="blue",alpha=0.5)
###Output
_____no_output_____
###Markdown
4. Using R's 'ggplot2' from Python (plotnine)
###Code
%matplotlib inline
import plotnine as p9
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval'))+p9.geom_bar(stat='identity')
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity')
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity')+p9.coord_flip()
p9.ggplot(data=df,mapping=p9.aes(x='group',y='xval',fill='label'))+p9.geom_bar(stat='identity',position='dodge')
###Output
_____no_output_____ |
scripts/watershed/Watershed Transform 3D Sample Based.ipynb | ###Markdown
Watershed Distance Transform for 3D Data---Implementation of papers:[Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf)[Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829)
###Code
import os
import errno
import numpy as np
import deepcell
###Output
Using TensorFlow backend.
###Markdown
Load the Training Data
###Code
# Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
###Output
Downloading data from https://deepcell-data.s3.amazonaws.com/nuclei/mousebrain.npz
1730158592/1730150850 [==============================] - 75s 0us/step
X.shape: (176, 15, 256, 256, 1)
y.shape: (176, 15, 256, 256, 1)
###Markdown
Set up filepath constants
###Code
# the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
###Output
_____no_output_____
###Markdown
Set up training parameters
###Code
from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler
fgbg_model_name = 'sample_fgbg_3d_model'
sample_model_name = 'sample_watershed_3d_model'
n_epoch = 1 # Number of training epochs
test_size = .10 # % of data saved as test
norm_method = 'std' # data normalization
receptive_field = 61 # should be adjusted for the scale of the data
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)
# Transformation settings
transform = 'watershed'
distance_bins = 4 # number of distance classes
erosion_width = 0 # erode edges
# 3D Settings
frames_per_batch = 3
norm_method = 'whole_image' # data normalization - `whole_image` for 3d conv
# Sample mode settings
batch_size = 64 # number of images per batch (should be 2 ^ n)
win = (receptive_field - 1) // 2 # sample window size
win_z = (frames_per_batch - 1) // 2 # z window size
balance_classes = True # sample each class equally
max_class_samples = 1e7 # max number of samples per class.
###Output
_____no_output_____
###Markdown
First, create a foreground/background separation model Instantiate the fgbg model
###Code
from deepcell import model_zoo
fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=2,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the model fgbg model
###Code
from deepcell.training import train_model_sample
fgbg_model = train_model_sample(
model=fgbg_model,
dataset=DATA_FILE, # full path to npz file
model_name=fgbg_model_name,
window_size=(win, win, win_z),
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
transform='fgbg',
n_epoch=n_epoch,
model_dir=MODEL_DIR,
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 2)
Number of Classes: 2
Training on 1 GPUs
Epoch 1/1
265793/265794 [============================>.] - ETA: 0s - loss: 0.1711 - acc: 0.9354
Epoch 00001: val_loss improved from inf to 0.26627, saving model to /data/models/sample_fgbg_3d_model.h5
265794/265794 [==============================] - 23715s 89ms/step - loss: 0.1711 - acc: 0.9354 - val_loss: 0.2663 - val_acc: 0.9266
###Markdown
Next, Create a model for the watershed energy transform Instantiate the deepcell transform model
###Code
from deepcell import model_zoo
watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=distance_bins,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the watershed transform model
###Code
from deepcell.training import train_model_sample
watershed_model = train_model_sample(
model=watershed_model,
dataset=DATA_FILE, # full path to npz file
model_name=sample_model_name,
window_size=(win, win, win_z),
transform='watershed',
distance_bins=distance_bins,
erosion_width=erosion_width,
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
n_epoch=n_epoch,
model_dir=MODEL_DIR,
expt='sample_watershed',
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 4)
Number of Classes: 4
Training on 1 GPUs
Epoch 1/1
23577/23578 [============================>.] - ETA: 0s - loss: 0.6835 - acc: 0.6812
Epoch 00001: val_loss improved from inf to 0.39299, saving model to /data/models/sample_watershed_3d_model.h5
23578/23578 [==============================] - 5415s 230ms/step - loss: 0.6835 - acc: 0.6812 - val_loss: 0.3930 - val_acc: 0.9062
###Markdown
Run the modelThe model was trained on small samples of data of shape `(receptive_field, receptive_field)`.in order to process full-sized images, the trained weights will be saved and loaded into a new model with `dilated=True` and proper `input_shape`. Save weights of trained models
###Code
fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name))
fgbg_model.save_weights(fgbg_weights_file)
watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(sample_model_name))
watershed_model.save_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Initialize dilated models and load the weights
###Code
from deepcell import model_zoo
run_fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=2,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_fgbg_model.load_weights(fgbg_weights_file)
run_watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=distance_bins,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_watershed_model.load_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Make predictions on test data
###Code
test_images = run_watershed_model.predict(X_test[:4])
test_images_fgbg = run_fgbg_model.predict(X_test[:4])
print('watershed transform shape:', test_images.shape)
print('segmentation mask shape:', test_images_fgbg.shape)
###Output
watershed transform shape: (4, 15, 256, 256, 4)
segmentation mask shape: (4, 15, 256, 256, 2)
###Markdown
Watershed post-processing
###Code
argmax_images = []
for i in range(test_images.shape[0]):
max_image = np.argmax(test_images[i], axis=-1)
argmax_images.append(max_image)
argmax_images = np.array(argmax_images)
argmax_images = np.expand_dims(argmax_images, axis=-1)
print('watershed argmax shape:', argmax_images.shape)
# threshold the foreground/background
# and remove back ground from watershed transform
threshold = 0.8
fg_thresh = test_images_fgbg[..., 1] > threshold
fg_thresh = np.expand_dims(fg_thresh, axis=-1)
argmax_images_post_fgbg = argmax_images * fg_thresh
# Apply watershed method with the distance transform as seed
from skimage.measure import label
from skimage.morphology import watershed
from skimage.feature import peak_local_max
watershed_images = []
for i in range(argmax_images_post_fgbg.shape[0]):
image = fg_thresh[i, ..., 0]
distance = argmax_images_post_fgbg[i, ..., 0]
local_maxi = peak_local_max(test_images[i, ..., -1],
min_distance=15,
exclude_border=False,
indices=False,
labels=image)
markers = label(local_maxi)
segments = watershed(-distance, markers, mask=image)
watershed_images.append(segments)
watershed_images = np.array(watershed_images)
watershed_images = np.expand_dims(watershed_images, axis=-1)
# Plot the results
import matplotlib.pyplot as plt
index = np.random.randint(low=0, high=watershed_images.shape[0])
frame = np.random.randint(low=0, high=watershed_images.shape[1])
print('Image:', index)
print('Frame:', frame)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(X_test[index, frame, ..., 0])
ax[0].set_title('Source Image')
ax[1].imshow(test_images_fgbg[index, frame, ..., 1])
ax[1].set_title('Segmentation Prediction')
ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet')
ax[2].set_title('Thresholded Segmentation')
ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet')
ax[3].set_title('Watershed Transform')
ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet')
ax[4].set_title('Watershed Transform w/o Background')
ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet')
ax[5].set_title('Watershed Segmentation')
fig.tight_layout()
plt.show()
from deepcell.utils.plot_utils import get_js_video
from IPython.display import HTML
HTML(get_js_video(watershed_images, batch=0, channel=0))
###Output
_____no_output_____
###Markdown
Watershed Distance Transform for 3D Data---Implementation of papers:[Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf)[Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829)
###Code
import os
import errno
import numpy as np
import deepcell
###Output
Using TensorFlow backend.
###Markdown
Load the Training Data
###Code
# Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
###Output
Downloading data from https://deepcell-data.s3.amazonaws.com/nuclei/mousebrain.npz
1730158592/1730150850 [==============================] - 75s 0us/step
X.shape: (176, 15, 256, 256, 1)
y.shape: (176, 15, 256, 256, 1)
###Markdown
Set up filepath constants
###Code
# the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
###Output
_____no_output_____
###Markdown
Set up training parameters
###Code
from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler
fgbg_model_name = 'sample_fgbg_3d_model'
sample_model_name = 'sample_watershed_3d_model'
n_epoch = 1 # Number of training epochs
test_size = .10 # % of data saved as test
norm_method = 'std' # data normalization
receptive_field = 61 # should be adjusted for the scale of the data
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)
# Transformation settings
transform = 'watershed'
distance_bins = 4 # number of distance classes
erosion_width = 0 # erode edges
# 3D Settings
frames_per_batch = 3
norm_method = 'whole_image' # data normalization - `whole_image` for 3d conv
# Sample mode settings
batch_size = 64 # number of images per batch (should be 2 ^ n)
win = (receptive_field - 1) // 2 # sample window size
win_z = (frames_per_batch - 1) // 2 # z window size
balance_classes = True # sample each class equally
max_class_samples = 1e7 # max number of samples per class.
###Output
_____no_output_____
###Markdown
First, create a foreground/background separation model Instantiate the fgbg model
###Code
from deepcell import model_zoo
fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=2,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the model fgbg model
###Code
from deepcell.training import train_model_sample
fgbg_model = train_model_sample(
model=fgbg_model,
dataset=DATA_FILE, # full path to npz file
model_name=fgbg_model_name,
test_size=test_size,
window_size=(win, win, win_z),
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
transform='fgbg',
n_epoch=n_epoch,
model_dir=MODEL_DIR,
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 2)
Number of Classes: 2
Training on 1 GPUs
Epoch 1/1
265793/265794 [============================>.] - ETA: 0s - loss: 0.1711 - acc: 0.9354
Epoch 00001: val_loss improved from inf to 0.26627, saving model to /data/models/sample_fgbg_3d_model.h5
265794/265794 [==============================] - 23715s 89ms/step - loss: 0.1711 - acc: 0.9354 - val_loss: 0.2663 - val_acc: 0.9266
###Markdown
Next, Create a model for the watershed energy transform Instantiate the deepcell transform model
###Code
from deepcell import model_zoo
watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=distance_bins,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the watershed transform model
###Code
from deepcell.training import train_model_sample
watershed_model = train_model_sample(
model=watershed_model,
dataset=DATA_FILE, # full path to npz file
model_name=sample_model_name,
test_size=test_size,
window_size=(win, win, win_z),
transform='watershed',
distance_bins=distance_bins,
erosion_width=erosion_width,
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
n_epoch=n_epoch,
model_dir=MODEL_DIR,
expt='sample_watershed',
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 4)
Number of Classes: 4
Training on 1 GPUs
Epoch 1/1
23577/23578 [============================>.] - ETA: 0s - loss: 0.6835 - acc: 0.6812
Epoch 00001: val_loss improved from inf to 0.39299, saving model to /data/models/sample_watershed_3d_model.h5
23578/23578 [==============================] - 5415s 230ms/step - loss: 0.6835 - acc: 0.6812 - val_loss: 0.3930 - val_acc: 0.9062
###Markdown
Run the modelThe model was trained on small samples of data of shape `(receptive_field, receptive_field)`.in order to process full-sized images, the trained weights will be saved and loaded into a new model with `dilated=True` and proper `input_shape`. Save weights of trained models
###Code
fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name))
fgbg_model.save_weights(fgbg_weights_file)
watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(sample_model_name))
watershed_model.save_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Initialize dilated models and load the weights
###Code
from deepcell import model_zoo
run_fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=2,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_fgbg_model.load_weights(fgbg_weights_file)
run_watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=distance_bins,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_watershed_model.load_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Make predictions on test data
###Code
test_images = run_watershed_model.predict(X_test[:4])
test_images_fgbg = run_fgbg_model.predict(X_test[:4])
print('watershed transform shape:', test_images.shape)
print('segmentation mask shape:', test_images_fgbg.shape)
###Output
watershed transform shape: (4, 15, 256, 256, 4)
segmentation mask shape: (4, 15, 256, 256, 2)
###Markdown
Watershed post-processing
###Code
argmax_images = []
for i in range(test_images.shape[0]):
max_image = np.argmax(test_images[i], axis=-1)
argmax_images.append(max_image)
argmax_images = np.array(argmax_images)
argmax_images = np.expand_dims(argmax_images, axis=-1)
print('watershed argmax shape:', argmax_images.shape)
# threshold the foreground/background
# and remove back ground from watershed transform
threshold = 0.8
fg_thresh = test_images_fgbg[..., 1] > threshold
fg_thresh = np.expand_dims(fg_thresh, axis=-1)
argmax_images_post_fgbg = argmax_images * fg_thresh
# Apply watershed method with the distance transform as seed
from skimage.measure import label
from skimage.morphology import watershed
from skimage.feature import peak_local_max
watershed_images = []
for i in range(argmax_images_post_fgbg.shape[0]):
image = fg_thresh[i, ..., 0]
distance = argmax_images_post_fgbg[i, ..., 0]
local_maxi = peak_local_max(test_images[i, ..., -1],
min_distance=15,
exclude_border=False,
indices=False,
labels=image)
markers = label(local_maxi)
segments = watershed(-distance, markers, mask=image)
watershed_images.append(segments)
watershed_images = np.array(watershed_images)
watershed_images = np.expand_dims(watershed_images, axis=-1)
# Plot the results
import matplotlib.pyplot as plt
index = np.random.randint(low=0, high=watershed_images.shape[0])
frame = np.random.randint(low=0, high=watershed_images.shape[1])
print('Image:', index)
print('Frame:', frame)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(X_test[index, frame, ..., 0])
ax[0].set_title('Source Image')
ax[1].imshow(test_images_fgbg[index, frame, ..., 1])
ax[1].set_title('Segmentation Prediction')
ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet')
ax[2].set_title('Thresholded Segmentation')
ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet')
ax[3].set_title('Watershed Transform')
ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet')
ax[4].set_title('Watershed Transform w/o Background')
ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet')
ax[5].set_title('Watershed Segmentation')
fig.tight_layout()
plt.show()
from deepcell.utils.plot_utils import get_js_video
from IPython.display import HTML
HTML(get_js_video(watershed_images, batch=0, channel=0))
###Output
_____no_output_____
###Markdown
Watershed Distance Transform for 3D Data---Implementation of papers:[Deep Watershed Transform for Instance Segmentation](http://openaccess.thecvf.com/content_cvpr_2017/papers/Bai_Deep_Watershed_Transform_CVPR_2017_paper.pdf)[Learn to segment single cells with deep distance estimator and deep cell detector](https://arxiv.org/abs/1803.10829)
###Code
import os
import errno
import numpy as np
import deepcell
###Output
Using TensorFlow backend.
###Markdown
Load the Training Data
###Code
# Download the data (saves to ~/.keras/datasets)
filename = 'mousebrain.npz'
test_size = 0.1 # % of data saved as test
seed = 0 # seed for random train-test split
(X_train, y_train), (X_test, y_test) = deepcell.datasets.mousebrain.load_data(filename, test_size=test_size, seed=seed)
print('X.shape: {}\ny.shape: {}'.format(X_train.shape, y_train.shape))
###Output
Downloading data from https://deepcell-data.s3.amazonaws.com/nuclei/mousebrain.npz
1730158592/1730150850 [==============================] - 75s 0us/step
X.shape: (176, 15, 256, 256, 1)
y.shape: (176, 15, 256, 256, 1)
###Markdown
Set up filepath constants
###Code
# the path to the data file is currently required for `train_model_()` functions
# change DATA_DIR if you are not using `deepcell.datasets`
DATA_DIR = os.path.expanduser(os.path.join('~', '.keras', 'datasets'))
# DATA_FILE should be a npz file, preferably from `make_training_data`
DATA_FILE = os.path.join(DATA_DIR, filename)
# confirm the data file is available
assert os.path.isfile(DATA_FILE)
# Set up other required filepaths
# If the data file is in a subdirectory, mirror it in MODEL_DIR and LOG_DIR
PREFIX = os.path.relpath(os.path.dirname(DATA_FILE), DATA_DIR)
ROOT_DIR = '/data' # TODO: Change this! Usually a mounted volume
MODEL_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'models', PREFIX))
LOG_DIR = os.path.abspath(os.path.join(ROOT_DIR, 'logs', PREFIX))
# create directories if they do not exist
for d in (MODEL_DIR, LOG_DIR):
try:
os.makedirs(d)
except OSError as exc: # Guard against race condition
if exc.errno != errno.EEXIST:
raise
###Output
_____no_output_____
###Markdown
Set up training parameters
###Code
from tensorflow.keras.optimizers import SGD
from deepcell.utils.train_utils import rate_scheduler
fgbg_model_name = 'sample_fgbg_3d_model'
sample_model_name = 'sample_watershed_3d_model'
n_epoch = 1 # Number of training epochs
norm_method = 'std' # data normalization
receptive_field = 61 # should be adjusted for the scale of the data
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
lr_sched = rate_scheduler(lr=0.01, decay=0.99)
# Transformation settings
transform = 'watershed'
distance_bins = 4 # number of distance classes
erosion_width = 1 # erode edges, improves segmentation when cells are close
# 3D Settings
frames_per_batch = 3
norm_method = 'whole_image' # data normalization - `whole_image` for 3d conv
# Sample mode settings
batch_size = 64 # number of images per batch (should be 2 ^ n)
win = (receptive_field - 1) // 2 # sample window size
win_z = (frames_per_batch - 1) // 2 # z window size
balance_classes = True # sample each class equally
max_class_samples = 1e7 # max number of samples per class.
###Output
_____no_output_____
###Markdown
First, create a foreground/background separation model Instantiate the fgbg model
###Code
from deepcell import model_zoo
fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=2,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the model fgbg model
###Code
from deepcell.training import train_model_sample
fgbg_model = train_model_sample(
model=fgbg_model,
dataset=DATA_FILE, # full path to npz file
model_name=fgbg_model_name,
test_size=test_size,
seed=seed,
window_size=(win, win, win_z),
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
transform='fgbg',
n_epoch=n_epoch,
model_dir=MODEL_DIR,
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 2)
Number of Classes: 2
Training on 1 GPUs
Epoch 1/1
265793/265794 [============================>.] - ETA: 0s - loss: 0.1711 - acc: 0.9354
Epoch 00001: val_loss improved from inf to 0.26627, saving model to /data/models/sample_fgbg_3d_model.h5
265794/265794 [==============================] - 23715s 89ms/step - loss: 0.1711 - acc: 0.9354 - val_loss: 0.2663 - val_acc: 0.9266
###Markdown
Next, Create a model for the watershed energy transform Instantiate the deepcell transform model
###Code
from deepcell import model_zoo
watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
n_features=distance_bins,
norm_method=norm_method,
n_frames=frames_per_batch,
n_channels=X_train.shape[-1])
###Output
_____no_output_____
###Markdown
Train the watershed transform model
###Code
from deepcell.training import train_model_sample
watershed_model = train_model_sample(
model=watershed_model,
dataset=DATA_FILE, # full path to npz file
model_name=sample_model_name,
test_size=test_size,
seed=seed,
window_size=(win, win, win_z),
transform='watershed',
distance_bins=distance_bins,
erosion_width=erosion_width,
optimizer=optimizer,
batch_size=batch_size,
balance_classes=balance_classes,
max_class_samples=max_class_samples,
n_epoch=n_epoch,
model_dir=MODEL_DIR,
expt='sample_watershed',
lr_sched=lr_sched,
rotation_range=180,
flip=True,
shear=False,
zoom_range=(0.8, 1.2))
###Output
X_train shape: (198, 15, 256, 256, 1)
y_train shape: (198, 15, 256, 256, 1)
X_test shape: (22, 15, 256, 256, 1)
y_test shape: (22, 15, 256, 256, 1)
Output Shape: (None, 4)
Number of Classes: 4
Training on 1 GPUs
Epoch 1/1
23577/23578 [============================>.] - ETA: 0s - loss: 0.6835 - acc: 0.6812
Epoch 00001: val_loss improved from inf to 0.39299, saving model to /data/models/sample_watershed_3d_model.h5
23578/23578 [==============================] - 5415s 230ms/step - loss: 0.6835 - acc: 0.6812 - val_loss: 0.3930 - val_acc: 0.9062
###Markdown
Run the modelThe model was trained on small samples of data of shape `(receptive_field, receptive_field)`.in order to process full-sized images, the trained weights will be saved and loaded into a new model with `dilated=True` and proper `input_shape`. Save weights of trained models
###Code
fgbg_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(fgbg_model_name))
fgbg_model.save_weights(fgbg_weights_file)
watershed_weights_file = os.path.join(MODEL_DIR, '{}.h5'.format(sample_model_name))
watershed_model.save_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Initialize dilated models and load the weights
###Code
from deepcell import model_zoo
run_fgbg_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=2,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_fgbg_model.load_weights(fgbg_weights_file)
run_watershed_model = model_zoo.bn_feature_net_3D(
receptive_field=receptive_field,
dilated=True,
n_features=distance_bins,
n_frames=frames_per_batch,
input_shape=tuple(X_test.shape[1:]))
run_watershed_model.load_weights(watershed_weights_file)
###Output
_____no_output_____
###Markdown
Make predictions on test data
###Code
test_images = run_watershed_model.predict(X_test[:4])
test_images_fgbg = run_fgbg_model.predict(X_test[:4])
print('watershed transform shape:', test_images.shape)
print('segmentation mask shape:', test_images_fgbg.shape)
###Output
watershed transform shape: (4, 15, 256, 256, 4)
segmentation mask shape: (4, 15, 256, 256, 2)
###Markdown
Watershed post-processing
###Code
argmax_images = []
for i in range(test_images.shape[0]):
max_image = np.argmax(test_images[i], axis=-1)
argmax_images.append(max_image)
argmax_images = np.array(argmax_images)
argmax_images = np.expand_dims(argmax_images, axis=-1)
print('watershed argmax shape:', argmax_images.shape)
# threshold the foreground/background
# and remove back ground from watershed transform
threshold = 0.8
fg_thresh = test_images_fgbg[..., 1] > threshold
fg_thresh = np.expand_dims(fg_thresh, axis=-1)
argmax_images_post_fgbg = argmax_images * fg_thresh
# Apply watershed method with the distance transform as seed
from skimage.measure import label
from skimage.morphology import watershed
from skimage.feature import peak_local_max
watershed_images = []
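# The last channel of the network output corresponds to the largest distance class, so its local maxima approximate cell centers and serve as watershed seed markers.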
for i in range(argmax_images_post_fgbg.shape[0]):
image = fg_thresh[i, ..., 0]
distance = argmax_images_post_fgbg[i, ..., 0]
local_maxi = peak_local_max(test_images[i, ..., -1],
min_distance=15,
exclude_border=False,
indices=False,
labels=image)
markers = label(local_maxi)
segments = watershed(-distance, markers, mask=image)
watershed_images.append(segments)
watershed_images = np.array(watershed_images)
watershed_images = np.expand_dims(watershed_images, axis=-1)
# Plot the results
import matplotlib.pyplot as plt
index = np.random.randint(low=0, high=watershed_images.shape[0])
frame = np.random.randint(low=0, high=watershed_images.shape[1])
print('Image:', index)
print('Frame:', frame)
fig, axes = plt.subplots(ncols=3, nrows=2, figsize=(15, 15), sharex=True, sharey=True)
ax = axes.ravel()
ax[0].imshow(X_test[index, frame, ..., 0])
ax[0].set_title('Source Image')
ax[1].imshow(test_images_fgbg[index, frame, ..., 1])
ax[1].set_title('Segmentation Prediction')
ax[2].imshow(fg_thresh[index, frame, ..., 0], cmap='jet')
ax[2].set_title('Thresholded Segmentation')
ax[3].imshow(argmax_images[index, frame, ..., 0], cmap='jet')
ax[3].set_title('Watershed Transform')
ax[4].imshow(argmax_images_post_fgbg[index, frame, ..., 0], cmap='jet')
ax[4].set_title('Watershed Transform w/o Background')
ax[5].imshow(watershed_images[index, frame, ..., 0], cmap='jet')
ax[5].set_title('Watershed Segmentation')
fig.tight_layout()
plt.show()
from deepcell.utils.plot_utils import get_js_video
from IPython.display import HTML
HTML(get_js_video(watershed_images, batch=0, channel=0))
###Output
_____no_output_____ |
day-3/day-3.ipynb | ###Markdown
--- Day 3: Crossed Wires ---Specifically, two wires are connected to a central port and extend outward on a grid. You trace the path each wire takes as it leaves the central port, one wire per line of text (your puzzle input).The wires twist and turn, but the two wires occasionally cross paths. To fix the circuit, you need to find the intersection point closest to the central port. Because the wires are on a grid, use the Manhattan distance for this measurement. While the wires do technically cross right at the central port where they both start, this point does not count, nor does a wire count as crossing with itself.
###Code
from pathlib import Path
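# Walk a single instruction (e.g. 'R75') from the last visited point, appending every grid square entered along the way.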
def travel(points, instruction):
direction = instruction[0]
distance = int(instruction[1:])
x = points[-1][0]
y = points[-1][1]
if direction == "R":
new_points = [(x + t, y) for t in range(1, distance + 1)]
if direction == "L":
new_points = [(x - t, y) for t in range(1, distance + 1)]
if direction == "U":
new_points = [(x, y + t) for t in range(1, distance + 1)]
if direction == "D":
new_points = [(x, y - t) for t in range(1, distance + 1)]
points += new_points
return points
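# Trace a full wire: apply each instruction in turn, accumulating every visited point.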
def get_points(start, instructions):
points = start
for instruction in instructions:
points = travel(points, instruction)
return points
def manhattan(p1, p2):
return sum([abs(a - b) for a, b in zip(p1, p2)])
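# Part 1: find the wire crossing with the smallest Manhattan distance from the central port, ignoring the shared start at (0, 0).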
def get_shortest_distance(paths):
line_one_instructions = paths[0].split(",")
line_two_instructions = paths[1].split(",")
line_one_points = get_points([(0, 0)], line_one_instructions)
line_two_points = get_points([(0, 0)], line_two_instructions)
intersections = list(set(line_one_points).intersection(set(line_two_points)))
distances = [manhattan((0, 0), point) for point in intersections]
distances_greater_than_zero = [distance for distance in distances if distance != 0]
return min(distances_greater_than_zero)
test_paths_one = [
"R75,D30,R83,U83,L12,D49,R71,U7,L72",
"U62,R66,U55,R34,D71,R55,D58,R83",
]
test_paths_two = [
"R98,U47,R26,D63,R33,U87,L62,D20,R33,U53,R51",
"U98,R91,D20,R16,D67,R40,U7,R15,U6,R7",
]
paths = Path("input").read_text().splitlines()
# Should be 159
get_shortest_distance(test_paths_one)
# Should be 135
get_shortest_distance(test_paths_two)
get_shortest_distance(paths)
###Output
_____no_output_____
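###Markdown
As an extra sanity check, the small walkthrough example from the puzzle text should give a distance of 6:
###Code
# Wires R8,U5,L5,D3 and U7,R6,D4,L4 cross at (3, 3) and (6, 5); the closest crossing is 6 away.
get_shortest_distance(["R8,U5,L5,D3", "U7,R6,D4,L4"])
###Output
_____no_output_____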
###Markdown
--- Part Two ---It turns out that this circuit is very timing-sensitive; you actually need to minimize the signal delay.To do this, calculate the number of steps each wire takes to reach each intersection; choose the intersection where the sum of both wires' steps is lowest. If a wire visits a position on the grid multiple times, use the steps value from the first time it visits that position when calculating the total value of a specific intersection.The number of steps a wire takes is the total number of grid squares the wire has entered to get to that location, including the intersection being considered.
###Code
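# Part 2: a point's index in its path list equals the number of steps taken to reach it, so the combined signal delay at an intersection is the sum of its first index in each path.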
def get_fewest_steps(paths):
line_one_instructions = paths[0].split(",")
line_two_instructions = paths[1].split(",")
line_one_points = get_points([(0, 0)], line_one_instructions)
line_two_points = get_points([(0, 0)], line_two_instructions)
intersections = list(set(line_one_points).intersection(set(line_two_points)))
steps_dict = {
line_one_points.index(intersection)
+ (line_two_points).index(intersection): intersection
for intersection in intersections
if intersection != (0, 0)
}
return min(steps_dict.keys())
# Should be 610
get_fewest_steps(test_paths_one)
# Should be 410
get_fewest_steps(test_paths_two)
get_fewest_steps(paths)
###Output
_____no_output_____ |
Projects/Project2/Project2_Prashant.ipynb | ###Markdown
```
Project: Project 2: Luther
Date: 02/03/2017
Name: Prashant Tatineni
```
Project Overview
For Project Luther, I gathered the set of all films listed under movie franchises on boxofficemojo.com. My goal was to predict the success of a movie sequel (i.e., domestic gross in USD) based on the performance of other sequels, and especially based on previous films in that particular franchise. I saw some linear correlation between certain variables, like number of theaters, and the total domestic gross, but the predictions from my final model were not entirely reasonable. More time could be spent on better addressing the various outliers in the dataset.
Summary of Solution Steps
1. Retrieve data from boxofficemojo.com.
2. Clean up data and reduce to a set of predictor variables, with "Adjusted Gross" as the target for prediction.
3. Run Linear Regression model.
4. Review model performance.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import Image
import requests
from bs4 import BeautifulSoup
import dateutil.parser
import statsmodels.api as sm
import patsy
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
import sys, sklearn
import scipy.stats  # needed below for the Box-Cox transformation (scipy.stats.boxcox)
from sklearn import linear_model, preprocessing
from sklearn import metrics
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 1
I started with the "Franchises" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and entered it into a Python dictionary. If a movie is already in the dictionary, its entry is overwritten, only with a different Franchise name. Note below that the url for the "Franchises" list was sorted Ascending by number of movies, so this conveniently rolls "subfranchises" into their "parent" franchise. E.g., "Fantastic Beasts" and the "Harry Potter" movies have their own separate franchises, but they will all be tagged as the "JKRowling" franchise, i.e. "./chart/?id=jkrowling.htm". Also, because I was comparing sequels to their predecessors, I focused on Domestic Gross, adjusted for ticket price inflation.
###Code
url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page,"lxml")
tables = soup.find_all("table")
rows = [row for row in tables[3].find_all('tr')]
rows = rows[1:]
# Initialize empty dictionary of movies
movies = {}
for row in rows:
items = row.find_all('td')
franchise = items[0].find('a')['href']
franchiseurl = 'http://www.boxofficemojo.com/franchises/' + franchise[2:]
response = requests.get(franchiseurl)
franchise_page = response.text
franchise_soup = BeautifulSoup(franchise_page,"lxml")
franchise_tables = franchise_soup.find_all("table")
franchise_gross = [row for row in franchise_tables[4].find_all('tr')]
franchise_gross = franchise_gross[1:len(franchise_gross)-2]
franchise_adjgross = [row for row in franchise_tables[5].find_all('tr')]
franchise_adjgross = franchise_adjgross[1:len(franchise_adjgross)-2]
# Assign movieurl as key
# Add title, franchise, inflation-adjusted gross, release date.
for row in franchise_adjgross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
title = movie_info[1]
adjgross = movie_info[3]
release = movie_info[5]
movies[movieurl] = [title.text]
movies[movieurl].append(franchise)
movies[movieurl].append(adjgross.text)
movies[movieurl].append(release.text)
# Add number of theaters for the above movies
for row in franchise_gross:
movie_info = row.find_all('td')
movieurl = movie_info[1].find('a')['href']
theaters = movie_info[4]
if movieurl in movies.keys():
movies[movieurl].append(theaters.text)
df = pd.DataFrame(movies.values())
df.columns = ['Title','Franchise', 'AdjGross', 'Release', 'Theaters']
df.head()
df.shape
###Output
_____no_output_____
###Markdown
Step 2Clean up data.
###Code
# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.
df['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower() or 'special edition' in x.lower() or '3d)' in x.lower() or 'imax' in x.lower())
df = df[(df.Ignore == False)]
del df['Ignore']
df.shape
# Convert Adjusted Gross to a number
df['AdjGross'] = df['AdjGross'].apply(lambda x: int(x.replace('$','').replace(',','')))
# Convert Date string to dateobject. Need to prepend '19' for dates > 17 because Python treats '/60' as year '2060'
df['Release'] = df['Release'].apply(lambda x: (x[:-2] + '19' + x[-2:]) if int(x[-2:]) > 17 else x)
df['Release'] = df['Release'].apply(lambda x: dateutil.parser.parse(x))
###Output
_____no_output_____
###Markdown
The films need to be grouped by franchise so that franchise-related data can be included as features for each observation:
- The Average Adjusted Gross of all previous films in the franchise
- The Adjusted Gross of the very first film in the franchise
- The Release Date of the previous film in the franchise
- The Release Date of the very first film in the franchise
- The Series Number of the film in that franchise -- I considered using the film's number in the franchise as a rank value that could be split into indicator variables, but it's useful as a linear value because the total accrued sum of $ earned by the franchise is a linear combination of "SeriesNum" and "PrevAvgGross"
###Code
df = df.sort_values(['Franchise','Release'])
df['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())
df['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross'])/(df['SeriesNum'] - 1)
###Output
_____no_output_____
###Markdown
- Number of Theaters in which the film showed -- Where this number was unavailable, replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.
###Code
df.Theaters = df.Theaters.replace('-','0')
df['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',','')))
df['PrevRelease'] = df['Release'].shift()
# Create a second dataframe with franchise group-related information.
df_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))
df_group['FirstGross'] = df.groupby(['Franchise'])['AdjGross'].first()
df_group['FirstRelease'] = df.groupby(['Franchise'])['Release'].first()
df_group['SumTheaters'] = df.groupby(['Franchise'])['Theaters'].apply(lambda x: x.sum())
df_group.columns = ['NumOfFilms','FirstGross','FirstRelease','SumTheaters']
df_group['AvgTheaters'] = df_group['SumTheaters']/df_group['NumOfFilms']
df_group['Franchise'] = df.groupby(['Franchise'])['Franchise'].first()
df = df.merge(df_group, on='Franchise')
df.head()
df['Theaters'] = df.Theaters.replace(0,df.AvgTheaters)
# Drop rows with NaN. Drops all first films, but I've already stored first film information within other features.
df = df.dropna()
df.shape
df['DaysSinceFirstFilm'] = df.Release - df.FirstRelease
df['DaysSinceFirstFilm'] = df['DaysSinceFirstFilm'].apply(lambda x: x.days)
df['DaysSincePrevFilm'] = df.Release - df.PrevRelease
df['DaysSincePrevFilm'] = df['DaysSincePrevFilm'].apply(lambda x: x.days)
df.sort_values('Release',ascending=False).head()
###Output
_____no_output_____
###Markdown
For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.
###Code
films17 = df.loc[[530,712,676]]
# Grabbing columns for regression model and dropping 2017 films
dfreg = df[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
dfreg = dfreg.drop([530,712,676])
dfreg.shape
###Output
_____no_output_____
###Markdown
Step 3Apply Linear Regression.
###Code
dfreg.corr()
sns.pairplot(dfreg);
sns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));
sns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));
###Output
_____no_output_____
###Markdown
In the pairplot we can see that 'AdjGross' has some correlation with the predictors, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, a natural log, or some other transformation will be required before fitting a linear model.
###Code
y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type="dataframe")
###Output
_____no_output_____
###Markdown
First try: Initial linear regression model with statsmodels
###Code
model = sm.OLS(y, X)
fit = model.fit()
fit.summary()
fit.resid.plot(style='o');
###Output
_____no_output_____
###Markdown
Try Polynomial Regression
###Code
polyX=PolynomialFeatures(2).fit_transform(X)
polymodel = sm.OLS(y, polyX)
polyfit = polymodel.fit()
polyfit.rsquared
polyfit.resid.plot(style='o');
polyfit.rsquared_adj
###Output
_____no_output_____
###Markdown
Heteroskedasticity
The polynomial regression improved the adjusted R-squared and the residual plot, but there are still issues with other statistics, including skew. It's worth running the Breusch-Pagan test:
###Code
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)
list(zip(hetnames, hettest))  # materialize the pairs so they display in the notebook
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)
list(zip(hetnames, hettest))  # materialize the pairs so they display in the notebook
###Output
_____no_output_____
###Markdown
Apply Box-Cox Transformation
As seen above, the p-values were very low, suggesting the data does exhibit heteroskedasticity. To improve the data we can apply a Box-Cox transformation.
###Code
dfPolyX = pd.DataFrame(polyX)
bcPolyX = pd.DataFrame()
for i in range(dfPolyX.shape[1]):
bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]
# Transformed data with Box-Cox:
bcPolyX.head()
# Introduce log(y) for target variable:
y = y.reset_index(drop=True)
logy = np.log(y)
###Output
_____no_output_____
###Markdown
Try Polynomial Regression again with Log Y and Box-Cox transformed X
###Code
logPolyModel = sm.OLS(logy, bcPolyX)
logPolyFit = logPolyModel.fit()
logPolyFit.rsquared_adj
###Output
_____no_output_____
###Markdown
Apply Regularization using Elastic Net to optimize this model.
###Code
X_scaled = preprocessing.scale(bcPolyX)
en_cv = linear_model.ElasticNetCV(cv=10, normalize=False)
en_cv.fit(X_scaled, logy)
en_cv.coef_
logy_en = en_cv.predict(X_scaled)
mse = metrics.mean_squared_error(logy, logy_en)
# The mean square error for this model
mse
plt.scatter([x for x in range(540)],(pd.DataFrame(logy_en)[0] - logy['AdjGross']));
###Output
_____no_output_____
###Markdown
Step 4As seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.
###Code
films17
df17 = films17[['AdjGross','Theaters','SeriesNum','PrevAvgGross','FirstGross','DaysSinceFirstFilm','DaysSincePrevFilm']]
y17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type="dataframe")
polyX17 = PolynomialFeatures(2).fit_transform(X17)
dfPolyX17 = pd.DataFrame(polyX17)
bcPolyX17 = pd.DataFrame()
for i in range(dfPolyX17.shape[1]):
bcPolyX17[i] = scipy.stats.boxcox(dfPolyX17[i])[0]
X17_scaled = preprocessing.scale(bcPolyX17)
# Run the "en_cv" model from above on the 2017 data:
logy_en_2017 = en_cv.predict(X17_scaled)
# Predicted Adjusted Gross:
pd.DataFrame(np.exp(logy_en_2017))
# Adjusted Gross as of 2/1:
y17
###Output
_____no_output_____ |
python-workshop/file-entrada-salida.ipynb | ###Markdown
File Input and Output
Programs require data, and sometimes a lot of data. There are different ways of storing and accessing it, but one of the most common is through a computer's file system. Working with files is therefore an extremely useful tool for any programmer. Common operations include: saving a program's output to a text file, cleaning up a tabular document so that each column is in the correct format, or deleting large files from a directory on the hard drive. In this section we will learn to:
1. Read and write files
2. Work with file paths
3. Work with CSV files
Reading and writing files
So far we have seen programs that take their input from the program itself or from the user. If you have worked with large amounts of data, those methods become problematic. In many applications the input data is read from one or more files. Python provides tools that let us perform these operations.
Writing to a file
To write a plain text file we can use the built-in function `open()`. When we open a file with `open()`, the first thing we must decide is whether we are performing a read or a write operation.
###Code
# abrir y escribir a un archivo
archivo_salida = open('hola.txt', 'w') # abrimos en modo "w" de escritura
# escribimos una linea de texto con writelines()
archivo_salida.writelines('Este es mi primer archivo')
archivo_salida.close()
###Output
_____no_output_____
###Markdown
When we provide only the file name, the file is created in the same directory as the script, since we did not provide a path. Whenever we open a file with `open()`, we should close it with `close()`. Python does close files automatically at some point, but not closing them explicitly can cause unexpected problems. After running the script, we see a new file in the directory called `hola.txt`, containing the written line `Este es mi primer archivo`.
###Code
# writelines() tambien toma una lista de lineas
archivo_salida = open('hola.txt', 'w') # si abrimos un archivo existente, el contenido viejo se borra
lineas = [
'Este archivo es nuevo',
'Contiene esta linea',
'Esta otra linea tambien'
]
archivo_salida.writelines(lineas) # las lineas son escritas seguidamente sin espacio
archivo_salida.close()
# para escribir en una linea nueva, insertemos el caracter de nueva linea "\n"
archivo_salida = open('hola.txt', 'w')
lineas = [
'Este archivo es nuevo',
'\nContiene esta linea', # \n en linea nueva
'\nEsta otra linea tambien' # \n en linea nueva
]
archivo_salida.writelines(lineas) # las lineas son escritas seguidamente sin espacio
archivo_salida.close()
###Output
_____no_output_____
###Markdown
If we open the file, we see each line on its own line:
```
Este archivo es nuevo
Contiene esta linea
Esta otra linea tambien
```
###Code
# podemos abrir el archivo en modo "a" que permita "adjuntar"
archivo_salida = open('hola.txt', 'a') # abrimos el archivo pero no borra el contenido
archivo_salida.writelines('\nEsta linea se adjunta') # linea nueva
archivo_salida.close()
###Output
_____no_output_____
###Markdown
If we open the file, we see each line on its own line:
```
Este archivo es nuevo
Contiene esta linea
Esta otra linea tambien
Esta linea se adjunta
```
Reading a file
###Code
# abrimos el archivo en modo lectura "r"
archivo_entrada = open('hola.txt', 'r')
print(archivo_entrada.readlines()) # utilizamos el metodo readlines() que retorna una lista de lineas
archivo_entrada.close()
# ya que el archivo retorna una lista, podemos ciclar sobre la misma
archivo_entrada = open('hola.txt', 'r')
for linea in archivo_entrada.readlines():
print(linea)
archivo_entrada.close()
# observamos que hay una nueva linea vacia entre cada linea, podemos anular este comportamiento
archivo_entrada = open('hola.txt', 'r')
for linea in archivo_entrada.readlines():
print(linea, end='') # print incluye nueva linea automaticamente, especificamos comportamiento deseado end=''
archivo_entrada.close()
# podemos leer linea por linea con readline() en singular
archivo_entrada = open('hola.txt', 'r')
linea = archivo_entrada.readline() # lee la primera linea
while linea != '': # si no hemos llegao al fin
print(linea, end='') # imprime la linea
linea = archivo_entrada.readline() # lee la proxima linea
archivo_entrada.close()
###Output
Este archivo es nuevo
Contiene esta linea
Esta otra linea tambien
Esta linea se adjunta
###Markdown
When Python reads a file, it keeps a marker (the file position) that remembers its location in the document. This is how the `readline()` method works: it reads the first line, and when the method is called again, Python resumes reading from where the marker was left, so it reads the next line of the document, and so on until the end of the file. When the file is closed with `close()`, the marker is reset, so that the next time reading can start from the beginning of the file. The `readlines()` method behaves the same way. We can check this by opening the file, calling readlines, and, without closing the file, calling `readlines()` again. Since the marker is still at the end of the document, `readlines()` returns no lines, because there is nothing left to read.
###Code
archivo_entrada = open('hola.txt', 'r')
print('Primera vez:')
for linea in archivo_entrada.readlines():
print(linea, end='')
print('\n\nSegunda vez:')
for linea in archivo_entrada.readlines():
print(linea, end='')
archivo_entrada.close()
# si queremos acceder al contenido del archivo nuevamente, es mejor guardarlo en una lista
archivo_entrada = open('hola.txt', 'r')
lineas = archivo_entrada.readlines()
print('Primera vez:')
for linea in lineas:
print(linea, end='')
print('\n\nSegunda vez:')
for linea in lineas:
print(linea, end='')
archivo_entrada.close()
# para no tener que cerrar el archivo, Python lo hace automaticamente con la palabra with
# with establece un contexto
with open('hola.txt', 'r') as archivo:
for line in archivo.readlines():
print(line)
# podemos abrir mutiples archivos en la misma operacion
with open('hola.txt', 'r') as fuente, open('salida.txt', 'w') as salida:
for line in fuente.readlines():
salida.write(line)
# todo el contenido del primer archivo se copio a este
with open('salida.txt', 'r') as archivo:
for line in archivo.readlines():
print(line)
###Output
Este archivo es nuevo
Contiene esta linea
Esta otra linea tambien
Esta linea se adjunta
###Markdown
Exercises
1. Open a file and write several lines to it using writelines()
2. Open the file you wrote and read its contents with readlines() and readline()
3. Open and read the file you wrote using with
4. Open and read the file you wrote using with and copy its contents to another file
Working with Paths in Python
Most likely we will need to open files located in other directories, not just files in the current directory where the script lives. To access other directories, we can pass the full path directly as the argument to the built-in function `open(ruta_absoluta_del_archivo)`.
```
archivo = open('C:/home/adriaanbd/documentos/hola.txt', 'r')
```
Note the use of `/`. Windows paths use `\` instead of `/`, but in Python we can substitute `/` for `\`, since `\` has a special meaning in Python: it is used as an escape character, which means the character that immediately follows it, e.g. `\n`, is treated as a special character. Python interprets the `\` together with the following character as a single special character. For example, `\n` means a new line and `\t` means a tab character, which represents 2 or 4 spaces on the same line.
```
# we can also use \ as follows
path = r'C:\home\adriaanbd\documentos\hola.txt'
```
The `os` module
If we want to do something more advanced with file structures, we will have to use the `os` module, which exposes several operating-system functions. The first thing we have to do is import the module.
###Code
# esto importa el modulo al programa
import os
# para crear un directorio nuevo en el directorio donde reside este programa
os.mkdir('mi-directorio')
# para crear el directorio en una ruta especifica
ruta = 'mi-directorio'
os.mkdir(os.path.join(ruta, 'subdirectorio')) # utilizemos os.path.join para concatenar dos strings
# pudimos haber concatenado asi tambien:
ruta = 'C:/home/adriaanbd/documentos'
directorio = ruta + '/' + 'mi-directorio'
print(directorio)
ruta = 'mi-directorio'
directorio = os.path.join(ruta, 'subdirectorio')
directorio
# para eliminar un directorio usemos rmdir()
os.rmdir(directorio)
ruta = 'mi-directorio'
os.rmdir(ruta)
# para obtener una lista de los archivos en un directorio usemos os.listdir()
os.listdir() # su dato de salida podrรก ser distinto
# una lista de los archivos con terminacion txt usando endswith()
for archivo in os.listdir():
if archivo.lower().endswith('txt'):
print(archivo)
###Output
hola.txt
salida.txt
###Markdown
The `glob` module
###Code
# este modulo nos ayuda a encontrar patrones con caracteres comodin
import glob
glob.glob('*.txt') # el asterisco * es un comodin que representa todo, por tanto todo archivo con extension .txt
###Output
_____no_output_____
###Markdown
Checking whether files and directories exist
###Code
for archivo in os.listdir():
print(f'Archivo: "{archivo}", \nes un directorio: {os.path.isdir(archivo)}\n\n') # es un directorio?
for archivo in os.listdir():
print(f'Archivo: "{archivo}" \nes un archivo: {os.path.isfile(archivo)}\n\n') # es un archivo?
for archivo in os.listdir():
print(f'Archivo: "{archivo}" \nexiste: {os.path.exists(archivo)}\n\n') # el archivo existe?
###Output
Archivo: "contenido.ipynb"
existe: True
Archivo: ".vscode"
existe: True
Archivo: "intro.ipynb"
existe: True
Archivo: "errores.ipynb"
existe: True
Archivo: "contenido.md"
existe: True
Archivo: "funciones-y-ciclos.ipynb"
existe: True
Archivo: "salida"
existe: True
Archivo: "file-entrada-salida.ipynb"
existe: True
Archivo: ".ipynb_checkpoints"
existe: True
Archivo: "otros-temas"
existe: True
Archivo: "oop.ipynb"
existe: True
Archivo: ".gitignore"
existe: True
Archivo: "numeros-y-matematica.ipynb"
existe: True
Archivo: "tips.ipynb"
existe: True
Archivo: "encontrando-resolviendo-errores.ipynb"
existe: True
Archivo: ".git"
existe: True
Archivo: "datos"
existe: True
Archivo: "logica-condicional-control-de-flujo.ipynb"
existe: True
Archivo: "variables.ipynb"
existe: True
Archivo: "strings.ipynb"
existe: True
Archivo: "textos-llamadas.ipynb"
existe: True
Archivo: "hola.txt"
existe: True
Archivo: "juego-de-aventura.ipynb"
existe: True
Archivo: "tuplas-listas-diccionarios.ipynb"
existe: True
Archivo: ".python-version"
existe: True
Archivo: "salida.txt"
existe: True
###Markdown
Exercises
1. Print the absolute path of every file and directory in the `Documentos/` directory on your computer
2. Print the absolute path of every .txt file in the current directory
Reading and Writing CSV Data
Day-to-day files are a bit more complicated than simple text files. To modify the contents of these files we need a few more tools. A common way to store text data is in CSV files (Comma Separated Values), since each entry in a row of data is usually separated from the others by a comma. For example:
```
Nombre, Apellido, Edad
Juan, Perez, 40
Juana, Perez, 45
```
Each line represents a row of data, including the first line, which is the header. Each entry appears in the same order in every row, with the entries separated by commas. Python has a `csv` module that lets us perform the operations needed to work with CSV files.
Reading a CSV file
###Code
# leemos un archivo con csv.reader(archivo)
import csv
import os
archivo = 'datos/llamadas.csv'
with open(archivo, 'r') as datos:
lector = csv.reader(datos)
for registro in lector:
print(registro)
###Output
['numero_saliente', 'numero_entrante', 'fecha_tiempo', 'tiempo_segundos']
['(473) 5373591', '(221) 3829872', '2019-08-23 07:13:28', '779']
['(712) 6079829', '(521) 9979466', '2019-06-02 19:01:04', '150']
['(170) 7207064', '(667) 9707152', '2019-02-18 11:22:21', '1425']
['(267) 6838416', '(704) 6053438', '2019-12-26 22:29:30', '3278']
['(202) 7159564', '(848) 5356715', '2019-11-11 14:06:47', '2823']
['(971) 4270187', '(312) 3476941', '2019-07-21 22:30:45', '2824']
['(688) 1872860', '(580) 6692170', '2019-01-05 20:40:15', '363']
['(527) 3643293', '(700) 6013130', '2019-03-21 10:25:15', '1090']
['(824) 3120489', '(736) 5219693', '2019-06-15 19:31:29', '2383']
['(135) 5879807', '(210) 4726824', '2019-12-11 06:37:28', '3289']
['(946) 9885969', '(967) 6260487', '2019-04-30 18:12:15', '266']
['(822) 1999029', '(394) 2159591', '2019-07-20 13:24:02', '3171']
['(214) 1831354', '(407) 4594421', '2019-10-22 18:27:53', '2987']
['(301) 9038508', '(117) 3599538', '2019-08-11 14:34:08', '472']
['(975) 2050968', '(225) 7340340', '2019-05-24 17:07:56', '1297']
['(532) 4461437', '(159) 6755397', '2019-07-27 09:02:02', '2548']
['(854) 2632368', '(865) 1092554', '2019-11-12 18:27:12', '256']
['(302) 7956136', '(427) 4230223', '2019-04-17 13:49:54', '360']
['(694) 4605593', '(423) 9644633', '2019-10-12 10:25:53', '3476']
['(361) 6243068', '(817) 9801242', '2019-01-13 16:55:39', '740']
['(832) 2674004', '(134) 4315303', '2019-10-07 07:16:17', '3135']
['(833) 8445033', '(191) 5366913', '2019-03-18 09:42:11', '2112']
['(823) 4146625', '(263) 8920846', '2019-03-17 16:10:28', '1635']
['(901) 2728567', '(997) 8431267', '2019-06-05 11:45:06', '2793']
['(695) 8465544', '(486) 6125527', '2019-08-19 14:22:47', '1563']
['(715) 6420894', '(828) 6640394', '2019-03-16 00:36:28', '1891']
['(600) 8596964', '(762) 7724562', '2019-12-19 15:44:12', '3157']
['(454) 4219619', '(432) 1223026', '2019-02-05 02:43:10', '1050']
['(699) 8211331', '(123) 8577076', '2019-01-26 02:35:52', '2547']
['(502) 2393708', '(748) 6208057', '2019-04-28 12:23:38', '3047']
['(319) 7353522', '(588) 9583209', '2019-01-02 05:17:31', '3584']
['(519) 2780596', '(359) 2449867', '2019-02-05 23:35:17', '1436']
['(439) 1787485', '(802) 8632114', '2019-01-03 02:05:09', '2878']
['(611) 2732835', '(605) 7128788', '2019-05-06 22:24:59', '636']
['(481) 4216326', '(288) 7103116', '2019-05-06 01:35:05', '2339']
['(819) 2841562', '(651) 9421311', '2019-12-05 16:05:32', '3449']
['(561) 7890310', '(487) 1704598', '2019-08-09 16:54:44', '1187']
['(205) 8012873', '(348) 6088588', '2019-04-09 18:36:15', '3194']
['(656) 2596247', '(645) 3744183', '2019-12-18 21:38:31', '2428']
['(784) 1502772', '(732) 5122798', '2019-08-06 05:30:25', '983']
['(187) 7805812', '(831) 1984447', '2019-10-09 02:03:01', '3577']
['(404) 9897959', '(810) 9464280', '2019-03-31 01:10:37', '1188']
['(320) 9017964', '(105) 7031191', '2019-03-19 16:47:31', '2009']
['(441) 9421898', '(352) 2239520', '2019-10-09 07:19:24', '3024']
['(289) 3939816', '(897) 5873250', '2019-06-24 08:18:43', '2230']
['(268) 8129614', '(109) 5020811', '2019-10-18 20:45:40', '1559']
['(530) 2679399', '(929) 7641354', '2019-05-03 08:45:23', '128']
['(655) 5001076', '(216) 9767752', '2019-11-13 19:03:39', '171']
['(883) 8587195', '(449) 7773819', '2019-06-29 14:01:03', '1818']
['(815) 4795720', '(312) 1327386', '2019-02-11 06:32:37', '3119']
['(197) 3603866', '(412) 8148714', '2019-06-13 18:10:46', '595']
['(911) 1379852', '(804) 6251709', '2019-03-06 21:18:13', '2234']
['(795) 2762776', '(661) 8174095', '2019-02-18 06:09:36', '2300']
['(501) 1466641', '(602) 3090356', '2019-08-01 05:07:26', '734']
['(154) 9559400', '(632) 8185869', '2019-05-16 08:59:38', '3506']
['(639) 8951743', '(742) 1588632', '2019-09-25 00:11:59', '73']
['(792) 8079631', '(598) 3917497', '2019-06-14 15:26:01', '1908']
['(755) 2227215', '(235) 5321774', '2019-06-19 23:02:17', '1065']
['(712) 6065475', '(794) 3858022', '2019-10-22 20:49:53', '3485']
['(680) 3236045', '(804) 9903489', '2019-09-10 16:34:14', '2922']
['(148) 6443267', '(169) 8934639', '2019-05-22 10:47:23', '129']
['(790) 6530469', '(814) 3215137', '2019-10-25 13:33:34', '2664']
['(202) 1610658', '(607) 3944087', '2019-04-28 23:54:28', '1569']
['(262) 4164407', '(399) 5230169', '2019-02-19 03:10:29', '450']
['(262) 6287235', '(522) 8488463', '2019-06-01 19:05:29', '3383']
['(304) 7491008', '(244) 5322157', '2019-08-10 13:52:18', '3064']
['(615) 3509514', '(708) 4135633', '2019-05-25 05:54:32', '147']
['(459) 3930189', '(149) 4330839', '2019-01-08 10:47:42', '3140']
['(855) 2282632', '(666) 2793624', '2019-11-24 23:08:45', '1022']
['(476) 5233902', '(820) 5595528', '2019-01-31 21:41:20', '1837']
['(537) 6546615', '(612) 2202646', '2019-04-04 19:51:31', '2036']
['(541) 4800549', '(138) 3724141', '2019-02-04 06:45:02', '2469']
['(384) 8739072', '(941) 6726850', '2019-04-19 02:27:44', '3105']
['(147) 7291940', '(326) 5393948', '2019-03-05 17:08:14', '2586']
['(270) 2354861', '(273) 7690535', '2019-09-02 16:24:08', '3249']
['(636) 1133234', '(462) 1957853', '2019-09-13 12:22:37', '1939']
['(570) 6207945', '(581) 2812391', '2019-04-04 03:03:56', '1225']
['(291) 5185327', '(531) 9281928', '2019-10-22 11:18:33', '2989']
['(243) 3167219', '(570) 5034926', '2019-04-13 11:54:36', '789']
['(542) 4546760', '(567) 2828533', '2019-05-03 20:25:18', '1865']
['(249) 7029277', '(295) 8985580', '2019-09-23 06:24:47', '319']
['(916) 5096404', '(376) 2884045', '2019-04-24 08:30:37', '246']
['(996) 9736898', '(969) 2964664', '2019-09-02 15:31:42', '2342']
['(207) 4248725', '(456) 8645080', '2019-11-15 20:30:18', '630']
['(729) 4815293', '(763) 9893406', '2019-08-22 07:42:54', '2279']
['(188) 3501714', '(464) 3997111', '2019-01-14 04:57:34', '121']
['(534) 4568556', '(792) 9326352', '2019-06-09 05:23:38', '1046']
['(892) 1686376', '(249) 7615536', '2019-03-02 22:11:17', '2444']
['(310) 6945801', '(164) 9416529', '2019-04-20 07:44:28', '1683']
['(741) 8134173', '(712) 6154466', '2019-01-12 02:12:28', '210']
['(780) 2506688', '(246) 9160852', '2019-08-10 12:18:32', '512']
['(677) 3634048', '(650) 1143542', '2019-03-09 14:08:49', '2166']
['(770) 5974145', '(270) 5953021', '2019-01-20 09:19:34', '608']
['(251) 5430038', '(570) 1985179', '2019-03-20 13:05:08', '3447']
['(823) 1821952', '(835) 1658609', '2019-08-10 07:35:24', '2504']
['(302) 9131957', '(738) 3350982', '2019-09-30 18:05:45', '1176']
['(787) 2744903', '(435) 1451178', '2019-07-27 09:08:10', '989']
['(723) 1155427', '(810) 5853913', '2019-04-09 19:24:35', '2467']
['(368) 3851791', '(631) 8084245', '2019-07-31 18:03:27', '556']
['(285) 1009067', '(219) 1745803', '2019-11-25 18:22:30', '3054']
###Markdown
Writing to a file
###Code
# escribimos un archivo con csv.writer(archivo)
import csv
import os
archivo = 'salida/ejemplo.csv'
# os.mkdir('salida')
nombres = [
['Nombre', 'Apellido'],
['Juan', 'Perez'],
['Juana', 'Perez']
]
with open(archivo, 'w') as salida:
escritor = csv.writer(salida)
escritor.writerows(nombres)
with open(archivo, 'r') as entrada:
lector = csv.reader(entrada)
print(list(lector))
###Output
[['Nombre', 'Apellido'], ['Juan', 'Perez'], ['Juana', 'Perez']]
###Markdown
Exercises
1. Write a script that writes a csv file with made-up information; it can be anything, as long as it has columns, e.g. Nombre, Apellido, Edad, etc.
2. Write a script that reads the file you wrote and prints each row of the file
Challenge: National Internet Network access points by province
Write a script that reads the csv file `datos/rni-puntos-de-acceso.csv` (provided to you), which contains information on all of the access points of Panama's national internet network (Red Nacional de Internet), and writes a new csv file `datos/rni-pda-por-provincia.csv` containing:
1. Provincia and Puntos de Acceso as the header
2. The name of each province and its total number of access points as a row
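One possible sketch of the counting step for this challenge (it assumes the province name sits in column 3 and the file uses latin-1 encoding, as in the practical example that follows):
```
# a minimal sketch, not the only possible solution
import csv
from collections import Counter

conteo = Counter()
with open('datos/rni-puntos-de-acceso.csv', 'r', encoding='latin-1') as entrada:
    lector = csv.reader(entrada)
    next(lector)  # skip the header row
    for registro in lector:
        conteo[registro[3]] += 1  # column 3 holds the province name

with open('datos/rni-pda-por-provincia.csv', 'w', newline='') as salida:
    escritor = csv.writer(salida)
    escritor.writerow(['Provincia', 'Puntos de Acceso'])
    for provincia, total in conteo.items():
        escritor.writerow([provincia, total])
```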
###Code
# un ejemplo practico
import csv
import os
archivo = 'datos/rni-puntos-de-acceso.csv'
with open(archivo, 'r', encoding='latin-1') as entrada, open('salida/rni-pda-chiriqui.csv', 'w') as salida:
lector = csv.reader(entrada) # para leer
escritor = csv.writer(salida) # para escribir
for registro in lector:
provincia = registro[3]
if provincia.startswith('Chi'):
escritor.writerow(registro)
with open(archivo, 'r', encoding='latin-1') as entrada:
lector = csv.reader(entrada)
for registro in list(lector)[:5]:
print(registro)
###Output
['Regiยขn', 'PA', 'Nombre', 'Provincia', 'Distrito', 'Corregimiento', 'Tipo UM', 'Latitude', 'Longitude', 'Fecha de Activaciยขn']
['1', '1', 'Colegio Rogelio Josu\x82 Ibarra', 'Bocas del Toro', 'Bocas del Toro', 'Bocas del Toro', 'FO', '9.340654', '-82.242499', '12/07/17']
['1', '3', 'Escuela Repยฃblica de Nicaragua', 'Bocas del Toro', 'Bocas del Toro', 'Bocas del Toro', 'FO', '9.338938', '-82.242668', '12/07/17']
['1', '4', 'Gobernaciยขn', 'Bocas del Toro', 'Bocas del Toro', 'Bocas del Toro', 'FO', '9.297858', '-82.41136', '23/11/17']
['1', '5', 'Parque Simยขn Bolยกvar', 'Bocas del Toro', 'Bocas del Toro', 'Bocas del Toro', 'FO', '9.340183', '-82.240631', '12/07/17']
|
blockboard/taxCalculations.ipynb | ###Markdown
Calculating gas for a single transaction
- gasUsed - Total GAS units to compute this txn
- gasPrice - Cost of one unit of GAS in GWEI
Sooo..... GAS ($USD) = gasUsed * (gasPrice / 10^9) * ethPrice ($USD)
To get price data from CoinGecko, need a wide timestamp range. Rounding to the ten-thousands will guarantee at least one spot price is captured.
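As a minimal sketch of that formula (the function and argument names below are illustrative only and are not part of taxTools):
```
def gas_cost_usd(gas_used, gas_price_gwei, eth_price_usd):
    """USD cost of one transaction: gasUsed * (gasPrice / 10^9) * ethPrice."""
    eth_spent = gas_used * gas_price_gwei / 1e9  # GWEI -> ETH
    return eth_spent * eth_price_usd

# e.g. 21000 gas at 100 GWEI with ETH at $3000 -> 21000 * 100e-9 * 3000 = 6.3 USD
print(gas_cost_usd(21000, 100, 3000))
```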
###Code
import taxTools  # project-local helper module (assumed to be importable from this notebook's directory)

print(taxTools.get_yearly_gas_costs_in_USD(2021))
###Output
2459.283222340261
###Markdown
Need to filter for token conversions & nft purchases
- Bought on exchange (cost basis) - have to manually map to Coinbase purchases
- Transferred to wallet
- Made trade (taxable event) - Compute price difference of eth from cost basis to now
- Sold for eth (taxable event) - Compute gain as amount of eth to USD compared to previous
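A rough sketch of the gain computation described above (all names here are illustrative placeholders, not part of this project's helper modules):
```
def capital_gain_usd(eth_amount, cost_basis_usd_per_eth, sale_price_usd_per_eth):
    """Gain (or loss) when eth_amount ETH acquired at the cost-basis price
    is disposed of (trade, NFT purchase, or sale) at the current ETH price."""
    return eth_amount * (sale_price_usd_per_eth - cost_basis_usd_per_eth)

# e.g. 0.5 ETH bought at $2000/ETH and spent when ETH is at $3000/ETH -> $500 gain
print(capital_gain_usd(0.5, 2000.0, 3000.0))
```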
###Code
import etherscanTools  # project-local helper module (assumed to be importable from this notebook's directory)

last_txn = etherscanTools.get_normal_txns_for_year(2021)[-1]
last_txn
import web3
from web3 import Web3, EthereumTesterProvider
w3 = Web3(EthereumTesterProvider())
w3.isConnected()
# txn_hash = int(last_txn['hash'][2:],16)
# receipt = w3.eth.getTransactionByBlock(last_txn['blockNumber'], 0)
w3.eth.get_transaction_by_block(46147, 0)
###Output
_____no_output_____ |
MS/Python/python_5.ipynb | ###Markdown
Department of Discrete Mathematics, MIPT
Course in Mathematical Statistics
Nikita Volkov
Based on http://www.inp.nsk.su/~grozin/python/
The numpy library
The `numpy` package provides $n$-dimensional homogeneous arrays (all elements have the same type); elements cannot be inserted or deleted at an arbitrary position. `numpy` implements many operations on arrays as a whole. If a problem can be solved by performing some sequence of operations on whole arrays, it will be as efficient as in `C` or `matlab` — the lion's share of the time is spent in library functions written in `C`.
One-dimensional arrays
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
A list can be converted into an array.
###Code
a = np.array([0, 2, 1])
a, type(a)
###Output
_____no_output_____
###Markdown
`print` prints arrays in a convenient form.
###Code
print(a)
###Output
[0 2 1]
###Markdown
The `ndarray` class has many methods.
###Code
set(dir(a)) - set(dir(object))
###Output
_____no_output_____
###Markdown
Our array is one-dimensional.
###Code
a.ndim
###Output
_____no_output_____
###Markdown
In the $n$-dimensional case, a tuple of sizes along each axis is returned.
###Code
a.shape
###Output
_____no_output_____
###Markdown
`size` is the total number of elements in the array; `len` is the size along the first axis (in the 1-D case they are the same).
###Code
len(a), a.size
###Output
_____no_output_____
###Markdown
`numpy` provides several types for integers (`int16`, `int32`, `int64`) and for floating-point numbers (`float32`, `float64`).
###Code
a.dtype, a.dtype.name, a.itemsize
###Output
_____no_output_____
###Markdown
An array can be indexed in the usual way.
###Code
a[1]
###Output
_____no_output_____
###Markdown
Arrays are mutable objects.
###Code
a[1] = 3
print(a)
###Output
[0 3 1]
###Markdown
Arrays can, of course, be used in `for` loops. But this loses the main advantage of `numpy` — speed. Whenever possible, it is better to use operations on arrays as a whole.
###Code
for i in a:
print(i)
###Output
0
3
1
###Markdown
An array of floating-point numbers.
###Code
b = np.array([0., 2, 1])
b.dtype
###Output
_____no_output_____
###Markdown
Exactly the same array.
###Code
c = np.array([0, 2, 1], dtype=np.float64)
print(c)
###Output
[ 0. 2. 1.]
###Markdown
Data type conversion
###Code
print(c.dtype)
print(c.astype(int))
print(c.astype(str))
###Output
float64
[0 2 1]
['0.0' '2.0' '1.0']
###Markdown
An array whose values are computed by a function. The function is passed an array, so inside it only operations that are applicable to arrays can be used.
###Code
def f(i):
print(i)
return i ** 2
a = np.fromfunction(f, (5,), dtype=np.int64)
print(a)
a = np.fromfunction(f, (5,), dtype=np.float64)
print(a)
###Output
[ 0. 1. 2. 3. 4.]
[ 0. 1. 4. 9. 16.]
###Markdown
Arrays filled with zeros or ones. It is often better to create such an array first and then assign values to its elements.
###Code
a = np.zeros(3)
print(a)
b = np.ones(3, dtype=np.int64)
print(b)
###Output
[1 1 1]
###Markdown
If you need to create an array of zeros with the same length as another array, you can use the construction
###Code
np.zeros_like(b)
###Output
_____no_output_____
###Markdown
The `arange` function is similar to `range`. Its arguments may be floating-point. Avoid situations where (stop - start)/step is an integer, because then whether the last element is included depends on rounding errors. It is better for the end of the range to fall somewhere in the middle of a step.
###Code
a = np.arange(0, 9, 2)
print(a)
b = np.arange(0., 9, 2)
print(b)
###Output
[ 0. 2. 4. 6. 8.]
###Markdown
Sequences of numbers with a constant step can also be created with the `linspace` function. The start and end of the range are included; the last argument is the number of points.
###Code
a = np.linspace(0, 8, 5)
print(a)
###Output
[ 0. 2. 4. 6. 8.]
###Markdown
A sequence of numbers with a constant step on a logarithmic scale from $10^0$ to $10^1$.
###Code
b = np.logspace(0, 1, 5)
print(b)
###Output
[ 1. 1.77827941 3.16227766 5.62341325 10. ]
###Markdown
An array of random numbers.
###Code
print(np.random.random(5))
###Output
[ 0.17754706 0.13481988 0.85711884 0.18696899 0.55900193]
###Markdown
Random numbers with a normal (Gaussian) distribution (mean `0`, standard deviation `1`).
###Code
print(np.random.normal(size=5))
###Output
[-1.51473227 1.0408142 3.07774644 -0.67956312 0.20781344]
###Markdown
Operations on one-dimensional arrays
Arithmetic operations are performed elementwise.
###Code
print(a + b)
print(a - b)
print(a * b)
print(a / b)
print(a ** 2)
###Output
[ 0. 4. 16. 36. 64.]
###Markdown
When the operands have different types, they are promoted to the larger type.
###Code
i = np.ones(5, dtype=np.int64)
print(a + i)
###Output
[ 1. 3. 5. 7. 9.]
###Markdown
`numpy` contains elementary functions that are also applied to arrays elementwise. They are called universal functions (`ufunc`).
###Code
np.sin, type(np.sin)
print(np.sin(a))
###Output
[ 0. 0.90929743 -0.7568025 -0.2794155 0.98935825]
###Markdown
One of the operands may be a scalar rather than an array.
###Code
print(a + 1)
print(2 * a)
###Output
[ 0. 4. 8. 12. 16.]
###Markdown
Comparisons produce boolean arrays.
###Code
print(a > b)
print(a == b)
c = a > 5
print(c)
###Output
[False False False True True]
###Markdown
The quantifiers "there exists" and "for all".
###Code
np.any(c), np.all(c)
###Output
_____no_output_____
###Markdown
In-place modification.
###Code
a += 1
print(a)
b *= 2
print(b)
b /= a
print(b)
###Output
[ 2. 1.18551961 1.26491106 1.6066895 2.22222222]
###Markdown
When performing operations on arrays, division by 0 does not raise an exception but produces the values `np.nan` or `np.inf`.
###Code
print(np.array([0.0, 0.0, 1.0, -1.0]) / np.array([1.0, 0.0, 0.0, 0.0]))
np.nan + 1, np.inf + 1, np.inf * 0, 1. / np.inf
###Output
_____no_output_____
###Markdown
The sum and product of all elements of the array; the maximum and minimum element; the mean and standard deviation.
###Code
b.sum(), b.prod(), b.max(), b.min(), b.mean(), b.std()
x = np.random.normal(size=1000)
x.mean(), x.std()
###Output
_____no_output_____
###Markdown
There are built-in functions
###Code
print(np.sqrt(b))
print(np.exp(b))
print(np.log(b))
print(np.sin(b))
print(np.e, np.pi)
###Output
[ 1.41421356 1.08881569 1.12468265 1.26755256 1.49071198]
[ 7.3890561 3.27238673 3.54277764 4.98627681 9.22781435]
[ 0.69314718 0.17018117 0.23500181 0.47417585 0.7985077 ]
[ 0.90929743 0.92669447 0.95358074 0.99935591 0.79522006]
2.718281828459045 3.141592653589793
###Markdown
Sometimes partial (cumulative) sums are needed. This will come in handy in our course.
###Code
print(b.cumsum())
###Output
[ 2. 3.18551961 4.45043067 6.05712017 8.27934239]
###Markdown
The function `sort` returns a sorted copy; the method `sort` sorts in place.
###Code
print(np.sort(b))
print(b)
b.sort()
print(b)
###Output
[ 1.18551961 1.26491106 1.6066895 2. 2.22222222]
###Markdown
Joining arrays.
###Code
a = np.hstack((a, b))
print(a)
###Output
[ 1. 3. 5. 7. 9. 1.18551961
1.26491106 1.6066895 2. 2.22222222]
###Markdown
Splitting an array at positions 3 and 6.
###Code
np.hsplit(a, [3, 6])
###Output
_____no_output_____
###Markdown
The functions `delete`, `insert` and `append` do not modify the array in place; they return a new array in which some elements have been deleted, inserted in the middle, or appended at the end.
###Code
a = np.delete(a, [5, 7])
print(a)
a = np.insert(a, 2, [0, 0])
print(a)
a = np.append(a, [1, 2, 3])
print(a)
###Output
[ 1. 3. 0. 0. 5. 7. 9.
1.26491106 2. 2.22222222 1. 2. 3. ]
###Markdown
There are several ways to index an array. Here is an ordinary index.
###Code
a = np.linspace(0, 1, 11)
print(a)
b = a[2]
print(b)
###Output
0.2
###Markdown
A slice of indices. A new array header is created that points to the same data. Changes made through such an array are also visible in the original array.
###Code
b = a[2:6]
print(b)
b[0] = -0.2
print(b)
print(a)
###Output
[ 0. 0.1 -0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. ]
###Markdown
A slice with step 2.
###Code
b = a[1:10:2]
print(b)
b[0] = -0.1
print(a)
###Output
[ 0. -0.1 -0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1. ]
###Markdown
The array in reverse order.
###Code
b = a[len(a):0:-1]
print(b)
###Output
[ 1. 0.9 0.8 0.7 0.6 0.5 0.4 0.3 -0.2 -0.1]
###Markdown
A subarray can be assigned a value — an array of the right size or a scalar.
###Code
a[1:10:3] = 0
print(a)
###Output
[ 0. 0. -0.2 0.3 0. 0.5 0.6 0. 0.8 0.9 1. ]
###Markdown
Here again only a new header is created, pointing to the same data.
###Code
b = a[:]
b[1] = 0.1
print(a)
###Output
[ 0. 0.1 -0.2 0.3 0. 0.5 0.6 0. 0.8 0.9 1. ]
###Markdown
To copy the array's data as well, use the `copy` method.
###Code
b = a.copy()
b[2] = 0
print(b)
print(a)
###Output
[ 0. 0.1 0. 0.3 0. 0.5 0.6 0. 0.8 0.9 1. ]
[ 0. 0.1 -0.2 0.3 0. 0.5 0.6 0. 0.8 0.9 1. ]
###Markdown
You can pass a list of indices.
###Code
print(a[[2, 3, 5]])
###Output
[-0.2 0.3 0.5]
###Markdown
You can pass a boolean array of the same size.
###Code
b = a > 0
print(b)
print(a[b])
###Output
[ 0.1 0.3 0.5 0.6 0.8 0.9 1. ]
###Markdown
2-D arrays
###Code
a = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(a)
a.ndim
a.shape
len(a), a.size
a[1, 0]
###Output
_____no_output_____
###Markdown
The `shape` attribute can be assigned a new value — a tuple of sizes along all axes. This produces a new array header; the data does not change.
###Code
b = np.linspace(0, 3, 4)
print(b)
b.shape
b.shape = 2, 2
print(b)
###Output
[[ 0. 1.]
[ 2. 3.]]
###Markdown
It can be flattened into a one-dimensional array
###Code
print(b.ravel())
###Output
[ 0. 1. 2. 3.]
###Markdown
Arithmetic operations are elementwise
###Code
print(a + 1)
print(a * 2)
print(a + [0, 1]) # the second operand is broadcast to a matrix by repeating the row
print(a + np.array([[0, 2]]).T) # .T - transpose
print(a + b)
###Output
[[ 1. 2.]
[ 0. 1.]]
[[ 0. 2.]
[-2. 0.]]
[[ 0. 2.]
[-1. 1.]]
[[ 0. 1.]
[ 1. 2.]]
[[ 0. 2.]
[ 1. 3.]]
###Markdown
Elementwise and matrix multiplication (the latter only in Python 3.5+).
###Code
print(a * b)
print(a @ b)
print(b @ a)
###Output
[[-1. 0.]
[-3. 2.]]
###Markdown
Multiplying a matrix by a vector.
###Code
v = np.array([1, -1], dtype=np.float64)
print(b @ v)
print(v @ b)
###Output
[-2. -2.]
###Markdown
If you have an earlier version of Python, you can use the `np.matrix` class for matrices; in it the multiplication operator is implemented as matrix multiplication.
###Code
np.matrix(a) * np.matrix(b)
###Output
_____no_output_____
###Markdown
Outer product $a_{ij}=u_i v_j$
###Code
u = np.linspace(1, 2, 2)
v = np.linspace(2, 4, 3)
print(u)
print(v)
a = np.outer(u, v)
print(a)
###Output
[[ 2. 3. 4.]
[ 4. 6. 8.]]
###Markdown
Two-dimensional arrays that depend on only one index: $x_{ij}=u_j$, $y_{ij}=v_i$
###Code
x, y = np.meshgrid(u, v)
print(x)
print(y)
###Output
[[ 1. 2.]
[ 1. 2.]
[ 1. 2.]]
[[ 2. 2.]
[ 3. 3.]
[ 4. 4.]]
###Markdown
The identity matrix.
###Code
I = np.eye(4)
print(I)
###Output
[[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]
[ 0. 0. 0. 1.]]
###Markdown
The `reshape` method does the same thing as assigning to the `shape` attribute.
###Code
print(I.reshape(16))
print(I.reshape(2, 8))
###Output
[[ 1. 0. 0. 0. 0. 1. 0. 0.]
[ 0. 0. 1. 0. 0. 0. 0. 1.]]
###Markdown
A row.
###Code
print(I[1])
###Output
[ 0. 1. 0. 0.]
###Markdown
Loop over rows.
###Code
for row in I:
print(row)
###Output
[ 1. 0. 0. 0.]
[ 0. 1. 0. 0.]
[ 0. 0. 1. 0.]
[ 0. 0. 0. 1.]
###Markdown
A column.
###Code
print(I[:, 2])
###Output
[ 0. 0. 1. 0.]
###Markdown
A submatrix.
###Code
print(I[0:2, 1:3])
###Output
[[ 0. 0.]
[ 1. 0.]]
###Markdown
A two-dimensional array can be built from a function.
###Code
def f(i, j):
print(i)
print(j)
return 10 * i + j
print(np.fromfunction(f, (4, 4), dtype=np.int64))
###Output
[[0 0 0 0]
[1 1 1 1]
[2 2 2 2]
[3 3 3 3]]
[[0 1 2 3]
[0 1 2 3]
[0 1 2 3]
[0 1 2 3]]
[[ 0 1 2 3]
[10 11 12 13]
[20 21 22 23]
[30 31 32 33]]
###Markdown
The transposed matrix.
###Code
print(b.T)
###Output
[[ 0. 2.]
[ 1. 3.]]
###Markdown
Joining matrices horizontally and vertically.
###Code
a = np.array([[0, 1], [2, 3]])
b = np.array([[4, 5, 6], [7, 8, 9]])
c = np.array([[4, 5], [6, 7], [8, 9]])
print(a)
print(b)
print(c)
print(np.hstack((a, b)))
print(np.vstack((a, c)))
###Output
[[0 1]
[2 3]
[4 5]
[6 7]
[8 9]]
###Markdown
The sum of all elements; column sums; row sums.
###Code
print(b.sum())
print(b.sum(axis=0))
print(b.sum(axis=1))
###Output
39
[11 13 15]
[15 24]
###Markdown
`prod`, `max`, `min` and so on work analogously.
###Code
print(b.max())
print(b.max(axis=0))
print(b.min(axis=1))
###Output
9
[7 8 9]
[4 7]
###Markdown
The trace is the sum of the diagonal elements.
###Code
np.trace(a)
###Output
_____no_output_____
###Markdown
Multidimensional arrays
###Code
X = np.arange(24).reshape(2, 3, 4)
print(X)
###Output
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
###Markdown
Summation (the other operations work analogously)
###Code
# sum only along axis 0, i.e. for fixed j and k we sum the elements with indices (*, j, k)
print(X.sum(axis=0))
# sum along two axes at once, i.e. for fixed i we sum the elements with indices (i, *, *)
print(X.sum(axis=(1, 2)))
###Output
[[12 14 16 18]
[20 22 24 26]
[28 30 32 34]]
[ 66 210]
###Markdown
Linear algebra
###Code
np.linalg.det(a)
###Output
_____no_output_____
###Markdown
The inverse matrix.
###Code
a1 = np.linalg.inv(a)
print(a1)
print(a @ a1)
print(a1 @ a)
###Output
[[ 1. 0.]
[ 0. 1.]]
[[ 1. 0.]
[ 0. 1.]]
###Markdown
Solving the linear system $au=v$.
###Code
v = np.array([0, 1], dtype=np.float64)
print(a1 @ v)
u = np.linalg.solve(a, v)
print(u)
###Output
[ 0.5 0. ]
###Markdown
Let's check.
###Code
print(a @ u - v)
###Output
[ 0. 0.]
###Markdown
Eigenvalues and eigenvectors: $a u_i = \lambda_i u_i$. `l` is a one-dimensional array of the eigenvalues $\lambda_i$; the columns of the matrix $u$ are the eigenvectors $u_i$.
###Code
l, u = np.linalg.eig(a)
print(l)
print(u)
###Output
[[-0.87192821 -0.27032301]
[ 0.48963374 -0.96276969]]
###Markdown
Let's check.
###Code
for i in range(2):
print(a @ u[:, i] - l[i] * u[:, i])
###Output
[ 0.00000000e+00 1.66533454e-16]
[ 0.00000000e+00 -4.44089210e-16]
###Markdown
The `diag` function builds a diagonal matrix from a one-dimensional array; applied to a square matrix, it returns a one-dimensional array of its diagonal elements.
###Code
L = np.diag(l)
print(L)
print(np.diag(L))
###Output
[[-0.56155281 0. ]
[ 0. 3.56155281]]
[-0.56155281 3.56155281]
###Markdown
All the equations $a u_i = \lambda_i u_i$ can be combined into a single matrix equation $a u = u \Lambda$, where $\Lambda$ is the diagonal matrix with the eigenvalues $\lambda_i$ on the diagonal.
###Code
print(a @ u - u @ L)
###Output
[[ 0.00000000e+00 0.00000000e+00]
[ 1.66533454e-16 -4.44089210e-16]]
###Markdown
Therefore $u^{-1} a u = \Lambda$.
###Code
print(np.linalg.inv(u) @ a @ u)
###Output
[[ -5.61552813e-01 2.77555756e-17]
[ -2.22044605e-16 3.56155281e+00]]
###Markdown
Now let's find the left eigenvectors $v_i a = \lambda_i v_i$ (the eigenvalues $\lambda_i$ are the same).
###Code
l, v = np.linalg.eig(a.T)
print(l)
print(v)
###Output
[-0.56155281 3.56155281]
[[-0.96276969 -0.48963374]
[ 0.27032301 -0.87192821]]
###Markdown
The eigenvectors are normalized to 1.
###Code
print(u.T @ u)
print(v.T @ v)
###Output
[[ 1. -0.23570226]
[-0.23570226 1. ]]
[[ 1. 0.23570226]
[ 0.23570226 1. ]]
###Markdown
Left and right eigenvectors corresponding to different eigenvalues are orthogonal, because $v_i a u_j = \lambda_i v_i u_j = \lambda_j v_i u_j$.
###Code
print(v.T @ u)
###Output
[[ 9.71825316e-01 0.00000000e+00]
[ -5.55111512e-17 9.71825316e-01]]
###Markdown
Integration
###Code
from scipy.integrate import quad, odeint
from scipy.special import erf
def f(x):
return np.exp(-x ** 2)
###Output
_____no_output_____
###Markdown
Adaptive numerical integration (the limit may be infinity). `err` is an estimate of the error.
###Code
res, err = quad(f, 0, np.inf)
print(np.sqrt(np.pi) / 2, res, err)
res, err = quad(f, 0, 1)
print(np.sqrt(np.pi) / 2 * erf(1), res, err)
###Output
0.746824132812 0.7468241328124271 8.291413475940725e-15
###Markdown
Saving to a file and reading from a file
###Code
x = np.arange(0, 25, 0.5).reshape((5, 10))
# Save the data x to the file example.txt, with two digits after the decimal point and ';' as the delimiter
np.savetxt('example.txt', x, fmt='%.2f', delimiter=';')
###Output
_____no_output_____
###Markdown
The resulting file looks like this
###Code
! cat example.txt
###Output
0.00;0.50;1.00;1.50;2.00;2.50;3.00;3.50;4.00;4.50
5.00;5.50;6.00;6.50;7.00;7.50;8.00;8.50;9.00;9.50
10.00;10.50;11.00;11.50;12.00;12.50;13.00;13.50;14.00;14.50
15.00;15.50;16.00;16.50;17.00;17.50;18.00;18.50;19.00;19.50
20.00;20.50;21.00;21.50;22.00;22.50;23.00;23.50;24.00;24.50
###Markdown
Now it can be read back
###Code
x = np.loadtxt('example.txt', delimiter=';')
print(x)
###Output
[[ 0. 0.5 1. 1.5 2. 2.5 3. 3.5 4. 4.5]
[ 5. 5.5 6. 6.5 7. 7.5 8. 8.5 9. 9.5]
[ 10. 10.5 11. 11.5 12. 12.5 13. 13.5 14. 14.5]
[ 15. 15.5 16. 16.5 17. 17.5 18. 18.5 19. 19.5]
[ 20. 20.5 21. 21.5 22. 22.5 23. 23.5 24. 24.5]]
###Markdown
The scipy library (the scipy.stats module)
We will only need the `scipy.stats` module.
Full documentation: http://docs.scipy.org/doc/scipy/reference/stats.html
###Code
import scipy.stats as sps
###Output
_____no_output_____
###Markdown
General principle: $X$ is some distribution with parameters `params`.
- `X.rvs(size=N, params)` — generate a sample of size $N$ (Random VariateS). Returns a `numpy.array`
- `X.cdf(x, params)` — value of the distribution function at the point $x$ (Cumulative Distribution Function)
- `X.logcdf(x, params)` — value of the logarithm of the distribution function at $x$
- `X.ppf(q, params)` — the $q$-quantile (Percent Point Function)
- `X.mean(params)` — the expectation
- `X.median(params)` — the median
- `X.var(params)` — the variance (Variance)
- `X.std(params)` — the standard deviation = square root of the variance (Standard Deviation)
In addition, for continuous distributions the following are defined:
- `X.pdf(x, params)` — value of the density at $x$ (Probability Density Function)
- `X.logpdf(x, params)` — value of the logarithm of the density at $x$
And for discrete ones:
- `X.pmf(k, params)` — value of the probability mass function at $k$ (Probability Mass Function)
- `X.logpmf(k, params)` — value of the logarithm of the probability mass function at $k$
The parameters can be the following: `loc` — location (shift) parameter; `scale` — scale parameter; and other parameters (for example, $n$ and $p$ for the binomial distribution).
As an example, let's generate a sample of size $N = 200$ from the distribution $\mathscr{N}(1, 9)$ and compute some statistics.
In terms of the functions described above, here $X$ = `sps.norm` and `params` = (`loc=1, scale=3`).
###Code
sample = sps.norm.rvs(size=200, loc=1, scale=3)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
print('Density:\t\t', sps.norm.pdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Distribution function:\t', sps.norm.cdf([-1, 0, 1, 2, 3], loc=1, scale=3))
print('Quantiles:', sps.norm.ppf([0.05, 0.1, 0.5, 0.9, 0.95], loc=1, scale=3))
###Output
Quantiles: [-3.93456088 -2.8446547 1. 4.8446547 5.93456088]
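###Markdown
The same methods also work on a "frozen" distribution object that binds the parameters once; this is a small convenience addition not covered in the original text.
###Code
X = sps.norm(loc=1, scale=3)  # frozen distribution: loc and scale no longer need to be repeated
print(X.mean(), X.std())
print(X.ppf([0.05, 0.5, 0.95]))
###Output
_____no_output_____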
###Markdown
Let's generate a sample of size $N = 200$ from the distribution $Bin(10, 0.6)$ and compute some statistics. In terms of the functions described above, here $X$ = `sps.binom` and `params` = (`n=10, p=0.6`).
###Code
sample = sps.binom.rvs(size=200, n=10, p=0.6)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
print('Probability mass function:\t', sps.binom.pmf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Distribution function:\t', sps.binom.cdf([-1, 0, 5, 5.5, 10], n=10, p=0.6))
print('Quantiles:', sps.binom.ppf([0.05, 0.1, 0.5, 0.9, 0.95], n=10, p=0.6))
###Output
Quantiles: [ 3. 4. 6. 8. 8.]
###Markdown
There is a separate class for the multivariate normal distribution. As an example, let's generate a sample of size $N=200$ from the distribution $\mathscr{N} \left( \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} \right)$.
###Code
sample = sps.multivariate_normal.rvs(mean=[1, 1], cov=[[2, 1], [1, 2]], size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean:', sample.mean(axis=0))
print('Sample covariance matrix:\n', np.cov(sample.T))
###Output
First 10 sample values:
[[-1.9861816 -0.94358461]
[ 1.93376109 0.34449948]
[ 1.76689 3.25707287]
[ 1.14967263 -0.71283847]
[ 1.44368489 1.27636574]
[ 1.48994732 2.03350446]
[ 2.02426618 1.21057156]
[ 1.67851671 2.30199687]
[ 1.90705893 2.1001483 ]
[ 2.96734234 2.58021913]]
Sample mean: [ 1.14018367 0.98307564]
Sample covariance matrix:
[[ 2.10650447 0.94076559]
[ 0.94076559 1.87049463]]
###Markdown
A little trick :)
###Code
sample = sps.norm.rvs(size=10, loc=np.arange(10), scale=0.1)
print(sample)
###Output
[-0.25874425 0.97813837 2.04639019 3.0187115 4.05480661 4.94792113
6.01970204 7.00142419 7.9675934 8.88900013]
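###Markdown
The same broadcasting also works for `scale`, and `size` may be a tuple to produce a multi-dimensional array of samples (a small addition, not in the original text).
###Code
# each column gets its own standard deviation; the result has shape (2, 3)
print(sps.norm.rvs(size=(2, 3), loc=0, scale=[1, 10, 100]))
###Output
_____no_output_____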
###Markdown
Sometimes you need to generate a sample from a distribution that is not available in `scipy.stats`. To do this, create a class that inherits from `rv_continuous` for continuous random variables or from `rv_discrete` for discrete random variables. An example can be found at http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html#scipy.stats.rv_continuous As an example, let's generate a sample from the distribution with density $f(x) = \frac{4}{15} x^3 I\{x \in [1, 2] = [a, b]\}$.
###Code
class cubic_gen(sps.rv_continuous):
def _pdf(self, x):
return 4 * x ** 3 / 15
cubic = cubic_gen(a=1, b=2, name='cubic')
sample = cubic.rvs(size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Sample variance: %.3f' % sample.var())
###Output
First 10 sample values:
[ 1.8838009 1.80617825 1.09789444 1.65771829 1.72582776 1.57311372
1.7174875 1.99153808 1.90110246 1.69306301]
Sample mean: 1.652
Sample variance: 0.064
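###Markdown
As a quick sanity check (an addition, not in the original text), the density of the custom distribution should integrate to 1 over its support: $\int_1^2 \frac{4}{15} x^3 \, dx = \frac{4}{15} \cdot \frac{2^4 - 1^4}{4} = 1$. We can confirm this numerically with `quad` from the integration section above.
###Code
res, err = quad(cubic.pdf, 1, 2)
print(res)
###Output
_____no_output_____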
###Markdown
If a discrete random variable takes only a small number of values, there is no need to create a new class as shown above; you can simply specify these values and their probabilities explicitly.
###Code
some_distribution = sps.rv_discrete(name='some_distribution', values=([1, 2, 3], [0.6, 0.1, 0.3]))
sample = some_distribution.rvs(size=200)
print('First 10 sample values:\n', sample[:10])
print('Sample mean: %.3f' % sample.mean())
print('Value frequencies in the sample:', (sample == 1).mean(), (sample == 2).mean(), (sample == 3).mean())
###Output
First 10 sample values:
[3 1 1 3 3 1 3 1 1 1]
Sample mean: 1.725
Value frequencies in the sample: 0.575 0.125 0.3
|
MySolutions/MIT_6S191_Part1_MNIST.ipynb | ###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# ยฉ MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.4)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=49eacd1f3644c9179d4fad50653e30d09c2bc79c1a16676218c4671a9614df16
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below: Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation = tf.keras.activations.relu),
tf.keras.layers.Dense(10, activation = tf.keras.activations.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model. ** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* โ This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* โ This defines how the model is updated based on the data it sees and its loss function.* *Metrics* โ Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
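###Markdown
One way to act on the TODO above (shown only as a sketch, not as the official lab solution) is to build a second copy of the network and compile it with a different optimizer, e.g. Adam with a smaller learning rate, then train both and compare accuracies.
###Code
# Hypothetical comparison model (assumed settings, not prescribed by the lab)
model_adam = build_fc_model()
model_adam.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
###Output
_____no_output_____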
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 15
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/15
938/938 [==============================] - 2s 2ms/step - loss: 0.3099 - accuracy: 0.9228
Epoch 2/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2956 - accuracy: 0.9235
Epoch 3/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2879 - accuracy: 0.9250
Epoch 4/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2825 - accuracy: 0.9261
Epoch 5/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2801 - accuracy: 0.9266
Epoch 6/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2783 - accuracy: 0.9275
Epoch 7/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2785 - accuracy: 0.9277
Epoch 8/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2785 - accuracy: 0.9278
Epoch 9/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2747 - accuracy: 0.9282
Epoch 10/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2751 - accuracy: 0.9284
Epoch 11/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2739 - accuracy: 0.9284
Epoch 12/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2835 - accuracy: 0.9278
Epoch 13/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2773 - accuracy: 0.9285
Epoch 14/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2772 - accuracy: 0.9284
Epoch 15/15
938/938 [==============================] - 2s 2ms/step - loss: 0.2718 - accuracy: 0.9290
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)# TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.5552 - accuracy: 0.9159
Test accuracy: 0.9158999919891357
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better... 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below: Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([tf.keras.layers.Conv2D(24, kernel_size = (3,3), activation=tf.keras.activations.relu),
tf.keras.layers.MaxPool2D(pool_size = (2,2)),
tf.keras.layers.Conv2D(36, kernel_size=(3,3), activation=tf.keras.activations.relu),
tf.keras.layers.MaxPool2D(pool_size=(2,2)),tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
tf.keras.layers.Dense(10, activation = tf.keras.activations.softmax)])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple 0
_________________________________________________________________
conv2d_3 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_2 (Flatten) multiple 0
_________________________________________________________________
dense_4 (Dense) multiple 115328
_________________________________________________________________
dense_5 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss= 'sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=64, epochs=5)
###Output
Epoch 1/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0259 - accuracy: 0.9917
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0215 - accuracy: 0.9934
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0179 - accuracy: 0.9945
Epoch 4/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0154 - accuracy: 0.9955
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0130 - accuracy: 0.9959
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0268 - accuracy: 0.9909
Test accuracy: 0.9908999800682068
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])  # index of the class with the highest predicted probability
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 96 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0 Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over training the model, which could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____ |
SIC_AI_Coding_Exercises/SIC_AI_Chapter_09_Coding_Exercises/ex_0801.ipynb | ###Markdown
Coding Exercise 0801 1. Keras Sequential API model:
###Code
# Install if necessary.
#!pip install keras
import pandas as pd
import numpy as np
import os
import warnings
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam, RMSprop, SGD
warnings.filterwarnings('ignore') # Turn the warnings off.
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.1. Read in the data and explore:
###Code
# Go to the directory where the data file is located.
# os.chdir(r'~~') # Please, replace the path with your own.
# Read.
df = pd.read_csv('data_boston.csv', header='infer',encoding = 'latin1')
X = df.drop(columns=['PRICE'])
y = df['PRICE']
# View.
df.head(5)
# Scale the X data.
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
# Split the data into training and testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
n_vars = X_train.shape[1]
###Output
_____no_output_____
###Markdown
1.2. Define a Sequential API model:
###Code
# Add layers on a Sequential object.
my_model1 = Sequential()
my_model1.add(Dense(input_dim = n_vars, units = 1, activation="linear")) # Add an output layer for linear regression.
# Summary of the model.
my_model1.summary()
###Output
_____no_output_____
###Markdown
1.3. Define the hyperparameters and optimizer:
###Code
# Hyperparameters.
n_epochs = 2000
batch_size = 10
learn_rate = 0.002
# Define the optimizer and then compile.
my_optimizer=Adam(lr=learn_rate)
my_model1.compile(loss = "mae", optimizer = my_optimizer, metrics=["mse"])
###Output
_____no_output_____
###Markdown
1.4. Train the model and visualize the history:
###Code
# Train the model.
# verbose = 0 means no output. verbose = 1 to view the epochs.
my_summary = my_model1.fit(X_train, y_train, epochs=n_epochs, batch_size = batch_size, validation_split = 0.2, verbose = 0)
# View the keys.
my_summary.history.keys()
# Visualize the training history.
n_skip = 100 # Skip the first few steps.
plt.plot(my_summary.history['mse'][n_skip:], c="b")
plt.plot(my_summary.history['val_mse'][n_skip:], c="g")
plt.title('Training History')
plt.ylabel('MSE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
1.5. Testing:
###Code
# Predict and test using a formula.
y_pred = my_model1.predict(X_test)[:,0]
RMSE = np.sqrt(np.mean((y_test-y_pred)**2))
np.round(RMSE,3)
# Use the evaluate() method.
MSE = my_model1.evaluate(X_test, y_test, verbose=0)[1] # Returns the 0 = loss value and 1 = metrics value.
RMSE = np.sqrt(MSE)
print("Test RMSE : {}".format(np.round(RMSE,3)))
###Output
_____no_output_____
###Markdown
2. Keras Functional API model:
###Code
from keras.models import Model
from keras.layers import Input, Dense
###Output
_____no_output_____
###Markdown
2.1. Define a Functional API model:
###Code
my_input = Input(shape=(n_vars,)) # Input layer.
my_output = Dense(units=1,activation='linear')(my_input) # Output layer.
my_model2 = Model(inputs=my_input,outputs=my_output) # The model.
# Summary of the model.
my_model2.summary()
# Define the optimizer and then compile.
my_optimizer=Adam(lr=learn_rate)
my_model2.compile(loss = "mae", optimizer = my_optimizer, metrics=["mse"]) # Loss = MAE (L1) and Metrics = MSE (L2).
###Output
_____no_output_____
###Markdown
2.2. Train the model and visualize the history:
###Code
# Train the model.
my_summary = my_model2.fit(X_train, y_train, epochs=n_epochs, batch_size = batch_size, validation_split = 0.2, verbose = 0)
# Visualize the training history.
n_skip = 100 # Skip the first few steps.
plt.plot(my_summary.history['mse'][n_skip:], c="b")
plt.plot(my_summary.history['val_mse'][n_skip:], c="g")
plt.title('Training History')
plt.ylabel('MSE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
# Use the evaluate() method.
MSE = my_model2.evaluate(X_test, y_test, verbose=0)[1] # Returns the 0 = loss value and 1 = metrics value.
RMSE = np.sqrt(MSE)
print("Test RMSE : {}".format(np.round(RMSE,3)))
###Output
_____no_output_____
###Markdown
Coding Exercise 0801 1. Keras Sequential API model:
###Code
# Install if necessary.
#!pip install keras
import pandas as pd
import numpy as np
import os
import warnings
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam, RMSprop, SGD
warnings.filterwarnings('ignore') # Turn the warnings off.
%matplotlib inline
###Output
_____no_output_____
###Markdown
1.1. Read in the data and explore:
###Code
!wget --no-clobber https://raw.githubusercontent.com/stefannae/SIC-Artificial-Intelligence/main/SIC_AI_Coding_Exercises/SIC_AI_Chapter_09_Coding_Exercises/data_boston.csv
# Read.
df = pd.read_csv('data_boston.csv', header='infer',encoding = 'latin1')
X = df.drop(columns=['PRICE'])
y = df['PRICE']
# View.
df.head(5)
# Scale the X data.
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
# Split the data into training and testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
n_vars = X_train.shape[1]
###Output
_____no_output_____
###Markdown
1.2. Define a Sequential API model:
###Code
# Add layers on a Sequential object.
my_model1 = Sequential()
my_model1.add(Dense(input_dim = n_vars, units = 1, activation="linear")) # Add an output layer for linear regression.
# Summary of the model.
my_model1.summary()
###Output
_____no_output_____
###Markdown
1.3. Define the hyperparameters and optimizer:
###Code
# Hyperparameters.
n_epochs = 2000
batch_size = 10
learn_rate = 0.002
# Define the optimizer and then compile.
my_optimizer=Adam(lr=learn_rate)
my_model1.compile(loss = "mae", optimizer = my_optimizer, metrics=["mse"])
###Output
_____no_output_____
###Markdown
1.4. Train the model and visualize the history:
###Code
# Train the model.
# verbose = 0 means no output. verbose = 1 to view the epochs.
my_summary = my_model1.fit(X_train, y_train, epochs=n_epochs, batch_size = batch_size, validation_split = 0.2, verbose = 0)
# View the keys.
my_summary.history.keys()
# Visualize the training history.
n_skip = 100 # Skip the first few steps.
plt.plot(my_summary.history['mse'][n_skip:], c="b")
plt.plot(my_summary.history['val_mse'][n_skip:], c="g")
plt.title('Training History')
plt.ylabel('MSE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
1.5. Testing:
###Code
# Predict and test using a formula.
y_pred = my_model1.predict(X_test)[:,0]
RMSE = np.sqrt(np.mean((y_test-y_pred)**2))
np.round(RMSE,3)
# Use the evaluate() method.
MSE = my_model1.evaluate(X_test, y_test, verbose=0)[1] # Returns the 0 = loss value and 1 = metrics value.
RMSE = np.sqrt(MSE)
print("Test RMSE : {}".format(np.round(RMSE,3)))
###Output
_____no_output_____
###Markdown
2. Keras Functional API model:
###Code
from keras.models import Model
from keras.layers import Input, Dense
###Output
_____no_output_____
###Markdown
2.1. Define a Functional API model:
###Code
my_input = Input(shape=(n_vars,)) # Input layer.
my_output = Dense(units=1,activation='linear')(my_input) # Output layer.
my_model2 = Model(inputs=my_input,outputs=my_output) # The model.
# Summary of the model.
my_model2.summary()
# Define the optimizer and then compile.
my_optimizer=Adam(lr=learn_rate)
my_model2.compile(loss = "mae", optimizer = my_optimizer, metrics=["mse"]) # Loss = MAE (L1) and Metrics = MSE (L2).
###Output
_____no_output_____
###Markdown
2.2. Train the model and visualize the history:
###Code
# Train the model.
my_summary = my_model2.fit(X_train, y_train, epochs=n_epochs, batch_size = batch_size, validation_split = 0.2, verbose = 0)
# Visualize the training history.
n_skip = 100 # Skip the first few steps.
plt.plot(my_summary.history['mse'][n_skip:], c="b")
plt.plot(my_summary.history['val_mse'][n_skip:], c="g")
plt.title('Training History')
plt.ylabel('MSE')
plt.xlabel('Epoch')
plt.legend(['Train', 'Validation'], loc='upper right')
plt.show()
# Use the evaluate() method.
MSE = my_model2.evaluate(X_test, y_test, verbose=0)[1] # Returns the 0 = loss value and 1 = metrics value.
RMSE = np.sqrt(MSE)
print("Test RMSE : {}".format(np.round(RMSE,3)))
###Output
_____no_output_____ |
ecdsa_ethereum_playground.ipynb | ###Markdown
Ethereum ECDSA playground
###Code
from eth_keys import keys
eth_priv_key = keys.PrivateKey(b'\x01' * 32)
eth_priv_key
eth_pub_key = eth_priv_key.public_key
eth_pub_key
eth_pub_key.to_checksum_address() # Ethereum address
signature = eth_priv_key.sign_msg(b'message')
signature
signature.verify_msg(b'message', eth_pub_key)
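# A possible extension (API as documented in the eth-keys README, not part of the original notebook):
# recover the public key directly from the message and its signature.
recovered_pub_key = signature.recover_public_key_from_msg(b'message')
recovered_pub_key == eth_pub_key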
###Output
_____no_output_____ |
macros_ffn/01_vis_save_knossos.ipynb | ###Markdown
Progress: finished
Speed: 36.108 MB or MPix /s, time 0.11077737808227539
Progress: 25.00% knossos_cuber_project_mag1_mag1x0y0z0.seg.sz Cube does not exist, cube with 0 only assigned
Progress: 50.00% knossos_cuber_project_mag1_mag1x1y0z0.seg.sz Cube does not exist, cube with 0 only assigned
Progress: 75.00% knossos_cuber_project_mag1_mag1x0y1z0.seg.sz Cube does not exist, cube with 0 only assigned
Progress: 100.00% knossos_cuber_project_mag1_mag1x1y1z0.seg.sz Cube does not exist, cube with 0 only assigned
applying mergelist now
Correct shape
###Code
print(cube.shape, cube.dtype)
print(anno.shape, anno.dtype)
anno[anno<delete_anno_low] = 0
anno[anno>delete_anno_high] = 0
ids = np.unique(anno[...],return_counts=1)
print (ids)
viewer = neuroglancer.Viewer()
with viewer.txn() as s:
s.layers['image'] = neuroglancer.ImageLayer(
source=neuroglancer.LocalVolume(data=cube, volume_type='image'))
s.layers['labels'] = neuroglancer.SegmentationLayer(
source=neuroglancer.LocalVolume(data=anno, volume_type='segmentation',mesh_options={'max_quadrics_error':100}),segments=ids[0])
print(viewer.get_viewer_url())
del viewer
#this will create the training data for FFN from knossos files
#change here for the training data filename
#DONT TOUCH
labels = anno.astype('int64')
print ("Working Dir")
print (os.getcwd())
print ('Cube Properties!')
print (cube.dtype)
print (cube.shape)
print ('Mean : '+str(cube.mean()))
print ('Std : '+str(cube.std()))
print ('Labels Properties!')
print (labels.dtype)
print (labels.shape)
print ('Ids Properties!')
ids = np.unique(labels,return_counts=1)
print (ids)
h5file = h5py.File(training_data_file+'.h5', 'w')
h5file.create_dataset('image',data=cube)
h5file.create_dataset('labels',data=labels)
h5file.close()
print ("Finished!! Goodbye!!")
#DONT TOUCH
###Output
Working Dir
/media/Trantor2/Public/ffn_test_goodpeople
Cube Properties!
uint8
(128, 128, 128)
Mean : 126.22100448608398
Std : 41.172601119937475
Labels Properties!
int64
(128, 128, 128)
Ids Properties!
(array([ 0, 114, 121, 136, 185, 213, 255, 347, 437, 451, 478, 592, 636,
640, 644, 660, 737]), array([2073568, 628, 2472, 1615, 895, 1060, 2155,
1294, 2806, 3381, 318, 1451, 141, 1009,
641, 781, 2937]))
Finished!! Goodbye!!
|
ExportModel.ipynb | ###Markdown
Export Pegasus Model to pb Format Place this file inside pegasus [folder](https://github.com/google-research/pegasus)
###Code
import itertools
import os
import time
from absl import logging
from pegasus.data import infeed
from pegasus.params import all_params # pylint: disable=unused-import
from pegasus.params import estimator_utils
from pegasus.params import registry
import tensorflow as tf
from pegasus.eval import text_eval
from pegasus.ops import public_parsing_ops
import pandas as pd
from random import choice
from tensorflow.python.estimator.export import export
tf.enable_eager_execution()
# import tensorflow_transform as tft
data_name = 'newsroom'
import tensorflow_transform as tft
master = ""
model_dir = "./ckpt/pegasus_ckpt/%s"%data_name
use_tpu = False
iterations_per_loop = 1000
num_shards = 1
param_overrides = "vocab_filename=ckpt/pegasus_ckpt/c4.unigram.newline.10pct.96000.model,batch_size=1,beam_size=5,beam_alpha=0.6"
eval_dir = os.path.dirname(model_dir)
checkpoint_path = model_dir
checkpoint_path = tf.train.latest_checkpoint(checkpoint_path )
params = registry.get_params('%s_transformer'%data_name)(param_overrides)
pattern = params.dev_pattern
input_fn = infeed.get_input_fn(params.parser, pattern,
tf.estimator.ModeKeys.PREDICT)
parser, shapes = params.parser(mode=tf.estimator.ModeKeys.PREDICT)
RAW_DATA_FEATURE_SPEC = dict([("inputs", tf.io.FixedLenFeature(shapes['inputs'], tf.int64)),
('targets', tf.io.FixedLenFeature(shapes['targets'], tf.string))])
raw_feature_spec = RAW_DATA_FEATURE_SPEC.copy()
raw_feature_spec.pop('targets')
# raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
# raw_feature_spec, default_batch_size=0)
def serving_input_fn():
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=1)
serving_input_receiver = raw_input_fn()
raw_features = serving_input_receiver.features
return tf.estimator.export.ServingInputReceiver(
raw_features, serving_input_receiver.receiver_tensors)
print(tf.executing_eagerly())
estimator = estimator_utils.create_estimator(master,
model_dir,
use_tpu,
iterations_per_loop,
num_shards, params, include_features_in_predictions=False)
estimator.export_saved_model(
"model/", serving_input_fn
)
###Output
_____no_output_____ |
arize/examples/tutorials/Use_Cases/LTV_Use_case.ipynb | ###Markdown
Getting Started with the Arize Platform - Customer Lifetime Value in Telecommunication Industry

**You are part of a team in a telecommunication company that monitors and maintains a customer lifetime value (LTV) regression model, which predicts the LTV for each customer.** The business objective of this regression model is to accurately predict customer lifetime value in order to improve customer segmentation and profiling to customize offers and target customers based on their potential value and recognize best customers.

You understand that flaws in your model performance will have a huge negative impact on your company and with your LTV model in production you don't have any effective tool at your disposal to monitor the performance of your models, identify any issues and troubleshoot costly model degradations. Therefore, you turn to Arize to help you understand what went wrong in your model and how you can improve it.

**In this walkthrough, we are going to investigate your production LTV model. We will validate degradation in model performance, take a deep dive to investigate the root causes of those inaccurate predictions, and set up proactive monitors to mitigate the impact of future degradations.**

You will learn how to:
1. Get training and production data into the Arize platform
2. Setup performance dashboards and monitors to look at prediction performance
3. Understand where the model is underperforming
4. Discover the root cause of issues
5. Set up pro-active monitoring to mitigate the impact of such degradations in the future

The production data contains 1 month of data where 2 main issues exist. You will work on identifying these issues over the course of this exercise.
1. A data source has introduced changes in the distribution of particular features
2. The model is inaccurate during some time period due to particular features

Step 0. Setup and Getting the Data

The first step is to load our preexisting dataset which includes training and production environments for our LTV model. Using a preexisting dataset illustrates how simple it is to get started with the Arize platform. Install Dependencies and Import Libraries
###Code
!pip install arize -q
!pip install tables --upgrade -q
!pip install -q arize shap
import datetime, uuid, requests, tempfile
from datetime import timedelta
import numpy as np
import pandas as pd
from arize.utils.types import ModelTypes, Environments
from arize.pandas.logger import Client, Schema
###Output
[K |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 23.6 MB 121 kB/s
[K |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 4.3 MB 28.4 MB/s
[K |โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 356 kB 26.7 MB/s
[?25h Building wheel for shap (setup.py) ... [?25l[?25hdone
###Markdown
**Download the Data**

In this walkthrough, we'll be sending real historical data. Note that while feature names and values are made explicit in this dataset, you can achieve the same level of ML Observability using obfuscated features.

| Feature | Type | Description |
|:-|:-|:-|
| `City` | `str` | city in California where the customer resides |
| `Gender` | `str` | customer's gender |
| `Partner` | `str` | flag indicating if the customer has a partner |
| `Dependents` | `str` | flag indicating if the customer has dependents |
| `Phone Service` | `str` | flag indicating if the customer has phone service |
| `Internet Service` | `str` | flag indicating if the customer has internet service |
| `Streaming TV` | `str` | flag indicating if the customer streams TV |
| `Streaming Movies` | `str` | flag indicating if the customer streams movies |
| `Churn Value` | `int (0 or 1)` | flag indicating if the customer churned |

Inspect the Data

The data represents a regression model trained to predict LTV for a customer. The dataset contains one month of data and the performance will be evaluated by comparing:
* **`Predicted LTV`**: Predicted value of LTV for each customer
* **`Actual LTV`**: Actual value of LTV for each customer
###Code
# Preparing dataset for this tutorial
train_df = pd.read_csv('https://storage.googleapis.com/arize-assets/fixtures/LTV%20Use-Case/LTV_train.csv')
test_df = pd.read_csv('https://storage.googleapis.com/arize-assets/fixtures/LTV%20Use-Case/LTV_test.csv')
print('✅ Dependencies installed and data successfully downloaded!')
###Output
✅ Dependencies installed and data successfully downloaded!
###Markdown
Inspect and Prepare the Data
###Code
#Preparing Training Data
train_df["prediction_id"] = [str(uuid.uuid4()) for _ in range(len(train_df))]
train_df = train_df.drop(columns=['Unnamed: 0'])
train_df
#Preparing production data
def prod_ID_time(df, start, end):
max_d = df['day'].max()
out_df = pd.DataFrame()
dts = pd.date_range(start, end).to_pydatetime().tolist()
for dt in dts:
day_df = df.loc[df["day"] == (dt.day % max_d)].copy()
day_df["prediction_ts"] = int(dt.strftime('%s'))
out_df = pd.concat([out_df, day_df], ignore_index=True)
out_df["prediction_id"] = [str(uuid.uuid4()) for _ in range(out_df.shape[0])]
return out_df.drop(columns = "day")
today= datetime.date.today()
END_DATE = (today).strftime('%Y-%m-%d')
START_DATE = (today - timedelta(31)).strftime('%Y-%m-%d')
test_df = prod_ID_time(test_df, START_DATE, END_DATE)
test_df = test_df.drop(columns=['Unnamed: 0'])
test_df
###Output
_____no_output_____
###Markdown
Step 1. Sending Data into Arize ๐ซNow that we have our dataset imported, we are ready to integrate into Arize. We do this by logging (sending) important data we want to analyze to the platform. There, the data will be easily visualized and troubleshooting workflows will help us find the source of our problem.For our model, we are going to log:* feature data* predictions* actuals Import and Setup Arize ClientThe first step is to setup our Arize client. After that we will log the data.First, copy the Arize `API_KEY` and `ORG_KEY` from your admin page linked below! Copy those over to the set-up section. We will also be setting up some metadata to use across all logging.[](https://app.arize.com/admin) 
###Code
ORGANIZATION_KEY = "ORGANIZATION_KEY"
API_KEY = "API_KEY"
arize_client = Client(organization_key=ORGANIZATION_KEY, api_key=API_KEY)
# Saving model metadata for passing in later
model_id = "LTV-use-case-tutorial"
model_version = "v1.0"
print("Step 1 โ
: Import and Setup Arize Client Done! Now we can start using Arize!")
###Output
Step 1 ✅: Import and Setup Arize Client Done! Now we can start using Arize!
###Markdown
Log Training & Production Data to Arize Now that our Arize client is set up, let's go ahead and log all of our data to the platform. For more details on how **`arize.pandas.logger`** works, visit our documentation page below.[](https://docs.arize.com/arize/sdks-and-integrations/python-sdk/arize.pandas)Key parameters:* **prediction_label_column_name**: tells Arize which column contains the predictions* **actual_label_column_name**: tells Arize which column contains the actual results from field data. We will use [ModelTypes.NUMERIC](https://docs.arize.com/arize/concepts-and-terminology/model-types) to perform this analysis. 3.1: Log the training data for your model to Arize!
###Code
# Define a Schema() for Arize to pick up the data from the correct column for logging
train_schema = Schema(
prediction_id_column_name="prediction_id",
prediction_label_column_name="Predicted LTV",
actual_label_column_name="Actual LTV",
feature_column_names=train_df.columns.drop(
["prediction_id", "Predicted LTV", "Actual LTV"]
),
)
train_res = arize_client.log(
dataframe=train_df,
model_id=model_id,
model_version=model_version,
model_type=ModelTypes.NUMERIC,
environment=Environments.TRAINING,
schema=train_schema,
)
if train_res.status_code != 200:
print(f"future failed with response code {train_res.status_code}, {train_res.text}")
else:
print(f"future completed with response code {train_res.status_code}")
###Output
future completed with response code 200
###Markdown
3.3: Log the production dataNote: We will be sending our test data to emulate sending production data.
###Code
# Logging production
all_cols = test_df.columns
feature_cols = all_cols.drop(["prediction_id", "prediction_ts", "Predicted LTV", "Actual LTV"]
)
test_schema = Schema(
prediction_id_column_name="prediction_id",
timestamp_column_name="prediction_ts",
prediction_label_column_name="Predicted LTV",
actual_label_column_name="Actual LTV",
feature_column_names=feature_cols)
test_res = arize_client.log(
dataframe=test_df,
model_id=model_id,
model_version=model_version,
model_type=ModelTypes.NUMERIC,
environment=Environments.PRODUCTION,
schema=test_schema,
)
if test_res.status_code != 200:
print(f"future failed with response code {test_res.status_code}, {test_res.text}")
else:
print(f"future completed with response code {test_res.status_code}")
###Output
future completed with response code 200
|
03_net.ipynb | ###Markdown
Net
###Code
import tensorflow as tf
%pylab inline
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train / 255
x_test = x_test / 255
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(28*28, activation='sigmoid'),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(
optimizer=tf.keras.optimizers.SGD(lr=0.01),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=['accuracy']
)
history = model.fit(x_train, y_train, epochs=10)
###Output
Train on 60000 samples
Epoch 1/10
60000/60000 [==============================] - 5s 76us/sample - loss: 1.2824 - accuracy: 0.7118
Epoch 2/10
60000/60000 [==============================] - 5s 81us/sample - loss: 0.6083 - accuracy: 0.8557
Epoch 3/10
60000/60000 [==============================] - 5s 81us/sample - loss: 0.4741 - accuracy: 0.8762
Epoch 4/10
60000/60000 [==============================] - 5s 81us/sample - loss: 0.4186 - accuracy: 0.8864
Epoch 5/10
60000/60000 [==============================] - 5s 81us/sample - loss: 0.3876 - accuracy: 0.8923
Epoch 6/10
60000/60000 [==============================] - 5s 82us/sample - loss: 0.3678 - accuracy: 0.8968
Epoch 7/10
60000/60000 [==============================] - 5s 82us/sample - loss: 0.3537 - accuracy: 0.8989
Epoch 8/10
60000/60000 [==============================] - 5s 82us/sample - loss: 0.3430 - accuracy: 0.9012
Epoch 9/10
60000/60000 [==============================] - 5s 85us/sample - loss: 0.3345 - accuracy: 0.9047
Epoch 10/10
60000/60000 [==============================] - 5s 81us/sample - loss: 0.3274 - accuracy: 0.9058
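###Markdown
The test split is loaded above but never used; a minimal follow-up sketch (not part of the original notebook) to check generalization on it:
###Code
# evaluate the trained model on the held-out MNIST test set
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=0)
print(f'test loss: {test_loss:.4f}, test accuracy: {test_acc:.4f}')
###Output
_____no_output_____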
|
TextClassification/TextClassification.ipynb | ###Markdown
Text Classification(This notebook is created by [liyinnbw](https://github.com/liyinnbw/ML/tree/master/TextClassification) under the MIT license)Problem Definition:* Classify english news titles into one of the given topics.* Assuming each news is associated with one and only one of the topics. Import Training & Testing DataInstead of using provided dataset, you can also use your own dataset as long as the data contains a text column and a label column.
###Code
import pandas as pd
labelMeanings=['Ratings downgrade','Sanctions','Growth into new markets','New product coverage','Others']
col = ['title', 'category']
df_train = pd.read_csv('https://raw.githubusercontent.com/liyinnbw/ML/master/NewsClassification/Data/train.csv')[col]
df_test = pd.read_csv('https://raw.githubusercontent.com/liyinnbw/ML/master/NewsClassification/Data/test.csv')[col]
X_train = df_train.title
y_train = df_train.category
X_test = df_test.title
y_test = df_test.category
print('train shape:', X_train.shape)
print('test shape:', X_test.shape)
df_train.head()
###Output
train shape: (6027,)
test shape: (3826,)
###Markdown
Data Preprocessing & Feature Extraction
* Replace numbers by a common token
* Keep only letters
* Stemming (remove word tense)
* tf-idf feature extraction

The same preprocessing should be applied to the test data. Wrap the preprocessing inside a custom transformer which can be used inside a training pipeline.
###Code
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from nltk.stem.snowball import SnowballStemmer
class CustomPreprocessor(BaseEstimator,TransformerMixin):
def __init__(self):
self.tfidf = TfidfVectorizer(
sublinear_tf=True,
min_df=1,
norm='l2',
strip_accents='ascii',
analyzer='word',
ngram_range=(1, 2),
stop_words= [
'i', 'me', 'my', 'myself', 'we', 'our', 'our',
'ourselv', 'you', 'your', 'youv', 'youll', 'youd',
'your', 'your', 'yourself', 'yourselv', 'he', 'him',
'his', 'himself', 'she', 'shes', 'her', 'her',
'herself', 'it', 'it', 'it', 'itself', 'they', 'them',
'their', 'their', 'themselv', 'what', 'which', 'who',
'whom', 'this', 'that', 'thatll', 'these', 'those',
'am', 'is', 'are', 'was', 'were', 'be', 'been', 'be',
'have', 'has', 'had', 'have', 'do', 'doe', 'did',
'do', 'a', 'an', 'the', 'and', 'but', 'if', 'or',
'becaus', 'as', 'until', 'while', 'of', 'at', 'by',
'for', 'with', 'about', 'against', 'between', 'into',
'through', 'dure', 'befor', 'after', 'abov', 'below',
'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off',
'over', 'under', 'again', 'further', 'then', 'onc',
'here', 'there', 'when', 'where', 'whi', 'how', 'all',
'ani', 'both', 'each', 'few', 'more', 'most', 'other',
'some', 'such', 'no', 'nor', 'not', 'onli', 'own',
'same', 'so', 'than', 'too', 'veri', 's', 't', 'can',
'will', 'just', 'don', 'dont', 'should', 'shouldv',
'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain',
'aren', 'arent', 'couldn', 'couldnt', 'didn', 'didnt',
'doesn', 'doesnt', 'hadn', 'hadnt', 'hasn', 'hasnt',
'haven', 'havent', 'isn', 'isnt', 'ma', 'mightn',
'mightnt', 'mustn', 'mustnt', 'needn', 'neednt',
'shan', 'shant', 'shouldn', 'shouldnt', 'wasn',
'wasnt', 'weren', 'werent', 'won', 'wont', 'wouldn',
'wouldnt', 'numtoken', 'again'
]
)
def clean(self, X):
X_processed = X
# replace numbers
X_processed = X_processed.str.replace('\d*\.\d+|\d+', ' NUMTOKEN ', regex=True)
# remove '
X_processed = X_processed.str.replace("'", '', regex=False)
# keep only letters
X_processed = X_processed.str.replace('[^A-Za-z]+', ' ', regex=True)
# stemming
st = SnowballStemmer("english")
X_processed = X_processed.apply(lambda row: row.split(" "))
X_processed = X_processed.apply(lambda row: [st.stem(word) for word in row])
X_processed = X_processed.apply(lambda row: " ".join(row))
return X_processed
def fit(self, X, y=None):
# clean data
X_processed = self.clean(X)
# train tf-idf model
        self.tfidf.fit(X_processed)  # fit on the cleaned text, matching what transform() uses
return self
def transform(self, X):
# clean data
X_processed = self.clean(X)
# transform sentence to numerical vector using tf-idf
X_processed = self.tfidf.transform(X_processed)
return X_processed
def fit_transform(self, X, y=None, **fit_params):
self.fit(X,y)
return self.transform(X)
prep = CustomPreprocessor()
x_train = prep.fit_transform(X_train)
x_test = prep.transform(X_test)
print('train shape:', x_train.shape)
print('test shape:', x_test.shape)
###Output
train shape: (6027, 22822)
test shape: (3826, 22822)
###Markdown
TrainingUsing training data only, cross-validated, compared across different model choices.
###Code
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
models = [
MultinomialNB(alpha=1.0, fit_prior=True),
DecisionTreeClassifier(criterion='gini', max_depth=3, min_samples_split=0.1, min_samples_leaf=1, min_impurity_decrease=0, class_weight=None, random_state=27),
LinearSVC(random_state=27, max_iter=1000),
SGDClassifier(loss='hinge', penalty='l2', alpha=1e-3, random_state=27, max_iter=1000),
LogisticRegression(solver='lbfgs', multi_class='ovr', random_state=27, max_iter=1000),
RandomForestClassifier(criterion='gini', max_depth=3, min_samples_split=0.1, min_samples_leaf=1, min_impurity_decrease=0, class_weight=None, n_estimators=100, random_state=27),
GradientBoostingClassifier(criterion='friedman_mse', max_depth=3, min_samples_split=0.1, min_samples_leaf=1, min_impurity_decrease=0, n_estimators=100, random_state=27)
]
models.append(BaggingClassifier(models[0], n_estimators=100, random_state=27))
models.append(AdaBoostClassifier(models[0], algorithm="SAMME.R", n_estimators=100, random_state=27))
CV = 5
cv_df = pd.DataFrame(index=range(CV * len(models)))
entries = []
for model in models:
model_name = model.__class__.__name__
accuracies = cross_val_score(model, x_train, y_train, scoring='accuracy', cv=CV)
for fold_idx, accuracy in enumerate(accuracies):
entries.append((model_name, fold_idx, accuracy))
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
###Output
_____no_output_____
###Markdown
Visualize Model Performances
###Code
import seaborn as sns
chart = sns.boxplot(x='model_name', y='accuracy', data=cv_df)
chart.set_xticklabels(chart.get_xticklabels(), rotation=45)
chart = sns.stripplot(x='model_name', y='accuracy', data=cv_df,
size=8, jitter=True, edgecolor="gray", linewidth=2)
chart.set_xticklabels(chart.get_xticklabels(), rotation=45)
###Output
_____no_output_____
###Markdown
Testing
###Code
from sklearn.metrics import confusion_matrix
from sklearn import metrics
for model in models:
model_name = model.__class__.__name__
model.fit(x_train, y_train)
y_pred = model.predict(x_test)
conf_mat = confusion_matrix(y_test, y_pred)
print(model_name)
print(conf_mat)
# report = metrics.classification_report(y_test, y_pred)
# print(report)
print("accuracy: {:0.5f}".format(metrics.accuracy_score(y_test, y_pred)))
print("f2: {:0.5f}".format(metrics.fbeta_score(y_pred, y_test, beta=2, average="macro")))
# import numpy as np
# labls = np.arange(5).tolist()
# fig = plt.figure()
# ax = fig.add_subplot(111)
# cax = ax.matshow(conf_mat, cmap=plt.cm.Blues, vmin=0)
# fig.colorbar(cax)
# ax.set_xticklabels([''] + labls)
# ax.set_yticklabels([''] + labls)
# plt.xlabel('Predicted')
# plt.ylabel('Expected')
# plt.show()
###Output
MultinomialNB
[[ 465 0 3 0 232]
[ 0 48 0 0 203]
[ 0 0 666 0 61]
[ 0 0 26 175 72]
[ 2 0 96 6 1771]]
accuracy: 0.81678
f2: 0.79381
DecisionTreeClassifier
[[ 198 0 0 0 502]
[ 0 0 0 0 251]
[ 0 0 323 0 404]
[ 0 0 0 171 102]
[ 0 0 3 0 1872]]
accuracy: 0.67015
f2: 0.60044
LinearSVC
[[ 627 0 12 0 61]
[ 0 215 0 0 36]
[ 2 0 678 6 41]
[ 0 0 4 260 9]
[ 85 48 210 45 1487]]
accuracy: 0.85389
f2: 0.84562
SGDClassifier
[[ 603 0 0 0 97]
[ 0 194 0 0 57]
[ 0 0 694 6 27]
[ 0 0 5 258 10]
[ 58 21 104 11 1681]]
accuracy: 0.89650
f2: 0.89857
LogisticRegression
[[ 602 0 6 0 92]
[ 6 178 0 0 67]
[ 0 0 704 0 23]
[ 0 0 4 254 15]
[ 56 12 132 12 1663]]
accuracy: 0.88892
f2: 0.89520
RandomForestClassifier
[[ 0 0 0 0 700]
[ 0 0 0 0 251]
[ 0 0 153 0 574]
[ 0 0 0 0 273]
[ 0 0 3 0 1872]]
accuracy: 0.52927
f2: 0.22633
GradientBoostingClassifier
[[ 570 6 0 0 124]
[ 0 225 0 0 26]
[ 0 0 710 0 17]
[ 0 0 3 267 3]
[ 58 34 74 13 1696]]
accuracy: 0.90643
f2: 0.90559
BaggingClassifier
[[ 465 0 3 0 232]
[ 0 48 0 0 203]
[ 0 0 678 0 49]
[ 0 0 26 174 73]
[ 2 0 100 6 1767]]
accuracy: 0.81861
f2: 0.79426
AdaBoostClassifier
[[ 0 0 0 0 700]
[ 0 0 0 0 251]
[ 0 0 240 0 487]
[ 0 0 0 0 273]
[ 0 0 3 0 1872]]
accuracy: 0.55201
f2: 0.25677
###Markdown
Problem 1: Class Imbalance
There is an uneven distribution of labels in the training data, known as the "class imbalance" problem. It can cause the trained model to sacrifice precision and recall on under-represented classes in favour of improving precision and recall on over-represented classes (which is observed in the testing results above). This problem is common in the real world, where not all classes are observed evenly. You could:
1. Over-sample under-represented classes to match the class that occurs most frequently (many repeated samples can adversely affect the model's decisions).
2. Under-sample the over-represented classes to match the class that occurs least frequently (wastes precious data).
3. Synthesize samples for under-represented classes to match the class that occurs most frequently (designing the synthesis algorithm is difficult).
4. Change the way you group classes (if allowed to).

A sketch of option 1 follows the class-distribution plot below.
###Code
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(8,6))
df_train.groupby('category').title.count().plot.bar(ylim=0)
plt.show()
###Output
_____no_output_____
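###Markdown
Below is a minimal sketch (not part of the original analysis) of option 1, random oversampling, assuming the `imbalanced-learn` package is installed. `x_train` and `y_train` are the tf-idf features and labels prepared earlier.
###Code
# Hedged sketch: randomly duplicate minority-class rows until all classes match the majority count
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=27)
x_train_balanced, y_train_balanced = ros.fit_resample(x_train, y_train)
print(pd.Series(y_train_balanced).value_counts())
###Output
_____no_output_____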
###Markdown
Problem 2: Wrong Training Labels It could happen that the obtained training data contains wrong labels. Usually these wrong labels are relatively few compared to the correct ones. Hence we have a solution for it: we can use a simple clustering method to find news titles that are very similar in text but are given different labels, and correct these wrong labels with high confidence by majority vote.
###Code
from sklearn.cluster import DBSCAN
import numpy as np
doc_lbls = DBSCAN(eps=0.03, min_samples=3, metric='cosine').fit_predict(x_train)
clusters = np.unique(doc_lbls)
print("# of clusters = ",len(clusters)-1)
# only relabel if >0.5 fraction of the data have same label
relable_percent_thresh = 0.5
# and only if majority label > 2 times the second majority label
relable_second_thresh = 2.0
y_train_corrected = y_train.copy()
for lbl in clusters:
if lbl<0:
# does not belong to any cluster
continue
X_cluster = X_train[doc_lbls==lbl]
y_cluster = y_train_corrected[doc_lbls==lbl]
binCounts = np.bincount(y_cluster)
binMax = np.argmax(binCounts)
binMaxCount = binCounts[binMax]
binCounts[binMax] = 0
binSecondMax = np.argmax(binCounts)
binSecondMaxCount = binCounts[binSecondMax]
if (binSecondMaxCount == 0):
# print('consistent')
continue
elif binMaxCount*1.0/binSecondMaxCount>=relable_second_thresh and binMaxCount*1.0/len(X_cluster) >= relable_percent_thresh:
y_train_corrected[doc_lbls==lbl] = binMax
print(X_cluster)
print('relabel to:', binMax)
else:
# print('cant decide')
continue
###Output
_____no_output_____
###Markdown
We consider the problem of classifying text messages (such as customer feedback) according to their sentiment. A data point is one particular text snippet. The features of the data point are a numeric encoding of the text. The label is a number from 1 to 5 that encodes a particular sentiment. In order to learn a classifier that takes the feature vector of a text and outputs a predicted label, we have some labeled data points in the file "train.tsv". Each line of this file contains one text snippet for which the sentiment is known.
###Code
import csv
with open('train.tsv') as f:
reader = csv.reader(f)
your_list = list(reader)
print(your_list[:3])
##
import pandas as pd
df=pd.read_csv('train.tsv', sep='\t')
df.iloc[:,3]
df.head()
corpus = df['Phrase']
y = df['Sentiment'] # labal vector
from sklearn.feature_extraction.text import CountVectorizer
# compute numeric features for each text snippet
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
print(X.toarray()) # the matrix X contains the feature vectors for each text snippet
m = X.shape[0]
n = X.shape[1]
print("number of data points m=",m)
print("\n number of features n=",n)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=42)
#ros = RandomOverSampler(random_state=0)
#X_resampled, y_resampled = ros.fit_resample(X_train, y_train)
clf = LogisticRegression(random_state=0, solver='lbfgs').fit(X_train, y_train)
print("test-set",clf.score(X_test,y_test))
print("train-set",clf.score(X_train,y_train))
###Output
/Users/alexanderjung/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:469: FutureWarning: Default multi_class will be changed to 'auto' in 0.22. Specify the multi_class option to silence this warning.
"this warning.", FutureWarning)
/Users/alexanderjung/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:947: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
/Users/alexanderjung/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:947: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
/Users/alexanderjung/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:947: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
/Users/alexanderjung/anaconda3/lib/python3.7/site-packages/sklearn/linear_model/logistic.py:947: ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
"of iterations.", ConvergenceWarning)
|
Chapters/05.OptimalTransport/Chapter5.ipynb | ###Markdown
Optimal transport*Selected Topics in Mathematical Optimization***Michiel Stock** ([email]([email protected]))
###Code
from optimal_transport import red, green, orange, yellow, blue, black
import matplotlib.pyplot as plt
import numpy as np
from optimal_transport import pairwise_distances
%matplotlib inline
###Output
_____no_output_____
###Markdown
Cell trackingIn a microscopy imaging experiment we monitor ten moving cells at time $t_1$ and some time later at time $t_2$. Between these times, the cells have moved. An image processing algorithm determined the coordinates of every cell in the two images. We want to know which cell in the first image corresponds to which cell in the second image. To this end, we search for the assignment that minimizes the sum of the squared Euclidean distances between cells from the first image and the corresponding cells of the second image.
1. `X1` and `X2` contain the $x,y$ coordinates of the cells for the two images. Compute the matrix $C$ containing the pairwise squared Euclidean distances. You can use the function `pairwise_distances` from `sklearn`.
2. Complete the function `monge_brute_force` to use brute-force search for the best permutation.
3. Make a plot connecting the cells.
###Code
from cell_tracking import X1, X2, plot_cells
# all permutations can easily be generated in python
from itertools import permutations
for perm in permutations([1, 2, 3]):
print(perm)
fig, ax = plot_cells(X1, X2)
def monge_brute_force(C):
"""
Solves the Monge assignment problem using
brute force.
Inputs:
- C: cost matrix (square, size n x n)
Outputs:
- best_perm: optimal assigments (list of n indices matching the rows
to the columns)
- best_cost: optimal cost corresponding to the best permutation
DO NOT USE FOR PROBLEMS OF A SIZE LARGER THAN 12!!!
"""
n, m = C.shape
assert n==m # C should be square
best_perm = None
best_cost = np.inf
# loop over all permutations and to find the
# matching with the lowest cost
return best_perm, best_cost
from optimal_transport import monge_brute_force
# get the cost matrix (i.e. pairwise squared
# Euclidean distances between the cells at the different times)
C = ...
# get matching
best_perm, best_cost = monge_brute_force(C)
# make a plot with the connections of the cells
###Output
_____no_output_____
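###Markdown
For reference, a self-contained sketch (independent of the completed function imported from the `optimal_transport` module above) of how the permutation loop can be filled in with `itertools.permutations`; as in the docstring, this is only feasible for small n because the search is O(n!):
###Code
import numpy as np
from itertools import permutations
def brute_force_assignment(C):
    # try every permutation of the columns and keep the cheapest assignment
    n = C.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in permutations(range(n)):
        cost = C[np.arange(n), perm].sum()
        if cost < best_cost:
            best_perm, best_cost = list(perm), cost
    return best_perm, best_cost
# tiny example with a random 4 x 4 cost matrix
print(brute_force_assignment(np.random.rand(4, 4)))
###Output
_____no_output_____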
###Markdown
Cell differentiationThree types of cells are cultured together. At $t_1$ we know the expression of some cells of every type (two genes). After some time $t_2$, the cells have multiplied and have differentiated somewhat. A new gene expression analysis is done for a set of cells from the culture (without information about the type). How did the expression change for every type?
1. Link the cells from the two time points using OT. Use Sinkhorn with $\lambda=10$ and the squared Euclidean distance as the cost.
2. Plot the mapping (use the `alpha` argument to set the shade of a color).
3. Compute the 'drift' (difference in average gene expression) for every cell type.
###Code
# X1 and X2 are gene expressions for the cells at time 1 and 2
# y1 is the indicator of the type of cells, only known at t1
from cell_differentiation import X1, X2, y1, plot_cells
fig, ax = plt.subplots()
plot_cells(ax)
def compute_optimal_transport(C, a, b, lam, epsilon=1e-8,
verbose=False, return_iterations=False):
"""
    Computes the optimal transport matrix and Sinkhorn distance using the
Sinkhorn-Knopp algorithm
Inputs:
- C : cost matrix (n x m)
- a : vector of marginals (n, )
- b : vector of marginals (m, )
- lam : strength of the entropic regularization
- epsilon : convergence parameter
- verbose : report number of steps while running
- return_iterations : report number of iterations till convergence,
default False
Output:
- P : optimal transport matrix (n x m)
- dist : Sinkhorn distance
- n_iterations : number of iterations, if `return_iterations` is set to
True
"""
n, m = C.shape
P = np.exp(- lam * C)
iteration = 0
while True:
iteration += 1
u = P.sum(1) # marginals of rows
max_deviation = np.max(np.abs(u - a))
if verbose: print('Iteration {}: max deviation={}'.format(
iteration, max_deviation
))
if max_deviation < epsilon:
break
# scale rows
...
# scale columns
...
if return_iterations:
return P, np.sum(P * C), iteration
else:
return P, np.sum(P * C)
from optimal_transport import compute_optimal_transport
# get the cost matrix (i.e. pairwise squared
# Euclidean distances of the expression vectors
# of the cells at the different times)
C = ...
# get matching
P, _ = compute_optimal_transport(...
# plot the cells with the mapping between the times
# compute the drift (average change in gene expression
# for different classes between the two time points)
###Output
_____no_output_____
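###Markdown
For reference, a standalone sketch of the two scaling steps the Sinkhorn-Knopp iteration alternates between, kept separate from the exercise skeleton above; the fixed iteration count is an assumption (the exercise version stops on a convergence test instead):
###Code
import numpy as np
def sinkhorn_sketch(C, a, b, lam, n_iter=500):
    # entropically regularized optimal transport via alternating row/column scaling
    P = np.exp(-lam * C)
    for _ in range(n_iter):
        P *= (a / P.sum(1)).reshape(-1, 1)   # rescale rows to match the marginals a
        P *= (b / P.sum(0)).reshape(1, -1)   # rescale columns to match the marginals b
    return P, np.sum(P * C)
# tiny example: uniform marginals on a random 5 x 5 cost matrix
rng = np.random.default_rng(0)
P, dist = sinkhorn_sketch(rng.random((5, 5)), np.ones(5) / 5, np.ones(5) / 5, lam=10)
print(np.round(P, 3), dist)
###Output
_____no_output_____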
###Markdown
Illustration: color transferThis is a demonstration of a simple color transfer using optimal transport.
###Code
from optimal_transport import compute_optimal_transport
from skimage import io
from sklearn.cluster import MiniBatchKMeans as KMeans
from sklearn.preprocessing import StandardScaler
from collections import Counter
from sklearn.metrics.pairwise import pairwise_distances
import seaborn as sns
sns.set_style('white')
# change as you see fit!
image_name1 = 'Figures/butterfly3.jpg'
image_name2 = 'Figures/butterfly2.jpg'
n_clusters = 400
def clip_image(im):
"""
Clips an image such that its values are between 0 and 255
"""
return np.maximum(0, np.minimum(im, 255))
class Image():
"""simple class to work with an image"""
def __init__(self, image_name, n_clusters=100, use_location=True):
super(Image, self).__init__()
self.image = io.imread(image_name) + 0.0
self.shape = self.image.shape
n, m, _ = self.shape
X = self.image.reshape(-1, 3)
if use_location:
col_indices = np.repeat(np.arange(n), m).reshape(-1,1)
row_indices = np.tile(np.arange(m), n).reshape(-1,1)
#self.standardizer = StandardScaler()
#self.standardizer.fit_transform(
self.X = np.concatenate([X, row_indices, col_indices], axis=1)
else: self.X = X
self.kmeans = KMeans(n_clusters=n_clusters)
self.kmeans.fit(self.X)
def compute_clusted_image(self, center_colors=None):
"""
Returns the image with the pixels changes by their cluster center
If center_colors is provided, uses these for the clusters, otherwise use
centers computed by K-means.
"""
clusters = self.kmeans.predict(self.X)
if center_colors is None:
X_transformed = self.kmeans.cluster_centers_[clusters,:3]
else:
X_transformed = center_colors[clusters,:3]
return clip_image(X_transformed).reshape(self.shape)
def get_color_distribution(self):
"""
Returns the distribution of the colored pixels
Returns:
- counts : number of pixels in each cluster
- centers : colors of every cluster center
"""
clusters = self.kmeans.predict(self.X)
count_dict = Counter(clusters)
counts = np.array([count_dict[i] for i in range(len(count_dict))],
dtype=float)
centers = self.kmeans.cluster_centers_
return counts, clip_image(centers[:,:3])
print('loading and clustering images...')
image1 = Image(image_name1, n_clusters=n_clusters)
image2 = Image(image_name2, n_clusters=n_clusters)
r, X1 = image1.get_color_distribution()
c, X2 = image2.get_color_distribution()
C = pairwise_distances(X1, X2, metric="sqeuclidean")
print('performing optimal transport...')
P, d = compute_optimal_transport(C, r/r.sum(), c/c.sum(), 1e-2)
sns.clustermap(P, row_colors=X1/255, col_colors=X2/255,
yticklabels=[], xticklabels=[])
plt.savefig('Figures/color_mapping.png')
print('computing and plotting color distributions...')
X1to2 = (P / P.sum(1, keepdims=True)) @ X2  # each cluster of image 1 mapped to a weighted average of image 2 colors
X2to1 = (P.T / P.sum(0, keepdims=True).T) @ X1  # and vice versa
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10, 10))
axes[0, 0].imshow(image1.image/255)
axes[0, 1].imshow(image2.image/255)
axes[1, 0].imshow(image1.compute_clusted_image(X1to2)/255)
axes[1, 1].imshow(image2.compute_clusted_image(X2to1)/255)
for ax in axes.flatten():
ax.set_xticks([])
ax.set_yticks([])
###Output
_____no_output_____ |
Week5-Lab-Visualizing-the-Translatlantic-Slave-Trade/Practicum-Visualize-Trans-Atlantic-Slave-Trade.ipynb | ###Markdown
Visualizing the Trans-Atlantic Slave Trade Table of Contents - Recap- About the Dataset - The Transatlantic Slave Trade - Facts about the dataset- Labs and Methodology- Goals- **Part 1 - Getting Our Basic Data Analysis Set-Up** - Import Libraries and unpack a file - Load file - Observing the Dataset using Pandas - Important Facts About the Dataset - Visualize Year of Arrival vs Number of Slaves arrived- **Part 2 - Getting Started with Data Wrangling** - Create a copy of the Original Dataset - Changing Column Names - Moving Column Positions - ```df.reindex()``` - Remove Voyage ID -```df.drop()``` - Using ```dropna()``` - Changing Column Type and Sorting - ```df.sort_values()``` - Finding Unique and similar strings - Working with Strings - ```df['column_name'].str.replace()```- **Part 3 - Micro Wrangling and Visualization** - Between 1500 - 1600 - Between 1601 - 1700 - Between 1701 - 1800 - Between 1801 - 1900- **Part 4 - Conclusion**- Resources- Appendix Recap - By this time, you should have an understanding of how to implement the following:- Loading a Dataset '.csv' as a dataframe using ```pd.read_csv```- Observing the properties of the loaded dataset using functions such as: - ```pd.head()``` - ```pd.describe()``` - ```pd.info()```- Modifying the dataset by removing ```NaN``` values.- A conceptual understanding of the term ```object``` in DataFrames. (really what it means is that the value is probably text/string)*- Re-indexing columns- Visualizing Data using ```matplotlib``` and ```pandas```: - Scatter plots - Barplots - Line plots - Histograms About the Dataset The Trans-Atlantic Slave Trade It is difficult to believe in the first decades of the twenty-first century that **just over two centuries ago**, for those Europeans who thought about the issue, the shipping of enslaved Africans across the Atlantic was morally indistinguishable from shipping textiles, wheat, or even sugar. Our reconstruction of a major part of this migration experience covers an era in which there was a massive technological change (*steamers were among the last slave ships*), as well as very dramatic shifts in perceptions of good and evil. Just as important perhaps were the relations between the Western and non-Western worlds that the trade both reflected and encapsulated. **Slaves constituted the most important reason for contact between Europeans and Africans for nearly two centuries**. The shipment of slaves from Africa was related to the demographic disaster consequent to the meeting of Europeans and Amerindians, which greatly reduced the numbers of Amerindian laborers and raised the demand for labor drawn from elsewhere, particularly Africa. As Europeans colonized the Americas, a steady stream of European peoples migrated to the Americas between 1492 and the early nineteenth century. But what is often overlooked is that, before 1820, perhaps three times as many enslaved Africans crossed the Atlantic as Europeans. This was the largest transoceanic migration of a people until that day, and it provided the Americas with a crucial labor force for their own economic development. The slave trade is thus a vital part of the history of some millions of Africans and their descendants who helped shape the modern Americas culturally as well as in the material sense.The details of the more than **36,000** voyages presented here greatly facilitate the study of cultural, demographic, and economic change in the Atlantic world from the late *sixteenth to the mid-nineteenth centuries*. 
Trends and cycles in the flow of African captives from specific coastal outlets should provide scholars with new, basic information useful in examining the relationships among slavery, warfare in both Africa and Europeโpolitical instability, and climatic and ecological change, among other forces. Facts about the dataset- The dataset approximately 36,110 trans-Atlantic voyages.- The estimates suggest around 12,520,000 captives departed Africa to the Americas. - Not all 36,000 voyages in the database carried slaves from Africa.- A total of 633 voyages (1.8%) never reached the African coast because they were lost at sea, captured, or affected by some other misfortune. - The database also contains records of 34,106 voyages that disembarked slaves, or could have done so (in other words, for some of these we do not know the outcome of the voyage).- The latter group comprised mainly of ships captured in the nineteenth century which were taken to Sierra Leone and St. Helena as part of the attempt to suppress the trade. This is a very insightful resource titled,'The Atlantic Slave Trade in Two Minutes. You can read it [here](http://www.slate.com/articles/life/the_history_of_american_slavery/2015/06/animated_interactive_of_the_history_of_the_atlantic_slave_trade.html). Practicum and Methodology Congratulations, you have made it to the first practicum of this course. The purpose of these practicums is to help you apply the Data Science pipeline in a project-based environment. You will be using the tools taught to you in the previous modules and adopt and Question and Answer based approach when you work with the dataset.For this project, we start by asking questions which you will answer in code and simple explanations.**example** - change the name of 'column_x' to 'column y' **answer**: ```df = df.rename(columns = {'column_x : 'column_y})```We have divided our approach into 4 parts:- The first part is the traditional set up. These are some things we should do before modifying the dataset.- The second part involves cleaning the dataset and choosing columns that fit our methodology.- The third part involves further splitting our cleaned dataframe into smaller dataframes and visualizing them.- Finally, the fourth part involves summarizing our conclusion. GradingThis exercise has a total of 27 questions. Every question has 1 point. Some questions might have multiple parts but the weight of the question is the same.In order to work on the questions in this Practicum and submit them for grading, you'll need to run the code block below. It will ask for your student ID number and then create a folder that will house your answers for each question. At the very end of the notebook, there is a code section that will download this folder as a zip file to your computer. This zip file will be your final submission.
###Code
import os
import shutil
!rm -rf sample_data
student_id = input('Please Enter your Student ID: ') # Enter Student ID.
while len(student_id) != 9:
    student_id = input('Please Enter your Student ID: ')  # re-prompt until a 9-character ID is entered
folder_location = f'{student_id}/Week_Six/Practicum'
if not os.path.exists(folder_location):
os.makedirs(folder_location)
print('Successfully Created Directory, Lets get started')
else:
print('Directory Already Exists')
###Output
_____no_output_____
###Markdown
Part 1 - Getting Our Basic Data Analysis Set-Up Import the libraries you will be using for this project. These libraries are the ones we have used in the previous labs. Q1 Load libraries and file
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/1.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
import _____ as pd # INSERT CODE HERE
import __________ as plt # INSERT CODE HERE
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/1.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
url = 'https://rb.gy/cjfen3'
trans_atlc_trade = __.read_csv(___) # INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Observing the Dataset using Pandas Now, the dataset is loaded as a dataframe `trans_atlc_trade` ```head()```Let's check what columns this file has by calling 'head()' function.It returns first n rows, and it's useful to see the dataset at a quick glance.By default, the head() function returns the first 5 rows.You can specify the number of rows to display by calling `df.head(number)`
###Code
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
```tail()```The ```tail()``` method prints the last 5 rows of our dataset.
###Code
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
```info()```This will return all of the column names and its types. This function is useful to get the idea of what the dataframe is like.
###Code
## INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Observations: Questions:- List down the number of unique ```Dtype``` in this dataset- Is the dataset uneven? If so list down the column with the most missing rows? ```describe()```describe() is used to view summary statistics of numeric columns. This will help you to have general idea of the dataset.
###Code
trans_atlc_trade._____() # Insert code here
###Output
_____no_output_____
###Markdown
Observations: Questions: - Why is ```describe``` showing only 3 columns? Is it because of their types, e.g. (int, float, object)? - What could be the reason for the counts not being the same? - Are the mean, standard deviation, ..., max important for Voyage ID?
###Code
trans_atlc_trade.shape
###Output
_____no_output_____
###Markdown
Observations: Question: How many **rows** and **columns** are there? Answer: Q2. Important Facts About the Dataset The next thing we want to do is count the number of trips that have been unaccounted for. We'll know this by observing the ```Slaves arrived at 1st port``` column. This is simple; all we have to do is run two functions: - The first one will be to check if the column has null values, ```isna()```. - The second one will be to sum the number of null rows in the column, ```sum()```.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/2.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
Unaccounted_trips = trans_atlc_trade['Slaves arrived at 1st port'].____()._____() # Insert Code Here
print(f'The total number of unaccounted trips is: {Unaccounted_trips}')
###Output
_____no_output_____
###Markdown
**What about the total number of slaves accounted for?** In the following line of code, we ```sum``` the total number of slaves in the ```Slaves arrived at 1st port``` column.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/2.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
number_of_slaves_accounted = trans_atlc_trade['Slaves arrived at 1st port'].___() # Insert code here
print(f'The total number of slaves accounted for are: {number_of_slaves_accounted}')
###Output
_____no_output_____
###Markdown
Historical estimates put the total number of slaves traded at ~12.5 million. This means that, according to this dataset:
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/2.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
possible_unaccounted_slaves = 12500000 - ________________ # INSERT CODE HERE
possible_unaccounted_slaves
###Output
_____no_output_____
###Markdown
Visualize Year of Arrival vs Number of Slaves arrived
###Code
fig = plt.figure(figsize = (35,10))
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
trans_atlc_trade.plot(x = 'Year of arrival at port of disembarkation',
y = 'Slaves arrived at 1st port',
kind = 'scatter',
c = 'Slaves arrived at 1st port',
title = 'Year of arrival at port of disembarkation vs Slaves arrived at 1st port',
alpha = 0.3,
cmap = plt.get_cmap('ocean'),
colorbar = True,
ax = ax1,
)
trans_atlc_trade.plot(x = 'Year of arrival at port of disembarkation',
y = 'Slaves arrived at 1st port',
kind = 'area',
title = 'Year of arrival at port of disembarkation vs Slaves arrived at 1st port',
ax = ax2,
)
###Output
_____no_output_____
###Markdown
Questions/Observations - Are the plots above useful? - Can we get anything specific by observing them? - Is there a visible trend? - What are the possible issues with the plots above? - Lastly, which one is more practical, the ```scatter``` or the ```area``` plot? Part 2 - Getting Started with Data Wrangling Now that we have observed the basic features of our raw dataset, we will begin cleaning it. This involves several steps that you will be working through. Create a copy of the Original Dataset
###Code
df = trans_atlc_trade.copy(deep = True) # We have used deep = True to make sure the copy is not linked to the trans_atlc_trade dataframe.
# If we did not add it, any changes made to the new df would be made on the tran_atlc_trade too.
###Output
_____no_output_____
###Markdown
Q3. Column ListList down the names of all the columns in our dataframe
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/3.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df._______ # Insert Code Here
###Output
_____no_output_____
###Markdown
Q4. Change column names For this exercise, you will change the names of the previously existing columns to something more readable. Using the columns above, write down the name of the column in place of ```COLUMN_NAME_HERE```. The last column name will be tricky to change because it contains a ```'``` (apostrophe). To handle it, add a ```\``` right before the ```'```.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/4.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df = df.rename(
columns={'COLUMN_NAME_HERE':'voyage_id', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'vessel_name', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'voyage_started', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'voyage_pit_stop', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'end_port', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'year_of_arrival', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'slaves_onboard', # Insert Column Name you want to change
'COLUMN_NAME_HERE':'captain_names' # Insert Column Name you want to change
})
df
###Output
_____no_output_____
###Markdown
Q5 Moving Column Positions - ```df.reindex()``` Looking at the renamed dataframe, for our purposes we don't want to work with ```captain_names```. Next, we will use ```df.reindex()``` to change the order of our columns. Below you can see a list, ```column_names```, which has the columns in the order we want and without ```captain_names```.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/5.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
column_names = ['voyage_id',"year_of_arrival","vessel_name", "voyage_started","voyage_pit_stop", "end_port","slaves_onboard"]
df = df._____(columns=________) # Insert Code here
df
###Output
_____no_output_____
###Markdown
**Is Voyage ID a good index and do we need it as a column?** No, but we need an index. **Can 'year_of_arrival' be an index?** No, because there are repeating dates in the column; therefore we need a simple running counter. Q6. Remove Voyage ID - ```df.drop()``` Now that we have a new index from 0 to 15299, do we need ```voyage_id```? I don't think so, because it doesn't help us find anything useful; every Voyage ID is unique. Next, drop this column.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/6.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df = __.drop(columns='_____') # INSERT CODE HERE
df
###Output
_____no_output_____
###Markdown
Using ```dropna()``` For this dataset, we will be working with trips that were completely accounted for in all of the remaining features. The ```dropna()``` method is designed to drop every row in our dataframe that has a null or undefined value in any cell. These values are usually shown as ```NaN```.
###Code
df = df.dropna()
df
df.info()
###Output
_____no_output_____
###Markdown
Questions/Observations How many rows are we left with? Q7. Sorting the dataframe by ```year_of_arrival``` using ```df.sort_values()```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/7.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df = df.sort_values(by='_______', ascending=_____) # INSERT CODE HERE
df
###Output
_____no_output_____
###Markdown
Q8. Resetting the Index Reset the index using ```df.reset_index```, with 'year_of_arrival' in ascending order.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/8.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df.______(inplace=True, drop=True) # INSERT CODE HERE
df
###Output
_____no_output_____
###Markdown
Finding Unique and similar strings First, we will list all the unique names in these columns. Next, we will sort them in alphabetical order to make them easier to observe: ```df['column_name'].unique()``` followed by ```.sort()```. For simplicity, I have assigned the first line to a variable ```a``` in order to print it.
###Code
a = df['voyage_started'].unique()
a.sort()
a
a = df['voyage_pit_stop'].unique()
a.sort()
a
a = df['end_port'].unique()
a.sort()
a
###Output
_____no_output_____
###Markdown
As we can see above, our object columns, ```voyage_started```, ```voyage_pit_stop``` and ```end_port```, have phrases such as ```., port unspecified``` and ```, unspecified```. We need to clean these out.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/9.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
df['voyage_started'] = df['voyage_started'].str.replace('', '')
df['voyage_started'] = df['voyage_started'].str.replace('', '')
df['voyage_started'] = df['voyage_started'].str.replace('', '')
df['voyage_pit_stop'] = df['voyage_pit_stop'].str.replace('', '')
df['voyage_pit_stop'] = df['voyage_pit_stop'].str.replace('', '')
df['voyage_pit_stop'] = df['voyage_pit_stop'].str.replace('.', '')
df['end_port'] = df['end_port'].str.replace('', '')
df['end_port'] = df['end_port'].str.replace('', '')
df['end_port'] = df['end_port'].str.replace('.', '')
df['end_port'] = df['end_port'].str.replace('', '')
df['end_port'] = df['end_port'].str.replace('', '')
df['end_port'] = df['end_port'].str.replace('', '')
df['end_port'] = df['end_port'].str.replace(', south coast', '') # Insert string to replace
df['end_port'] = df['end_port'].str.replace(', west coast', '')
df
df.dtypes
###Output
_____no_output_____
###Markdown
Creating a Copy of our modified Dataset
###Code
modified_dataset = df.copy(deep = True)
###Output
_____no_output_____
###Markdown
Part 3 - Micro Wrangling and Visualization We will start this part by dividing our dataset into multiple smaller dataframes. The approach we will be taking is separating dataframes based on the ```year_of_arrival``` column. For example, in the blocks below you will see code for 4 intervals: ```1500 to 1600```, ```1601 to 1700```, ```1701 to 1800``` and ```1801 to 1900```. To help you understand the procedure, we have worked through ```1500 to 1600```. You will be required to do the same for the next 2 periods. The last one, i.e. 1801 to 1900, is optional and we hope you will attempt it, as a significant chunk of the voyages (especially the number of slaves transported) occurred in the early 19th century. Between 1500 to 1600 Create a new dataframe for the given date range. Here we are creating a new dataframe from the copy of our dataset, ```modified_dataset```, from Part 2. We are using the range ```1500``` to ```1600``` from the dataset.
###Code
dataset_between15_16 = modified_dataset.where((modified_dataset['year_of_arrival'] >= 1500) & (modified_dataset['year_of_arrival'] <= 1600))
dataset_between15_16
###Output
_____no_output_____
###Markdown
Dropping the Null values If you look at the dataframe above, we can see many ```NaN``` values. This is because when we made the new dataframe with ```where```, we didn't change the shape of the dataset at all. In fact, we have only kept the rows within our defined range; the rest of them have been converted into empty cells. Therefore, the next step will be to drop them. You can simply do that by running the line below. Another way to do this is ```dataset_between15_16.dropna(inplace = True)```.
###Code
dataset_between15_16 = dataset_between15_16.dropna()
dataset_between15_16
###Output
_____no_output_____
###Markdown
Total Number of Slaves Transported between 1501-1600 - Complete Records Let's check the number of slaves transported between 1501-1600. *Please remember we are looking at rows that do not have any empty cells. Look back to this [part](https://colab.research.google.com/github/bitprj/DigitalHistory/blob/master/Week5-Lab-Visualizing-the-Translatlantic-Slave-Trade/Lab-Visualize-Trans-Atlantic-Slave-Trade.ipynbscrollTo=d_Ds03fRHleS). We dropped a significant number of rows there because they had at least 1 or more ```NaN``` values.
###Code
dataset_between15_16.slaves_onboard.sum()
###Output
_____no_output_____
###Markdown
Visualizing Trips During 1501-1600 Let's quickly visualize our data. We will make 4 plots: 2 bar and 2 scatter. All of them will use the columns ```year_of_arrival``` and ```slaves_onboard```. The twist is that in two of them we will switch the x and y columns.
###Code
fig = plt.figure(figsize = (20,10))
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
ax1.scatter(dataset_between15_16['year_of_arrival'],
dataset_between15_16['slaves_onboard'],
alpha = 0.4)
ax2.scatter(dataset_between15_16['slaves_onboard'],
dataset_between15_16['year_of_arrival'],
alpha = 0.4)
ax3.bar(
dataset_between15_16['year_of_arrival'],
dataset_between15_16['slaves_onboard'],
alpha = 0.4)
ax4.set_ylim(1500,1600)
ax4.bar(
dataset_between15_16['slaves_onboard'],
dataset_between15_16['year_of_arrival'],
alpha = 0.4)
###Output
_____no_output_____
###Markdown
3.1 Choosing Graphs Questions/Observations- Which of these graphs seem useful and which ones are unnecessary?**Write a 2 sentence explanation about why these two plots seem or might be useful.** **Did the visualization style influence your decision?**Select the two graphs you think are more useful and add the following:- Add ```title``` for both subplots.- Add ```xlabel``` and ```ylabel```.- Change the color for one of the plots Plot the ```vessel_name``` vs the ```slaves_onboard```.Next, we'll use the ```pandas``` ```plot``` function and use the columns ```vessel_name``` as ```x``` and ```slaves_onboard``` as ```y```.Remember, ```vessel_name``` is a categorical value so we're plotting a bar chart of categorical vs numerical here.
###Code
dataset_between15_16.plot(x= 'vessel_name',
y = 'slaves_onboard',
kind = 'bar',
rot = 90)
###Output
_____no_output_____
###Markdown
Plotting voyages carrying less than 100 slaves per trip The plot above is crowded since there are a lot of ships that were used throughout the 16th century. Our next step will be to simplify the plotting a little bit so we can actually visualize the plots properly. Below we create a new variable for our plot and name it ```temp_df```. *You can name it anything you want.* We first make a dataframe that only contains rows where the number of slaves onboard was less than 100.
###Code
temp_df = dataset_between15_16.where(dataset_between15_16['slaves_onboard'] < 100.0).dropna()
temp_df
temp_df.plot(x='vessel_name',
y = 'slaves_onboard',
kind = 'bar',
rot = 90)
###Output
_____no_output_____
###Markdown
Plotting voyages carrying greater than 100 slaves per tripNext we check for ships where the number of slaves carried was greater than 100.Notice we added a ```dropna``` at the end. This is the same step we take to drop the null values from our dataframes, but instead of writing it as a new line we have simply attached it to our ```dataset_between15_16.where```
###Code
temp_df = dataset_between15_16.where(dataset_between15_16['slaves_onboard'] > 100.0).dropna()
temp_df.plot(x='vessel_name',
y = 'slaves_onboard',
kind = 'bar',
rot = 90,
grid = True,
figsize = (20,10)
)
###Output
_____no_output_____
###Markdown
As we can see above, it is still a little congested. Therefore we'll narrow down our search a little more.As you can see, plotting using arbitrary cutoff numbers might not give us the best results.One thing we can do is select our values based on the ```mean```, ```standard deviation```, ```25%```, ```50%```, ```75%```. So let's check those values for our current dataframe, which is ```dataset_between15_16```
###Code
dataset_between15_16.describe()
###Output
_____no_output_____
###Markdown
We can see the values above. For this project we will be looking at the ```75%``` value for the ```slaves_onboard``` column.Therefore:
###Code
num_of_slaves_3q = 202 # 3q means third quartile. The value is 201.5 but we are rounding up
###Output
_____no_output_____
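###Markdown
Instead of reading the value off the ```describe()``` table and hard-coding it, the third quartile can also be computed directly with ```pandas```. The cell below is an optional sketch; the rounding step simply reproduces the hard-coded value above.
###Code
# Optional sketch: compute the third quartile programmatically instead of hard-coding it
import math
num_of_slaves_3q = math.ceil(dataset_between15_16['slaves_onboard'].quantile(0.75))
num_of_slaves_3q
###Output
_____no_output_____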
###Markdown
Plotting voyages with respect to ```num_of_slaves_3q```
###Code
temp_df = dataset_between15_16.where(dataset_between15_16['slaves_onboard'] >num_of_slaves_3q).dropna()
print(f'There are {temp_df.shape[0]} trips that carries more than {num_of_slaves_3q} slaves.')
###Output
_____no_output_____
###Markdown
The second line written above is simply an ```f-string```. f-strings are not essential here, but they are useful when printing statements because they let us embed variables inside a string.
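Here is a tiny illustration of the syntax (the variable below is just a made-up example and is not used anywhere else):
###Code
# Minimal f-string demonstration; example_count is a hypothetical value
example_count = 3
print(f'We found {example_count} matching trips.')
###Output
_____no_output_____
###Markdown
Now back to plotting the filtered dataframe: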
###Code
temp_df.plot(x= 'vessel_name',
y = 'slaves_onboard',
kind = 'bar',
rot = 90, # Adjusted accordingly, you can do the same
grid = True, # Adjusted accordingly, you can do the same
figsize = (20,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Plotting the most used ```start_port```
###Code
temp_df['voyage_started'].hist(bins = 20, # Adjusted accordingly, you can do the same
alpha = 0.5, # Adjusted accordingly, you can do the same
xrot = 45, # Adjusted accordingly, you can do the same
figsize = (10,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Histogram - Check the most used ```voyage_pit_stop```
###Code
temp_df['voyage_pit_stop'].hist(bins=10, # Adjusted accordingly, you can do the same
alpha=0.7, # Adjusted accordingly, you can do the same
xrot = 0, # Adjusted accordingly, you can do the same
figsize = (10,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Histogram - Check the most used ```End_Port```
###Code
temp_df['end_port'].hist(bins=10, # Adjusted accordingly, you can do the same
alpha=0.7, # Adjusted accordingly, you can do the same
xrot = 0, # Adjusted accordingly, you can do the same
figsize = (10,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Questions/Observations- Where were most of the trips made? - Where did they start from.- Any other important observations? Between 1601 - 1700 Q10. Create a new dataframe for the given date range.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/10.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
dataset_between16_17 = modified_dataset.____ #INSERT CODE HERE
dataset_between16_17
###Output
_____no_output_____
###Markdown
Q11. Drop null values
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/11.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
dataset_between16_17 = dataset_between16_17._____() # Insert Code here (drop nul values)
dataset_between16_17
###Output
_____no_output_____
###Markdown
Q12. Total Number of Slaves Transported between 1601-1700 - Complete Records
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/12.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
dataset_between16_17.slaves_onboard.___() # Insert Code Here - Sum of slaves
###Output
_____no_output_____
###Markdown
Q13. Visualizing Trips During 1601-1701
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/13.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
fig = # INSERT CODE HERE
ax1 = # INSERT CODE HERE
ax2 = # INSERT CODE HERE
ax1.scatter(# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
)
ax2.bar(
# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
)
###Output
_____no_output_____
###Markdown
Q14. Plot the ```vessel_name``` vs the ```slaves_onboard```.Next, we'll use the ```pandas``` ```plot``` function and use the columns ```vessel_name``` as ```x``` and ```slaves_onboard``` as ```y```.Remember, ```vessel_name``` is a categorical value so we're plotting a bar chart of categorical vs numerical here.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/14.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
fig = plt.figure(figsize = (50,20))
ax1 = fig.add_subplot(2,2,1)
dataset_between16_17.plot(# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
)
###Output
_____no_output_____
###Markdown
Note: The graph above will be more congested compared to the ```1501-1600``` plot. This is because the number of trips is larger. As you can see, plotting using arbitrary cutoff numbers might not give us the best results.One thing we can do is select our values based on the ```mean```, ```standard deviation```, ```25%```, ```50%```, ```75%```. So let's check those values for our current dataframe, which is ```dataset_between16_17```
###Code
dataset_between16_17.describe()
###Output
_____no_output_____
###Markdown
Q15. Plotting voyages with respect to ```num_of_slaves_3q``` We can see the values above, for this project we will be looking at the ```75%``` value for the ```slaves_onboard``` column.Therefore:
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/15.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
slaves_onboard_3q = ### INSERT VALUE
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/15.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
temp_df = ##
print(f'There are {temp_df.shape[0]} trips that carries more than {slaves_onboard_3q} slaves.')
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/15.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
temp_df.plot(# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
rot = 90, # Adjusted accordingly, you can do the same
grid = True, # Adjusted accordingly, you can do the same
figsize = (20,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Q16. Plotting the most used ```start_port```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/16.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q17. Plotting the most used ```voyage_pit_stop```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/17.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q18. Plotting the most used ```End_Port```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/18.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Between 1701 - 1800 Q19. Create a new dataframe for the given date range.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/19.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
dataset_between17_18 = # INSERT CODE HERE
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q20. Drop null values
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/20.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q21. Total Number of Slaves Transported between 1701-1800 - Complete Records.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/21.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q22. Visualizing Trips During 1701-1800
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/22.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
fig = # INSERT CODE HERE
ax1 = # INSERT CODE HERE
ax2 = # INSERT CODE HERE
ax1.scatter(# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
)
ax2.bar(
# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
)
###Output
_____no_output_____
###Markdown
Q23. Plot the ```vessel_name``` vs the ```slaves_onboard```.
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/23.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
###Output
_____no_output_____
###Markdown
Note: The graph above will also be more congested compared to the ```1501-1600``` and ```1601-1700``` plots. This is because the number of trips is larger than in the previous century.As you can see, plotting using arbitrary cutoff numbers might not give us the best results.One thing we can do is select our values based on the ```mean```, ```standard deviation```, ```25%```, ```50%```, ```75%```. So let's check those values for our current dataframe, which is ```dataset_between17_18```
###Code
dataset_between17_18.describe()
###Output
_____no_output_____
###Markdown
Q24. Plotting voyages with respect to ```num_of_slaves_3q``` We can see the values above, for this project we will be looking at the ```75%``` value for the ```slaves_onboard``` column.Therefore:
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/24.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
slaves_onboard_3q = ### INSERT VALUE
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/24.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
temp_df = ##
print(f'There are {temp_df.shape[0]} trips that carries more than {slaves_onboard_3q} slaves.')
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/24.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
temp_df.plot(# INSERT CODE HERE
# INSERT CODE HERE
# INSERT CODE HERE
rot = 90, # Adjusted accordingly, you can do the same
grid = True, # Adjusted accordingly, you can do the same
figsize = (20,10) # Adjusted accordingly, you can do the same
)
###Output
_____no_output_____
###Markdown
Q25. Plotting the most used ```start_port```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/25.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q26. Plotting the most used ```voyage_pit_stop```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/26.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Q27. Plotting the most used ```End_Port```
###Code
#Once your have verified your answer please uncomment the line below and run it, this will save your code
#%%writefile -a {folder_location}/27.py
#Please note that if you uncomment and run multiple times, the program will keep appending to the file.
# INSERT CODE HERE
###Output
_____no_output_____
###Markdown
Extra Between 1801 - 1900 ConclusionFor this you will write a summary of what steps you followed throughout this notebook, why they were important and your findings.For example:- The findings you observed when working through the 4 centuries of slave trade voyages.- Are our findings reliable or do we need further research?- Was ```vessel_name``` useful?- What could we have found if we kept the captains name column?- What else could we find with this dataset?- What are our limitations?You can also add your answers to the questions posted throughout the notebook here. SubmissionRun this code block to download your answers.
###Code
from google.colab import files
!zip -r "{student_id}.zip" "{student_id}"
files.download(f"{student_id}.zip")
###Output
_____no_output_____
###Markdown
Appendix Connecting to Your Google Drive
###Code
# Start by connecting google drive into google colab
from google.colab import drive
drive.mount('/content/gdrive')
!ls "/content/gdrive/My Drive/DigitalHistory"
cd "/content/gdrive/My Drive/DigitalHistory/tmp/trans-atlantic-slave-trade"
ls
### Extracting ZipFiles
import zipfile
file_location = 'data/trans-atlantic-slave-trade.csv.zip'
zip_ref = zipfile.ZipFile(file_location,'r')
zip_ref.extractall('data/tmp/trans-atlantic-slave-trade')
zip_ref.close()
###Output
_____no_output_____
###Markdown
Checking and Changing Column Types ```df.dtypes``` and ```df.astype()```
###Code
df.dtypes
df.year_of_arrival.astype(int)
df.dtypes
df.year_of_arrival = df.year_of_arrival.astype(int)
df.dtypes
###Output
_____no_output_____
###Markdown
**Extra**:```df.slaves_onboard = df.slaves_onboard.astype(int)```
###Code
df.slaves_onboard = df.slaves_onboard.astype(int)
df.dtypes
df
###Output
_____no_output_____
###Markdown
GeoTagging Locations
###Code
!pip install geopandas
!pip install googlemaps
from googlemaps import Client as GoogleMaps
import pandas as pd
gmaps = GoogleMaps('')# ENTER KEY
df
addresses = df.filter(['Voyage itinerary imputed port where began (ptdepimp) place'], axis=1)
addresses.head()
addresses['long'] = ""
addresses['lat'] = ""
addresses
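# The cell so far only prepares empty 'long'/'lat' columns. One possible way to fill them
# is the googlemaps geocode call; this is only a sketch, the helper name below is ours
# (not part of the original notebook), and it requires a valid API key in the client above.
def geotag_addresses(df, column='Voyage itinerary imputed port where began (ptdepimp) place'):
    for i, row in df.iterrows():
        results = gmaps.geocode(row[column]) # returns a list of candidate matches
        if results:
            location = results[0]['geometry']['location']
            df.at[i, 'lat'] = location['lat']
            df.at[i, 'long'] = location['lng']
    return df
# geotag_addresses(addresses) # uncomment once an API key has been supplied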
###Output
_____no_output_____ |
Lecture 5 - Differential equations/lecture_topic5_differential_eq_part2.ipynb | ###Markdown
Lecture topic 5: Ordinary and partial differential equations Part 2
###Code
from lecture_utils import *
###Output
_____no_output_____
###Markdown
Topics of Part 21. Continuation with integrators - Leapfrog - Verlet 2. Partial differential equations Repetition: Leapfrog methodScheme comparing RK2 and leapfrog (Figure adapted from "Computational Physics" by Marc Newman)- RK2: $$ \begin {align} x\left(t+\frac{1}{2}h\right) &= x(t) + \frac{1}{2}hf(x(t),t)\\ x(t+h) &= x(t) + hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right) \end{align}$$- Leapfrog\begin{align} x\left(t+h\right) &= x(t) + hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\\ x\left(t+\frac{3}{2}h\right) &= x\left(t+\frac{1}{2}h\right) + hf(x(t+h),t+h)\\\end{align} When/why would one use Leapfrog instead of RK?- RK4 more accurate, but not time-reversal symmetric- time-reversal symmetric behavior important for energy conservation- energy conservation important for many problems in physics, for example: - nonlinear pendulum - planet orbiting a star - molecular dynamics (computer simulation of movement of atoms and molecules) Time reversal and energy conservation- forward and backward solution should be identical- forward means we have a positive interval $h$- backwards means we have a negative interval $-h$ Forward and backward calculation with LeapfrogEquations for backward calculation ($h \rightarrow -h$):$$\begin{align} x\left(t-h\right) &= x(t) - hf\left(x\left(t-\frac{1}{2}h\right),t-\frac{1}{2}h\right)\\ x\left(t-\frac{3}{2}h\right) &= x\left(t-\frac{1}{2}h\right) - hf(x(t-h),t-h)\end{align}$$Let's now start the backward calculation from $t+\frac{3}{2}h$, i.e., $t\rightarrow t+\frac{3}{2}h$$$\begin{align} x\left(t+\frac{1}{2}h\right) &= x\left(t + \frac{3}{2}h\right) - hf(x\left(t+h),t+h\right)\\ x(t) &= x(t+h) - hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\end{align}$$ The Leapfrog method is time-reversal symmetric. Time reversal symmetry means that if start from a certain time and go backwards in time we can exactly retrace all steps of the forward solution. Let's start our backward calculation at $t + \frac{3}{2}h$ as shown in the figure. We perform the step between the midpoints from $t+\frac{3}{2}h$ to $t+\frac{1}{2}h$ and the step between the full integer points from $t+h$ to $t$. The same steps are performed in the forward algorithm, just reversed. The mathematical operations are the same, just reversed and the last two equations are identical to the forward equations on slide 3. Forward and backward calculation with RK2Equations for backward calculation ($h \rightarrow -h$):$$ \begin {align} x\left(t-\frac{1}{2}h\right) &= x(t) - \frac{1}{2}hf(x,t)\\ x(t-h) &= x(t) - hf\left(x\left(t-\frac{1}{2}h\right),t-\frac{1}{2}h\right) \end{align}$$Let's start the backward calculation from $t+h$, i.e., $t\rightarrow t+h$$$ \begin {align} x\left(t+\frac{1}{2}h\right) &= x(t+h) - \frac{1}{2}hf(x(t+h),t+h)\\ x(t) &= x(t+h)- hf\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right) \end{align}$$ RK2 is not time-reversal. Let's start the backward calculation from $t+h$. We perform a midpoint step from $t+h$ to $t+\frac{1}{2}h$ and a full step form $t+h$ to $h$. However, in the forward algorithm we don't perform a midpoint step from $t+\frac{1}{2}h$ to $t+h$. It is easy to show that the last two equations are not identical to the RK2 forward equations on slide 3. 
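To make the time-reversal property concrete, the short sketch below (not part of the original lecture code, and using an assumed test system, the harmonic oscillator) integrates forward with leapfrog and then applies the same update rule with $h\rightarrow -h$; up to floating-point rounding it returns exactly to the initial state.
###Code
# Minimal illustration of leapfrog time reversal on an assumed test system (harmonic oscillator)
from numpy import array

def f(r):
    x, v = r
    return array([v, -x], float) # dx/dt = v, dv/dt = -x

h = 0.01
N = 1000
r1 = array([1.0, 0.0], float) # values at integer points
r2 = r1 + 0.5*h*f(r1) # values at half-integer points

for i in range(N): # forward integration
    r1 += h*f(r2)
    r2 += h*f(r1)

for i in range(N): # backward integration: the same operations, exactly reversed
    r2 -= h*f(r1)
    r1 -= h*f(r2)

print(r1) # recovers the initial state [1, 0] up to rounding errors
###Output
_____no_output_____
###Markdown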
Example: Nonlinear pendulum$$\frac{\mathrm{d}^2\theta}{\mathrm{d}t^2} = - \frac{g}{l}\sin(\theta)$$Transformation into two first-order equations$$\begin{align} \frac{\mathrm{d}\theta}{\mathrm{d}t} = \omega, \qquad \frac{\mathrm{d}\omega}{\mathrm{d}t} = - \frac{g}{l}\sin(\theta)\end{align}$$The motion of the pendulum is time-reversal symmetric. The motion the pendulum makes in a single period, is exactly the same backwards as it is forwards. The length of the arm is 10 cm and m =1 kg. The potential energy of the pendulum is $V = mgl(1-\cos(\theta))$ and the kinetic energy is $T= 0.5ml^2(\mathrm{d}\theta/\mathrm{d}t)^2$.
###Code
from numpy import sin, cos, pi, array, arange
from matplotlib import pyplot as plt
""""Definition of parameters and initial conditions"""
g = 9.81
l = 0.1 # Length of arm is 10 cm
a = 0.0 # start time
b = 10.0 # end time
h = 0.001 # time step
start_theta_degree = 10
start_theta = pi*start_theta_degree/180
tpoints = arange(a,b,h)
"""Right-hand side of differential equations"""
def f(r):
theta = r[0]
omega = r[1]
ftheta = omega
fomega = -(g/l)*sin(theta)
return array([ftheta,fomega],float)
""" Potential energy """
def V(theta):
return g*l*(1-cos(theta))
""" Kinetic energy """
def T(omega):
return 0.5*l*l*omega*omega
""" RK2 integration for pendulum """
def rk2_integration(tpoints,start_theta):
#Initial condition: r[0] = start_theta ; and r[1] = omega = 0
r = array([start_theta,0.0],float)
thetapoints = []
Vpoints = []
Tpoints = []
Epoints =[]
for t in tpoints:
thetapoints.append(r[0]*180/pi)
Vpoints.append(V(r[0]))
Tpoints.append(T(r[1]))
Epoints.append(V(r[0])+T(r[1]))
k1 = h*f(r)
k2 = h*f(r+0.5*k1)
r += k2
return thetapoints, Epoints
""" Leapfrog integration for pendulum """
def leapfrog_integration(tpoints,start_theta):
thetapoints = []
Vpoints = []
Tpoints = []
Epoints =[]
r1 = array([start_theta,0.0],float) # Initial value for point 1
r2 = r1 + 0.5*h*f(r1) # Initial value for point 2 = midpoint
for t in tpoints:
thetapoints.append(r1[0]*180/pi)
Vpoints.append(V(r1[0]))
Tpoints.append(T(r1[1]))
Epoints.append(V(r1[0])+T(r1[1]))
r1 += h*f(r2)
r2 += h*f(r1)
return thetapoints, Epoints
"""Solve problem with RK2"""
thetapointsRK2,EpointsRK2 = rk2_integration(tpoints,start_theta)
"""Solve problem with Leapfrog"""
thetapointsLF,EpointsLF = leapfrog_integration(tpoints,start_theta)
"""Plot theta with respect to time"""
plt.rc('font', size=16)
xstart = a
xend = b
plt.figure(1)
plt.xlim(xstart,xend)
plt.ylim(-start_theta_degree-10,start_theta_degree+10)
#plt.ylim(-200,200)
plt.ylabel('Angle displacement theta')
plt.plot(tpoints,thetapointsRK2,label='RK2',linewidth=1.0)
plt.legend()
plt.figure(2)
plt.xlim(xstart,xend)
plt.ylim(-start_theta_degree-10,start_theta_degree+10)
plt.ylabel('Angle displacement theta')
plt.xlabel('Time')
plt.plot(tpoints,thetapointsLF,label='Leapfrog',linewidth=1.0)
plt.legend()
plt.show()
"""Plot total energy with respect to time"""
plt.rc('font', size=16)
xstart = a
xend = b
plt.figure(1)
plt.xlim(xstart,xend)
plt.ylabel('Total energy')
plt.plot(tpoints,EpointsRK2,label='RK2',linewidth=1.0)
plt.legend()
plt.figure(2)
plt.xlim(xstart,xend)
plt.ylabel('Total energy')
plt.xlabel('Time')
plt.plot(tpoints,EpointsLF,label='LF',linewidth=1.0)
plt.plot(tpoints,EpointsRK2,label='RK2',linewidth=1.0)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Now the settings are $t_{end}=b =10$s and $h = 0.001$. Try the following things- decrease $h$ by an order of magnitude - set $h$ to 0.01 and $b$ to 4000s. Especially the last point gives some interesting insights. Watch the videos to get some more explanations. The solution with our numerical solvers is only approximate, which means the total energy of the system is only approximately constant. With RK2 we see clearly a drift in the energy. The leapfrog algorithm conserves energy at the end of a full swing of the pendulum, i.e., at the beginning and the end of the swing the energy will be the same.The energy fluctuates during the course of the swing though, i.e., it is not conserved during fractions of a period. However, at the end of the swing, it will return to the correct value. The leapfrog method is thus useful for solving the equations of motion of energy conserving physical systems. If we wait long enough with RK2, the energy will drift, also with RK4, i.e., the pendulum might stop swinging or the planet might fall out of orbit into the star. Verlet method- specialized method to solve ODEs of the form\begin{align} \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = f(x,t)\end{align}- most important example: Newton's equation of motion\begin{equation} \frac{\mathrm{d}^2\mathbf{r}_i}{\mathrm{d}t^2} = \frac{\mathbf{F}_i}{m_i}\end{equation}- Verlet is a variant of Leapfrog method In the case of molecular dynamics (MD), the force on atom $i$ will depend on the positions of all other atoms in the system. The interaction potentials are also in the case of classical MD non-linear functions of $\mathbf{r}$. In the case of ab initio MD the forces are obtained from quantum mechanics, i.e., the corresponding equations of motion must be always solved numerically. Derivation of Verlet methodWe transform\begin{align} \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = f(x,t)\end{align}to 2 first-order differential equations\begin{equation} \frac{\mathrm{d}x}{\mathrm{d}t} = v, \qquad \frac{\mathrm{d}v}{\mathrm{d}t} = f(x,t)\end{equation} Using Leapfrog we would define a vector $\mathbf{r} = (x,v)$ and combine the two equations to a single vector equation\begin{equation} \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t} = \mathbf{f}(\mathbf{r},t)\end{equation}With Leapfrog, the explicit expression to solve for $\mathbf{r}$ are- Full step\begin{align} x(t+h) &= x(t) + hv\left(t+\frac{1}{2}h\right)\\ v(t+h) &= v(t) + h f\left(x\left(t+\frac{1}{2}h\right),t+\frac{1}{2}h\right)\end{align}- Midpoint step\begin{align} x\left(t+\frac{3}{2}h\right) &= x\left(t+\frac{1}{2}h\right) + hv(t+h)\\ v\left(t+\frac{3}{2}h\right) &= v\left(t+\frac{1}{2}h\right) + hf(x(t+h),t+h)\end{align} We can derive a full solution to the problem by only using\begin{align} x(t+h) &= x(t) + hv\left(t+\frac{1}{2}h\right)\\ v\left(t+\frac{3}{2}h\right) &= v\left(t+\frac{1}{2}h\right) + hf(x(t+h),t+h)\end{align}The initial value for $v\left(t+\frac{1}{2}h\right)$ can be obtained from Euler's method with a step size of $\frac{1}{2}h$\begin{equation} v\left(t + \frac{1}{2}h\right) = v(t) + \frac{1}{2}hf(x(t),t)\end{equation} We never need to calculate $v$ at any integer point or $x$ at half integers. This is an improvement over leapfrog where we would solve all four equations, i.e., we have to do only half the work compared to leapfrog. This simplification only works for differential equations that have the specific form $ \frac{\mathrm{d}^2x}{\mathrm{d}t^2} = f(x,t)$. 
For this type of ODE, the right-hand side of the first equation ($ \frac{\mathrm{d}x}{\mathrm{d}t} =v$) depends on $v$, but not on $x$. The right-hand side of the second equation ($\frac{\mathrm{d}v}{\mathrm{d}t} = f(x,t)$) depends on $x$, but not on $v$. However, the equations of motion are ODEs of this form, and solving them is a pretty common problem in physics. There is a small problem so far: We know $v$ at half-integer points and $x$ at full integer points, i.e., we never know both quantities at the same time. This is problematic if we want to calculate the potential, kinetic and total energy of the system, because then we have to know $x$ and $v$ at the same time. To make sure that we also know $v$ at the integer points, we perform an additional half step. Let's assume we knew $v(t+h)$; we could then perform a half-step with step size $-\frac{1}{2}h$ using Euler's method\begin{equation} v\left(t+\frac{1}{2}h\right) = v(t+h) - \frac{1}{2}hf(x(t+h),t+h)\end{equation}Rearranging yields:\begin{equation} v(t+h) = v\left(t+\frac{1}{2}h\right) + \frac{1}{2}hf(x(t+h),t+h)\end{equation}In combination with the equations on the previous slide, this gives us the Verlet method. Working equations for the Verlet algorithmStart:\begin{equation} v\left(t + \frac{1}{2}h\right) = v(t) + \frac{1}{2}hf(x(t),t)\end{equation}Then iterate:\begin{align} x(t+h) &= x(t) + hv\left(t+\frac{1}{2}h\right)\\ k &= hf(x(t+h),t+h)\\ v(t+h) &= v\left(t+\frac{1}{2}h\right) + \frac{1}{2}k\\v\left(t+\frac{3}{2}h\right) &= v\left(t+\frac{1}{2}h\right) + k\end{align} We are given initial values of $x$ and $v$ at some time $t$. We start by calculating $v$ at $t+\frac{1}{2}h$. The subsequent values of $x$ and $v$ are then derived by applying the set of equations above. Verlet working equations for simultaneous differential equationsLet's assume we have an equation of the form\begin{align} \frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}t^2} = \mathbf{f}(\mathbf{r},t)\end{align}where $\mathbf{r}=(x,y,...)$ is a vector. The Verlet working equations then transform to Start:\begin{equation} \mathbf{v}\left(t + \frac{1}{2}h\right) = \mathbf{v}(t) + \frac{1}{2}h\mathbf{f}(\mathbf{r}(t),t)\end{equation}Iterate:\begin{align} \mathbf{r}(t+h) &= \mathbf{r}(t) + h\mathbf{v}\left(t+\frac{1}{2}h\right)\\ \mathbf{k} &= h\mathbf{f}(\mathbf{r}(t+h),t+h)\\ \mathbf{v}(t+h) &= \mathbf{v}\left(t+\frac{1}{2}h\right) + \frac{1}{2}\mathbf{k}\\ \mathbf{v}\left(t+\frac{3}{2}h\right) &= \mathbf{v}\left(t+\frac{1}{2}h\right) + \mathbf{k}\end{align} When solving equations of motion, we usually have simultaneous second-order differential equations with position vector $\mathbf{r}=(x,y,z)$, i.e., three simultaneous second-order differential equations, which can be transformed into 6 simultaneous first-order equations. If we want to solve the equations of motion for $n$ interacting particles, we have $6n$ simultaneous equations to solve. There are different flavors of the Verlet method. What we have discussed here is often called the Velocity Verlet algorithm.
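As a concrete illustration, here is a minimal sketch (not part of the original lecture material) of the Velocity Verlet working equations applied to the nonlinear pendulum from earlier; the parameters mirror the RK2/leapfrog example above.
###Code
# Minimal Velocity Verlet sketch for the nonlinear pendulum d^2(theta)/dt^2 = -(g/l)*sin(theta)
from numpy import sin, pi, arange
from matplotlib import pyplot as plt

g = 9.81 # gravitational acceleration
l = 0.1 # length of the pendulum arm
h = 0.001 # time step
a, b = 0.0, 10.0 # start and end time

def accel(theta): # the f(x,t) of the Verlet equations; here it depends only on theta
    return -(g/l)*sin(theta)

theta = pi*10/180 # initial angle of 10 degrees
omega = 0.0 # initial angular velocity
omega_half = omega + 0.5*h*accel(theta) # start: v(t + h/2)

tpoints = arange(a, b, h)
thetapoints = []
for t in tpoints:
    thetapoints.append(theta*180/pi)
    theta += h*omega_half # x(t+h) = x(t) + h*v(t+h/2)
    k = h*accel(theta) # k = h*f(x(t+h), t+h)
    omega = omega_half + 0.5*k # v(t+h), needed e.g. for the energy
    omega_half += k # v(t+3h/2)

plt.plot(tpoints, thetapoints)
plt.xlabel('Time')
plt.ylabel('Angle displacement theta')
plt.show()
###Output
_____no_output_____
###Markdown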
Error propagation with the (Velocity) Verlet method - Since it is a variant of leapfrog, Verlet also conserves the energy- error for a single step is $\mathcal{O}(h^3)$ - accumulated error is $\mathcal{O}(h^2)$ Error propagation with the Verlet method Example: Earth (blue) + moon (gray) orbiting around the sun (yellow)- Equations of motion solved by the (Velocity) Verlet method- yields the right behavior- Video by Miguel Caro (Advanced Statistical Physics course at Aalto), see also https://youtu.be/KQAP90SWtiQ
###Code
play_VelocityVerlet()
###Output
_____no_output_____
###Markdown
Summary: Methods for solving ODEs and error| method | single step error | accumulated error || --- | --- | --- || Euler | $\mathcal{O}(h^2)$ | $\mathcal{O}(h)$ || RK2 | $\mathcal{O}(h^3)$ | $\mathcal{O}(h^2)$ || RK4 | $\mathcal{O}(h^5)$ | $\mathcal{O}(h^4)$ || Leapfrog | $\mathcal{O}(h^3)$ | $\mathcal{O}(h^2)$ || (Velocity) Verlet | $\mathcal{O}(h^3)$ | $\mathcal{O}(h^2)$ |Leapfrog and Velocity Verlet are in addition time-reversal symmetric and conserve, e.g., energy.Lesson learned: it is important to choose a sensible integration scheme for the ODEs together with a proper step size $h$. Partial differential equations Many problems in physics are partial differential equations, e.g.,- Laplace and Poisson equations- Maxwell's equations- Schrödinger equation- wave equation- diffusion equation$\rightarrow$ solving them is usually computationally more demanding than the ODE case Types of problems1. boundary value problems2. initial value problems- $\rightarrow$ initial value problems are typically harder to solve for partial differential equations- we will learn one method for each For ODEs we discussed only initial value problems, meaning that we are solving differential equations given the initial values of the variables. This is the most common form for differential equations in physics. However, there are also boundary value problems. For instance, let's consider the example of a ball thrown into the air. We could specify two initial conditions: the height of the ball at $t=0$ and its initial upward velocity. Another possibility is to formulate this example as a boundary value problem. We could specify our two conditions as an initial and an end condition instead. We could specify that the ball has the height $x(t=0)=0$ and $x(t_1)=0$, where $t_1$ is a later time, i.e., we specify the time when the ball is thrown and when it lands.Initial value problems are generally easier to solve for ODEs. For partial differential equations, the opposite is true. Relaxation method for boundary value problems(Figure from "Computational Physics" by Marc Newman)To introduce the method we will look at a simple electrostatics problem: an empty box with conducting walls, all of which are grounded to 0 V except for the wall at the top, which is at some other voltage $V$. The (very) small gaps between the top wall and the others are intended to show that they are insulated from one another.Goal: determine the value of the electrostatic potential at points within the box This is also a boundary value problem: we want to describe the behavior of a variable in space and we are given some constraints on the variable around that space.
We can find the electrostatic potential $\phi$ inside the box by solving the 2D Laplace equation\begin{equation} \frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} = 0\end{equation}with the boundary conditions that $\phi =V$ on the top wall and $\phi =0$ on the other walls.Procedure: - use the method of finite differences to express the second derivatives of $\phi$ $\rightarrow$ see lecture topic 2 - use the relaxation method to solve the obtained set of linear simultaneous equations $\rightarrow$ see lecture topic 3 Repetition: Finite differences for second derivativesWe first calculate the first derivatives using the central difference method for derivatives at $x+h/2$ and $x-h/2$.\begin{equation} f'(x+h/2) \approx \frac{f(x+h)-f(x)}{h} \qquad f'(x-h/2) \approx \frac{f(x)-f(x-h)}{h}\end{equation}Now we apply the central difference method again for the second derivative\begin{align} f''(x) &\approx \frac{f'(x+h/2)-f'(x-h/2)}{h}\\ & = \frac{f(x+h)-2f(x)+f(x-h)}{h^2}\end{align} Apply the finite difference method to the Laplace equation(Figure from "Computational Physics" by Marc Newman)- divide the space in the box into grid points with spacing $a$ as shown in the figure. - put points also on the interior and boundaries of that space. Now calculate the 2nd derivatives:\begin{align} \frac{\partial^2\phi}{\partial x^2} &=\frac{\phi(x+a,y) + \phi(x-a,y) -2\phi(x,y)}{a^2}\\ \frac{\partial^2\phi}{\partial y^2} &=\frac{\phi(x,y+a) + \phi(x,y-a) -2\phi(x,y)}{a^2}\end{align}The Laplace equation in 2D is now:\begin{equation}\frac{\partial^2\phi}{\partial x^2} + \frac{\partial^2\phi}{\partial y^2} = \frac{\phi(x+a,y) + \phi(x-a,y) +\phi(x,y+a) + \phi(x,y-a)-4\phi(x,y)}{a^2} = 0\end{equation} We basically add the values of $\phi$ at all the grid points adjacent to $(x,y)$, subtract 4 times the value at $(x,y)$, and then divide by $a^2$. We need to solve\begin{equation} \phi(x+a,y) + \phi(x-a,y) +\phi(x,y+a) + \phi(x,y-a)-4\phi(x,y) = 0\end{equation}- we have one equation like this for every grid point $(x,y)$- the solution to the entire set gives us $\phi(x,y)$ at every grid point$\rightarrow$ large set of linear simultaneous equations$\rightarrow$ the method of choice here is the relaxation method (lecture topic 3) This linear set of equations could be solved, in principle, with Gaussian elimination or LU decomposition. In this case, using the relaxation method is a better choice because it is computationally cheaper. We introduced the relaxation method for non-linear equations, but it can, of course, also be applied to linear equations. Application of the relaxation method to our electrostatics problemLet's first rearrange\begin{equation}\phi(x,y) =\frac{1}{4} (\phi(x+a,y) + \phi(x-a,y) +\phi(x,y+a) + \phi(x,y-a)) \end{equation}Procedure:- fix $\phi(x,y)$ at the boundaries of the system - guess some initial values for $\phi(x,y)$; they can be bad, it doesn't matter- calculate new values $\phi'$ with the guessed initial values- repeat until convergence is reached$\rightarrow$ this is also known as the Jacobi method Solution of the 2D Laplace equationLet's now solve our electrostatics problem with the Jacobi method assuming that the box is 1 m long on each side, $V = 1$ volt and the grid spacing is $a = 1$ cm. We have 100 grid spacings, or 101 grid points if we count the points at both the beginning and the end.
###Code
from numpy import empty,zeros,max
from matplotlib import pyplot as plt
#Constants
M = 100 # Grid squares on a side
V = 1.0 # Voltage at top wall
target = 1e-6 # Target accuracy
# Create arrays to hold potential values
phi = zeros([M+1,M+1],float)
phi[0,:] = V
phiprime = empty([M+1,M+1],float)
# Main loop
delta = 1.0
while delta>target:
# Calculate new values of the potential
for i in range(M+1):
for j in range(M+1):
if i==0 or i==M or j==0 or j==M:
phiprime[i,j] = phi[i,j]
else:
phiprime[i,j] = (phi[i+1,j] + phi[i-1,j] \
+ phi[i,j+1] + phi[i,j-1])/4
# Calculate maximum difference from old values
delta = max(abs(phi-phiprime))
# Swap the two arrays around
phi,phiprime = phiprime,phi
# Alternative to swapping
#phi = phiprime
#phiprime = empty([M+1,M+1],float)
# Make a plot
plt.imshow(phi)
###Output
_____no_output_____
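###Markdown
The notes below mention overrelaxation and the Gauss-Seidel method as faster alternatives. As a rough sketch (the relaxation parameter omega below is an assumed, untuned value; this is not part of the original lecture code), the update loop above can be modified to overwrite phi in place and overshoot each update:
###Code
# Sketch of Gauss-Seidel with overrelaxation applied to the phi array from the cell above
omega = 0.9 # assumed relaxation parameter; omega = 0 gives plain Gauss-Seidel
delta = 1.0
while delta > target:
    delta = 0.0
    for i in range(1, M):
        for j in range(1, M):
            old = phi[i, j]
            new = (phi[i+1, j] + phi[i-1, j] + phi[i, j+1] + phi[i, j-1])/4
            phi[i, j] = (1 + omega)*new - omega*old # update in place, no second array needed
            diff = abs(phi[i, j] - old)
            if diff > delta:
                delta = diff
###Output
_____no_output_____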
###Markdown
The produced figure shows that there is a region of high electric potential around the top wall of the box, as expected, and low potential around the other three walls. Note that the program will run for a while. A few notes about the program:- if the point $i,j$ is at a boundary, it is set to the start values; the values at the boundaries never change- if the point $i,j$ is not at a boundary, calculate $\phi'$ Things to note about the Jacobi methodAccuracy:- only approximate since we use finite differences for derivatives- a small target accuracy won't fix this- a higher-order derivative approximation is necessary to improve accuracyAccessible points - the calculation gives the value of $\phi$ only at the grid points and not elsewhere- in-between values: interpolation schemes possible Division of space - boundaries around the space may not always be square- it can be difficult to divide the space with a square grid $\rightarrow$ grid points don't fall on boundaries Other, faster methods for boundary value problems- overrelaxation - Gauss-Seidel method Initial value problems- starting point of variable known- goal: prediction of future variation as function of timeExample: Diffusion equation\begin{equation} \frac{\partial \phi}{\partial t} = D\frac{\partial^2\phi}{\partial x^2}\end{equation}- use a space-time grid? - we have boundary conditions in the spatial dimension ($x$), but not in the time dimension$\rightarrow$ the relaxation method breaks down because we don't know what value to use for the time-like end of the grid Compared to the 2D Laplace equation, we now have the independent variables $x$ and $t$ instead of $x$ and $y$. One might think that we can proceed as before and create a space-time grid, then write the derivatives in finite difference form and get a set of simultaneous equations that can be solved by the relaxation method. This doesn't work because we only have boundary conditions in the spatial direction $x$. For the time dimension we have an initial condition. We know where the value starts, but not where it ends. The relaxation method thus breaks down because we don't know what value of $\phi$ to use for the time-like end of the grid. FTCS method- short for forward-time centered-space method- method to solve initial value problems for partial differential equationsProcedure: - divide the spatial dimension $x$ into a grid of points with spacing $a$ - calculate the second derivative with respect to $x$ with finite differences $\rightarrow$ simultaneous ordinary differential equations are obtained - use the Euler method to solve them For the ODEs we arrived at the conclusion that we shouldn't use Euler's method due to its poor accuracy. Why do we use it here? Estimating the second derivative with finite differences is not very accurate. There is no point in using a very accurate solver. Euler's method gives errors which are comparable to the error introduced by the finite difference approach. Example: diffusion equation\begin{equation} \frac{\partial \phi}{\partial t} = D\frac{\partial^2\phi}{\partial x^2}\end{equation}Take the second derivative with finite differences\begin{align} \frac{\partial^2\phi}{\partial x^2} =\frac{\phi(x+a,t) + \phi(x-a,t) -2\phi(x,t)}{a^2}\end{align}and insert it into the diffusion equation\begin{equation} \frac{\partial \phi}{\partial t} = \frac{D}{a^2}\left[\phi(x+a,t) + \phi(x-a,t) -2\phi(x,t)\right]\end{equation}We can think of the value of $\phi$ at the different grid points as separate variables $\rightarrow$ we have a set of simultaneous ODEs now.
We have the ODE\begin{equation} \frac{\mathrm{d}\phi}{\mathrm{d}t} = f(\phi,t)\end{equation}where $f(\phi,t)$ is the right-hand side of the last equation. Solving with Euler yields\begin{equation} \phi(x,t+h) = \phi(x,t) + h\frac{D}{a^2}\left[\phi(x+a,t) + \phi(x-a,t) -2\phi(x,t)\right]\end{equation}If we know $\phi$ at every grid point $x$ at some time $t$, then we get from this equation the value at each grid point at time $t+h$. Example: Solving the heat equation with FTCSWe have a steel container, which is 1 cm thick and is initially at a uniform temperature of 20 degrees Celsius everywhere. The container is placed in a bath of cold water at 0 degrees Celsius and filled with hot water at 50 degrees Celsius. Assumptions: the container is arbitrarily wide. Neither the cold nor the hot water changes temperature. We have to solve the 1D diffusion equation for the temperature $T$\begin{equation} \frac{\partial T}{\partial t} = D \frac{\partial^2 T}{\partial x^2}\end{equation}The $x$-axis is divided into 100 equal intervals (101 grid points in total, counting the boundaries). The first and last points have fixed temperatures of 50 degrees Celsius and 0 degrees Celsius, respectively. Intermediate points are initially at 20 degrees Celsius. The diffusion coefficient is $D = 4.25\times10^{-6}\text{m}^2\text{s}^{-1}$.Task: Plot the temperature profile at times $t = 0.01, 0.1, 0.4, 1, 10$ s.
###Code
from numpy import empty
from matplotlib import pyplot as plt
import time
# Constants
L = 0.01 # Thickness of steel in meters
D = 4.25e-6 # Thermal diffusivity
N = 100 # Number of divisions in grid
a = L/N # Grid spacing
h = 1e-4 # Time-step
epsilon = h/1000
Tlo = 0.0 # Low temperature in Celcius
Tmid = 20.0 # Intermediate temperature in Celcius
Thi = 50.0 # Hi temperature in Celcius
t1 = 0.01
t2 = 0.1
t3 = 0.4
t4 = 1.0
t5 = 10.0
tend = t5 + epsilon
# Create arrays
T = empty(N+1,float)
T[0] = Thi
T[N] = Tlo
T[1:N] = Tmid
Tp = empty(N+1,float)
Tp[0] = Thi
Tp[N] = Tlo
# Main loop
t = 0.0
c = h*D/(a*a)
start = time.time()
while t<tend:
# Calculate the new values of T
#for i in range(1,N): ## Loop is the slow alternative
# Tp[i] = T[i] + c*(T[i+1]+T[i-1]-2*T[i])
Tp[1:N] = T[1:N] + c* (T[2:N+1]+T[0:N-1]-2*T[1:N])
T,Tp = Tp,T
t += h
# Make plots at the given times
if abs(t-t1)<epsilon:
plt.plot(T,label='t1')
if abs(t-t2)<epsilon:
plt.plot(T,label='t2')
if abs(t-t3)<epsilon:
plt.plot(T,label='t3')
if abs(t-t4)<epsilon:
plt.plot(T,label='t4')
if abs(t-t5)<epsilon:
plt.plot(T,label='t5')
end = time.time()
print(end - start)
plt.xlabel("x")
plt.ylabel("T in degree Celsius")
plt.legend()
###Output
0.5903024673461914
|
Spam Email Classifier/Bayes Classifier - Training.ipynb | ###Markdown
Notebook Imports
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Constants
###Code
TRAIN_DATA_FILE = 'SpamData/02_Training/train-data.txt'
TEST_DATA_FILE = 'SpamData/02_Training/test-data.txt'
TOKEN_SPAM_PROB_FILE = 'SpamData/03_Testing/prob-spam.txt'
TOKEN_HAM_PROB_FILE = 'SpamData/03_Testing/prob-nonspam.txt'
TOKEN_ALL_PROB_FILE = 'SpamData/03_Testing/prob-all-tokens.txt'
TEST_FEATURE_MATRIX = 'SpamData/03_Testing/test-features.txt'
TEST_TARGET_FILE = 'SpamData/03_Testing/test-target.txt'
VOCAB_SIZE = 2500
###Output
_____no_output_____
###Markdown
Read and Load features from .txt Files into NumPy Array
###Code
sparse_train_data = np.loadtxt(TRAIN_DATA_FILE, delimiter=' ', dtype=int)
sparse_test_data = np.loadtxt(TEST_DATA_FILE, delimiter=' ', dtype=int)
sparse_train_data[:5]
sparse_test_data[-5:]
print('Number of rows in training file:', sparse_train_data.shape[0])
print('Number of rows in test file:', sparse_test_data.shape[0])
print('Number of emails in training file:', np.unique(sparse_train_data[:, 0]).size)
print('Number of emails in testing file:', np.unique(sparse_test_data[:, 0]).size)
###Output
Number of emails in testing file: 1724
###Markdown
How to create an empty DataFrame
###Code
column_names = ['DOC_ID'] + ['CATEGORY'] + list(range(0, VOCAB_SIZE))
column_names[:5]
len(column_names)
index_names = np.unique(sparse_train_data[:, 0])
index_names
full_train_data = pd.DataFrame(index=index_names, columns=column_names)
full_train_data.fillna(value=0, inplace=True)
full_train_data.head()
###Output
_____no_output_____
###Markdown
Create a Full Matrix from Sparse Matrix
###Code
def make_full_matrix(sparse_matrix, nr_words, doc_idx=0, word_idx=1, cat_idx=2, freq_idx=3):
"""
    Form a full matrix from a sparse matrix. Return a pandas DataFrame.
    Keyword arguments:
sparse_matrix -- numpy array
nr_words -- size of the vocabulary. Total number of tokens.
doc_idx -- position of the document id in sparse matrix. Default: 1st column
word_idx -- position of the word id in sparse matrix. Default: 2nd column
cat_idx -- position of the label (spam is 1, nonspam is 0). Default 3rd column
freq_idx -- position of occurrence of word in sparse matrix. Default 4th column
"""
column_names = ['DOC_ID'] + ['CATEGORY'] + list(range(0, nr_words))
doc_id_names = np.unique(sparse_matrix[:, 0])
full_matrix = pd.DataFrame(index=doc_id_names, columns=column_names)
full_matrix.fillna(value=0, inplace=True)
for i in range(sparse_matrix.shape[0]):
doc_nr = sparse_matrix[i][doc_idx]
word_id = sparse_matrix[i][word_idx]
label = sparse_matrix[i][cat_idx]
occurrence = sparse_matrix[i][freq_idx]
full_matrix.at[doc_nr, 'DOC_ID'] = doc_nr
full_matrix.at[doc_nr, 'CATEGORY'] = label
full_matrix.at[doc_nr, word_id] = occurrence
full_matrix.set_index('DOC_ID', inplace=True)
return full_matrix
%%time
full_train_data = make_full_matrix(sparse_train_data, VOCAB_SIZE)
full_train_data.head()
full_train_data.tail()
full_train_data.shape
###Output
_____no_output_____
###Markdown
Training the Naive Bayes Model Calculating the Probability of Spam
###Code
full_train_data.CATEGORY.size
full_train_data.CATEGORY.sum()
prob_spam = full_train_data.CATEGORY.sum() / full_train_data.CATEGORY.size
print('Probability of Spam is:', prob_spam)
###Output
Probability of Spam is: 0.310989284824321
###Markdown
Total Number of Words / Tokens
###Code
full_train_features = full_train_data.loc[:, full_train_data.columns != 'CATEGORY']
full_train_features.head()
email_lengths = full_train_features.sum(axis=1)
email_lengths
# Total Word Count
total_wc = email_lengths.sum()
total_wc
###Output
_____no_output_____
###Markdown
Number of Tokens in Spam and Ham emails
###Code
spam_lengths = email_lengths[full_train_data.CATEGORY == 1]
spam_lengths
spam_wc = spam_lengths.sum()
spam_wc
ham_lengths = email_lengths[full_train_data.CATEGORY == 0]
ham_lengths
nonspam_wc = ham_lengths.sum()
nonspam_wc
email_lengths.shape[0] - spam_lengths.shape[0] - ham_lengths.shape[0]
total_wc - spam_wc - nonspam_wc
print('Average number of words in spam emails: {:.0f}'.format(spam_wc/spam_lengths.shape[0]))
print('Average number of words in ham emails: {:.0f}'.format(nonspam_wc/ham_lengths.shape[0]))
full_train_features.shape
###Output
_____no_output_____
###Markdown
Summing the tokens occurring in spam
###Code
train_spam_tokens = full_train_features.loc[full_train_data.CATEGORY == 1]
train_spam_tokens
# We do not want zero in our calculations. So we add 1 to each! It is called Laplace Smoothing Technique
summed_spam_tokens = train_spam_tokens.sum(axis=0) + 1
summed_spam_tokens
###Output
_____no_output_____
###Markdown
Summing the tokens occurring in ham
###Code
train_ham_tokens = full_train_features.loc[full_train_data.CATEGORY == 0]
train_ham_tokens
# We do not want zero in our calculations. So we add 1 to each! It is called Laplace Smoothing Technique
summed_ham_tokens = train_ham_tokens.sum(axis=0) + 1
summed_ham_tokens
###Output
_____no_output_____
###Markdown
P(Token | Spam) - Probability that a Token Occurs given the email is spam
###Code
# Vocab Size added to deal with effect of Laplace Smoothing Technique which we used before to avoid zeroes!
prob_tokens_spam = summed_spam_tokens / (spam_wc + VOCAB_SIZE)
prob_tokens_spam
prob_tokens_spam.sum()
###Output
_____no_output_____
###Markdown
P(Token | Ham) - Probability that a Token Occurs given the email is ham (nonspam)
###Code
# Vocab Size added to deal with effect of Laplace Smoothing Technique which we used before to avoid zeroes!
prob_tokens_nonspam = summed_ham_tokens / (nonspam_wc + VOCAB_SIZE)
prob_tokens_nonspam
prob_tokens_nonspam.sum()
###Output
_____no_output_____
###Markdown
P(Token) - Probability that Token Occurs
###Code
prob_tokens_all = full_train_features.sum(axis=0) / total_wc
prob_tokens_all
prob_tokens_all.sum()
###Output
_____no_output_____
###Markdown
Save Trained Model
###Code
np.savetxt(TOKEN_SPAM_PROB_FILE, prob_tokens_spam)
np.savetxt(TOKEN_HAM_PROB_FILE, prob_tokens_nonspam)
np.savetxt(TOKEN_ALL_PROB_FILE, prob_tokens_all)
###Output
_____no_output_____
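###Markdown
The saved probabilities are everything the classifier needs at test time. As a rough sketch (the actual evaluation happens in a separate testing notebook, and the helper below is only illustrative), spam/ham decisions can be made by comparing joint log probabilities, which avoids numerical underflow from multiplying many small numbers:
###Code
# Illustrative sketch of how the trained probabilities could be applied to a feature matrix
def predict_spam(features, prob_spam, prob_tokens_spam, prob_tokens_nonspam):
    joint_log_spam = features.dot(np.log(prob_tokens_spam)) + np.log(prob_spam)
    joint_log_ham = features.dot(np.log(prob_tokens_nonspam)) + np.log(1 - prob_spam)
    return (joint_log_spam > joint_log_ham).astype(int) # 1 = spam, 0 = ham
###Output
_____no_output_____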
###Markdown
Prepare Test Data
###Code
sparse_test_data.shape
%%time
full_test_data = make_full_matrix(sparse_test_data, VOCAB_SIZE)
X_test = full_test_data.loc[:, full_test_data.columns != 'CATEGORY']
y_test = full_test_data.CATEGORY
np.savetxt(TEST_TARGET_FILE, y_test)
np.savetxt(TEST_FEATURE_MATRIX, X_test)
###Output
_____no_output_____ |
test-notebooks/sp-location-extraction-testing.ipynb | ###Markdown
This notebook explores methods for extracting locations related to mentions of taxa.Status: In Development Last Updated: 201904Summary: Using output from the eXtract Dark Data (xDD) (previously named GeoDeepDive) database we are exploring ways to extract information about species/taxa of interest from literature. These efforts are using a list of taxa being studied by the USGS Nonindigenous Aquatic Species Program, but should be applicable to any list of taxanomic names.Inputs: *Taxa Information (url='https://nas.er.usgs.gov/api/v1/species') *xDD processed data, output from https://github.com/dwief-usgs/app-template-nasContact: Daniel Wieferich ([email protected])
###Code
#Import needed packages
import pandas as pd
import requests
#Import Functions
def get_species_list(url='https://nas.er.usgs.gov/api/v1/species'):
"""return list of taxa information for NAS species of interest
----------
URL : API that returns JSON results of NAS specie taxonomy
"""
try:
r = requests.get(url)
if r.status_code == 200:
return r.json()
else:
raise Exception('NAS API URL returning: {}'.format(r.status_code))
except Exception as e:
raise Exception(e)
#Keeps rows in pd from being truncated
pd.set_option('display.max_colwidth', -1)
###Output
_____no_output_____
###Markdown
Step 1--------------*Import source datasets including a list of taxa names (from the NAS API) and literature passages from xDD Progress---------------Currently using a set of passages from a dam removal exercise for testing while taxa information is being processed by xDD staff-Need to rethink logic behind which taxonomic names to process, based on conversations with the NAS team. For example, species 3118 is returning a common name of "mussel". This is currently being processed but should not be.
###Code
#Import example passage output from xDD
#This will be updated with taxa mentions coming from
xdd_export = 'dam_year_river_22h33m_06Nov2018_a4c1766/river-cand-df.csv'
xdd_df = pd.read_csv(xdd_export, encoding='utf-8')
#This is a big file, let's make it smaller (1,000 records) for testing purposes
xdd_df.shape
xdd_df_sub = xdd_df[:1000]
#Run function to return NAS taxa information as JSON response
taxa_r = get_species_list()
taxa_list = []
for taxa in taxa_r['results']:
#captures a hybrid based on x of species, only return common name
if ' x ' in taxa['species']:
taxa_list.append({'speciesID': taxa['speciesID'], 'common_name': taxa['common_name']})
#for taxa with species = sp., return genus and common name
elif 'sp.' in taxa['species']:
taxa_list.append({'speciesID': taxa['speciesID'], 'genus': taxa['genus'], 'common_name': taxa['common_name']})
#for everything else return scientific name (including subspecies and variety as available) and common name
else:
sci_name = (taxa['genus']+' '+taxa['species'] + ' '+ taxa['subspecies'] + ' ' + taxa['variety']).strip()
taxa_list.append({'speciesID': taxa['speciesID'], 'sci_name': sci_name, 'common_name': taxa['common_name']})
taxa_df = pd.DataFrame(taxa_list)
###Output
_____no_output_____
###Markdown
Step 2--------------*For each passage identify mentions of species and explore ways to extract location information Progress---------------Starting with basic use of NER tags within close proximity-We have efforts in progress to create NER tags specific to rivers (using SpaCy), to better understand and extract river mentions-A first pass on running this with the full 2 million records and full taxa list did not complete in a full 8 hr work day... need to incorporate a mode of doing batches, sketched below
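One possible way to handle the full set of records is to process the passages in batches rather than holding everything in memory at once. The sketch below (the chunk size is an arbitrary placeholder) uses the chunked reader built into pandas; the matching loop from the next cell would slot in where the commented lines are.
###Code
# Sketch of batch processing with pandas chunks; chunk size is a placeholder value
total_rows = 0
for batch_num, chunk in enumerate(pd.read_csv(xdd_export, encoding='utf-8', chunksize=100000)):
    total_rows += len(chunk)
    # mentions = match_taxa(chunk, taxa_df) # hypothetical helper wrapping the loop below
    # pd.DataFrame(mentions).to_csv('./mention_df_batch_' + str(batch_num) + '.csv', index=False)
print(total_rows)
###Output
_____no_output_____
###Markdown
The cell below keeps the current (unbatched) matching loop: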
###Code
import ast
mention = []
for row_xdd in xdd_df_sub.itertuples():
for row_taxa in taxa_df.itertuples():
speciesID = row_taxa.speciesID
if str(row_taxa.sci_name)!= 'nan' and str(row_taxa.sci_name) in ast.literal_eval(row_xdd.passage):
            #print (str(speciesID)+': '+str(row_xdd.passage))
            #record speciesID, passageID, passage
            mention.append({'species_id': speciesID, 'taxa':row_taxa.sci_name, 'passage': row_xdd.passage, 'docid':row_xdd.docid, 'ner': row_xdd.ner, 'sentid':row_xdd.sentid})
#if str(row_taxa.genus)!= 'nan' and str(row_taxa.genus) in ast.literal_eval(row_river.passage):
# mention.append({'species_id': speciesID, 'taxa':row_taxa.genus, 'passage': row_xdd.passage, 'docid':row_xdd.docid, 'ner': row_xdd.ner, 'sentid':row_xdd.sentid})
#print (row_taxa.genus)
#print (str(speciesID)+': '+str(row_river.passage))
#if str(row_taxa.common_name)!= 'nan' and str(row_taxa.common_name)!='' and str(row_taxa.common_name) in ast.literal_eval(row_river.passage):
# mention.append({'species_id': speciesID, 'taxa':row_taxa.common_name, 'passage': row_xdd.passage, 'docid':row_xdd.docid, 'ner': row_xdd.ner, 'sentid':row_xdd.sentid})
#print (row_taxa.common_name)
#print (str(speciesID)+': '+str(row_xdd.passage))
mention_df = pd.DataFrame(mention)
mention_df.to_csv("./mention_df_sciname.csv", sep=',', index=False)
mention_df.tail()
#List Pairs of Species / Locations using NER tags
import itertools
def intervals_extract(iterable):
iterable = sorted(set(iterable))
for key, group in itertools.groupby(enumerate(iterable),
lambda t: t[1] - t[0]):
group = list(group)
yield [group[0][1], group[-1][1]]
import ast
for row_xdd in xdd_df_sub.itertuples():
passage = list(ast.literal_eval(row_xdd.passage))
passage_str = ' '.join(word for word in passage)
ner = list(ast.literal_eval(row_xdd.ner))
docid = row_xdd.docid
sentid = row_xdd.sentid
if 'Anguilla' in passage_str:
print (passage_str)
index_locations = list([i for i,s in enumerate(ner) if 'LOCATION' in s])
location_intervals = list(intervals_extract(index_locations))
#index_taxa = list([i for i,s in enumerate(passage) if 'Anguilla' in s])
print (location_intervals)
#for i in index_locations:
# p = passage[i]
# n = ner[i]
# print (sentid + ': ' + str(i)+ ': '+ p)
# print (i)
###Output
Generally , there is no gradient in salinity between the lower lakes and the Coorong ; instead , there is an abrupt transition between fresh and brackish/marine salinities . The impact that changes in such physiochemical signals have on the upstream movements of these species is uncertain . In the Murray-Darling Basin , connectivity between the Southern Ocean , estuary and the freshwater environments of the lower lakes and Murray River is imperative for at least ๏ฌve species of diadromous ๏ฌshes , namely anadromous Short-headed and Pouched Lamprey -LRB- Mordacia mordax and Geotria australis -RRB- and catadromous Common Galaxias -LRB- Galaxias maculatus -RRB- , Congolli and Short-๏ฌnned Eel -LRB- Anguilla australis -RRB- .
[[14, 14], [50, 51], [56, 57], [69, 70], [92, 92], [109, 109]]
The impact that changes in such physiochemical signals have on the upstream movements of these species is uncertain . In the Murray-Darling Basin , connectivity between the Southern Ocean , estuary and the freshwater environments of the lower lakes and Murray River is imperative for at least ๏ฌve species of diadromous ๏ฌshes , namely anadromous Short-headed and Pouched Lamprey -LRB- Mordacia mordax and Geotria australis -RRB- and catadromous Common Galaxias -LRB- Galaxias maculatus -RRB- , Congolli and Short-๏ฌnned Eel -LRB- Anguilla australis -RRB- . The original intent of the Sea to Hume Dam Fish Passage programme was to construct a number of experimen - tal ๏ฌshways at the Murray barrages use assessment results to inform additional ๏ฌshways in the region .
[[21, 22], [27, 28], [40, 41], [63, 63], [80, 80]]
In the Murray-Darling Basin , connectivity between the Southern Ocean , estuary and the freshwater environments of the lower lakes and Murray River is imperative for at least ๏ฌve species of diadromous ๏ฌshes , namely anadromous Short-headed and Pouched Lamprey -LRB- Mordacia mordax and Geotria australis -RRB- and catadromous Common Galaxias -LRB- Galaxias maculatus -RRB- , Congolli and Short-๏ฌnned Eel -LRB- Anguilla australis -RRB- . The original intent of the Sea to Hume Dam Fish Passage programme was to construct a number of experimen - tal ๏ฌshways at the Murray barrages use assessment results to inform additional ๏ฌshways in the region . The need for several additional ๏ฌshways at the Murray barrages was seen as a priority in South Australian Government 's Coorong , Lower Lakes and Murray Mouth Long Term Plan .
[[2, 3], [8, 9], [21, 22], [44, 44], [61, 61]]
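###Markdown
As a quick illustration (an addition, not part of the original analysis), the helper `intervals_extract` defined above collapses runs of consecutive indices into inclusive [start, end] ranges:
###Code
# minimal example: consecutive indices are merged into inclusive ranges
example_indices = [2, 3, 4, 10, 11, 15]
print(list(intervals_extract(example_indices)))  # expected: [[2, 4], [10, 11], [15, 15]]
###Output
_____no_output_____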
|
GAN/WGAN_DIV.ipynb | ###Markdown
Build the data folder to match the PyTorch ImageFolder layout (when labels are required)
###Code
# load the dictionary file that maps each filename directly to its class
import pickle
# fetch the files from the dataset
import os
import shutil
with open('/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/pFileNameToClass.pickle','rb') as fw:
pFileNameToClass = pickle.load(fw) # the class can be looked up directly in O(1)
# get the file list of images in the folder that collects the printed-font data
path = "/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/printed"
file_list = os.listdir(path) # 35765 -> augmentation needed
# build the data folder to match the ImageFolder layout
pretrain_dir_path = "/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/pretrainDataset"
os.makedirs(pretrain_dir_path, exist_ok=True)
for filename in file_list:
label = pFileNameToClass[filename]
folder_path = "/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/pretrainDataset/" + str(label)
os.makedirs(folder_path, exist_ok=True)
shutil.move(path + '/' + filename, folder_path + '/' + filename)
###Output
_____no_output_____
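###Markdown
Once the folder above is arranged into class-named subdirectories, it can be consumed with torchvision's ImageFolder. This is a minimal sketch (the transform choices here are assumptions, not the project's final pipeline):
###Code
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
pretrain_transform = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
pretrain_dataset = datasets.ImageFolder(root=pretrain_dir_path, transform=pretrain_transform)
pretrain_loader = DataLoader(pretrain_dataset, batch_size=64, shuffle=True)
print(len(pretrain_dataset), len(pretrain_dataset.classes))  # number of images and number of classes
###Output
_____no_output_____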
###Markdown
Pretrain_DataLoader: resolving OSError: errno 5 input/output error 
###Code
import os
import shutil
# use the Colab VM disk
dir_path = "/content/data/printed"
os.makedirs(dir_path, exist_ok=True)
# create it by copying the data from the shared drive
driveFolder = '/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/printed'
newFolder = '/content/data/printed'
shutil.copytree(driveFolder, newFolder)
path = "/content/data/printed"
file_list = os.listdir(path) # 35765 -> augmentation needed
len(file_list)
###Output
_____no_output_____
###Markdown
DataLoader
###Code
!unzip '/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/cropped_printed.zip' -d .
from PIL import Image
import os
from torch.utils.data import Dataset, DataLoader
import torchvision.transforms as transforms
class SyllablePrintedDataset(Dataset):
def __init__(self, path, transform):
file_list = []
for filename in os.listdir(path):
fileName = path + '/' + filename
file_list.append(fileName)
self.transform = transform
self.dataset = []
for img_path in file_list[:2500]:
image = Image.open(img_path)
img_transformed = self.transform(image)
self.dataset.append(img_transformed)
def __len__(self):
return len(self.dataset)
def __getitem__(self, index):
return self.dataset[index]
# # dataloader test
# transform = transforms.Compose([
# transforms.Resize((64,64)),
# transforms.RandomAffine(30),
# transforms.ColorJitter(brightness=(0.2, 1.5),
# contrast=(0.2, 3),
# saturation=(0.2, 1.5)),
# transforms.ToTensor(),
# ])
# folderpath = '/content/cropped_printed'
# dataset = SyllablePrintedDataset(folderpath, transform)
# print("Number of samples used for training: ", dataset.__len__())
# dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
# # dataloader test
# for imgs in dataloader: # iterate batch by batch
# print(".")
###Output
.
.
###Markdown
WGAN_div Model
###Code
import math
import sys
import numpy as np
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torch.utils.data import DataLoader
from torchvision import datasets
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd as autograd
import torch
output_path = '/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/outputs/wgan_div'
os.makedirs(output_path + '/images', exist_ok=True) # also create the images/ subfolder that save_image writes to during training
g_lossL = []
d_lossL = []
class Opt:
def __init__(self, epoch=100, batch_size=64, lr=0.0002, b1=0.5, b2=0.999, n_cpu=2, latent_dim=100, img_size=64, channels=3, n_critic=5, clip_value=0.01, sample_interval=400):
self.n_epochs = epoch # number of epochs of training
self.batch_size = batch_size # size of the batches
self.lr = lr # adam: learning rate
self.b1 = b1 # adam: decay of first order momentum of gradient
self.b2 = b2 # adam: decay of first order momentum of gradient
self.n_cpu = n_cpu # number of cpu threads to use during batch generation
self.latent_dim = latent_dim # dimensionality of the latent space
self.img_size = img_size # size of each image dimension
self.channels = channels # number of image channels
self.n_critic = n_critic # number of training steps for discriminator per iter
self.clip_value = clip_value # lower and upper clip value for disc. weights
self.sample_interval = sample_interval # interval between image sampling
opt = Opt()
img_shape = (opt.channels, opt.img_size, opt.img_size)
cuda = True if torch.cuda.is_available() else False
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
def block(in_feat, out_feat, normalize=True):
layers = [nn.Linear(in_feat, out_feat)]
if normalize:
layers.append(nn.BatchNorm1d(out_feat, 0.8))
layers.append(nn.LeakyReLU(0.2, inplace=True))
return layers
self.model = nn.Sequential(
*block(opt.latent_dim, 128, normalize=False),
*block(128, 256),
*block(256, 512),
# *block(512, 1024), # modified
nn.Linear(512, int(np.prod(img_shape))),
nn.Tanh()
)
def forward(self, z):
img = self.model(z)
img = img.view(img.shape[0], *img_shape) # img_shape : (opt.channels, opt.img_size, opt.img_size) = (3, 64, 64)
return img
class Discriminator(nn.Module):
def __init__(self):
super(Discriminator, self).__init__()
self.model = nn.Sequential(
nn.Linear(int(np.prod(img_shape)), 512),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(512, 256),
nn.LeakyReLU(0.2, inplace=True),
nn.Linear(256, 1),
)
def forward(self, img):
img_flat = img.view(img.shape[0], -1)
validity = self.model(img_flat)
return validity
k = 2
p = 6
if not cuda:
print("GPU ์จ๋ผ")
# Initialize generator and discriminator
generator = Generator().cuda()
discriminator = Discriminator().cuda()
# modified
transform = transforms.Compose([
transforms.Resize((64,64)),
transforms.RandomAffine(30),
transforms.ColorJitter(brightness=(0.2, 1.5),
contrast=(0.2, 1.5),
saturation=(0.2, 1.5)),
transforms.ToTensor(),
])
folderpath = '/content/cropped_printed'
dataset = SyllablePrintedDataset(folderpath, transform)
print("ํ์ต์ ์ฌ์ฉํ๋ ๋ฐ์ดํฐ ์ : ", dataset.__len__())
dataloader = DataLoader(dataset, batch_size=64, shuffle=True)
# Optimizers
optimizer_G = torch.optim.Adam(generator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
optimizer_D = torch.optim.Adam(discriminator.parameters(), lr=opt.lr, betas=(opt.b1, opt.b2))
Tensor = torch.cuda.FloatTensor if cuda else torch.FloatTensor
# ----------
# Training
# ----------
batches_done = 0
for epoch in range(opt.n_epochs):
for i, imgs in enumerate(dataloader):
# Configure input
real_imgs = Variable(imgs.type(Tensor), requires_grad=True)
print(np.shape(real_imgs))
# ---------------------
# Train Discriminator
# ---------------------
optimizer_D.zero_grad()
# Sample noise as generator input
z = Variable(Tensor(np.random.normal(0, 1, (imgs.shape[0], opt.latent_dim))))
# Generate a batch of images
fake_imgs = generator(z)
# Real images
real_validity = discriminator(real_imgs)
# Fake images
fake_validity = discriminator(fake_imgs)
# Compute W-div gradient penalty
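# (added note) this is the Wasserstein-divergence penalty: (k/2) * E[ ||grad_x D(x_real)||_2^p + ||grad_x D(x_fake)||_2^p ],
# with k = 2 and p = 6 set above; the two autograd.grad calls below compute those input gradients.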
real_grad_out = Variable(Tensor(real_imgs.size(0), 1).fill_(1.0), requires_grad=False)
real_grad = autograd.grad(
real_validity, real_imgs, real_grad_out, create_graph=True, retain_graph=True, only_inputs=True
)[0]
real_grad_norm = real_grad.view(real_grad.size(0), -1).pow(2).sum(1) ** (p / 2)
fake_grad_out = Variable(Tensor(fake_imgs.size(0), 1).fill_(1.0), requires_grad=False)
fake_grad = autograd.grad(
fake_validity, fake_imgs, fake_grad_out, create_graph=True, retain_graph=True, only_inputs=True
)[0]
fake_grad_norm = fake_grad.view(fake_grad.size(0), -1).pow(2).sum(1) ** (p / 2)
div_gp = torch.mean(real_grad_norm + fake_grad_norm) * k / 2
# Adversarial loss
d_loss = -torch.mean(real_validity) + torch.mean(fake_validity) + div_gp
d_lossL.append(d_loss.item()) # store a plain float instead of a tensor that keeps the autograd graph alive
d_loss.backward()
optimizer_D.step()
optimizer_G.zero_grad()
# Train the generator every n_critic steps
if i % opt.n_critic == 0:
# -----------------
# Train Generator
# -----------------
# Generate a batch of images
fake_imgs = generator(z)
# Loss measures generator's ability to fool the discriminator
# Train on fake images
fake_validity = discriminator(fake_imgs)
g_loss = -torch.mean(fake_validity)
g_lossL.append(g_loss.item()) # store a plain float instead of a tensor that keeps the autograd graph alive
g_loss.backward()
optimizer_G.step()
print(
"[Epoch %d/%d] [Batch %d/%d] [D loss: %f] [G loss: %f]"
% (epoch+1, opt.n_epochs, i+1, len(dataloader), d_loss.item(), g_loss.item())
)
if batches_done % opt.sample_interval == 0:
save_image(fake_imgs.data[:25], "/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/outputs/wgan_div/images/%d.png" % batches_done, nrow=5, normalize=True)
batches_done += opt.n_critic
# save the trained models
generator_out_path = '/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/model/wgan_div/generator.pth'
torch.save(generator.state_dict(), generator_out_path)
discriminator_out_path = '/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/model/wgan_div/discriminator.pth'
torch.save(discriminator.state_dict(), discriminator_out_path)
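# Optional sketch (an addition, not part of the original training script): reload the saved
# generator and sample a few images from random noise to verify the checkpoint.
generator_reloaded = Generator().cuda()
generator_reloaded.load_state_dict(torch.load(generator_out_path))
generator_reloaded.eval()
with torch.no_grad():
    z_sample = torch.randn(25, opt.latent_dim).cuda()
    sample_imgs = generator_reloaded(z_sample)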
import csv # write the losses to a CSV file # without the newline='' setting, a blank line appears after every row
with open('/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital/GAN/data/lossFile.csv', 'w', newline='') as f:
writer = csv.writer(f)
writer.writerow(g_lossL)
writer.writerow(d_lossL)
###Output
_____no_output_____
###Markdown
GitHub commit
###Code
MY_GOOGLE_DRIVE_PATH = "/content/drive/Shareddrives/machine_learning_in_practice/Analog-PILGI-to-DIgital"
%cd "{MY_GOOGLE_DRIVE_PATH}"
!git config --global user.email [email protected] # enter your email, e.g. [email protected]
!git config --global user.name hyeneung # enter your GitHub username, e.g. luckydipper
!git pull
!git status
!git add GAN/WGAN_DIV.ipynb
!git commit -m"[FIX] Dataloader using zipFile"
!git push
###Output
_____no_output_____ |
Clustering/Spectral Clustering/SpectralClustering_StandardScaler.ipynb | ###Markdown
Spectral Clustering with Standard Scaler This Code template is for the Cluster analysis using a Spectral Clustering algorithm with the StandardScaler feature rescaling technique and includes 2D and 3D cluster visualization of the Clusters. Required Packages
###Code
!pip install plotly
import operator
import warnings
import itertools
import numpy as np
import pandas as pd
import seaborn as sns
import plotly.express as px
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
import plotly.graph_objects as go
from sklearn.cluster import SpectralClustering
from sklearn.preprocessing import StandardScaler
from scipy.spatial.distance import pdist, squareform
import scipy
from scipy.sparse import csgraph
from numpy import linalg as LA
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Initialization: Filepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Data Fetching: Pandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools. We will use the pandas library to read the CSV file from its storage path, and the head function to display the initial rows.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature Selection: It is the process of reducing the number of input variables when developing a predictive model, used both to reduce the computational cost of modelling and, in some cases, to improve the performance of the model. We will assign all the required input features to X.
###Code
X=df[features]
###Output
_____no_output_____
###Markdown
Data Preprocessing: Since the majority of the machine learning models in the sklearn library don't handle string categorical data and null values, we have to explicitly remove or replace them. The snippet below defines functions that fill any null values and convert string categorical columns by one-hot encoding them (get_dummies).
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
X.head()
###Output
_____no_output_____
###Markdown
Rescaling technique: Standardize features by removing the mean and scaling to unit variance. The standard score of a sample x is calculated as z = (x - u) / s, where u is the mean of the training samples (or zero if with_mean=False) and s is the standard deviation of the training samples (or one if with_std=False).
###Code
X_Standard=StandardScaler().fit_transform(X)
X_Standard=pd.DataFrame(data = X_Standard,columns = X.columns)
X_Standard.head()
###Output
_____no_output_____
###Markdown
How to select the optimal number of clusters in Spectral Clustering: In spectral clustering, one way to identify the number of clusters is to plot the eigenvalue spectrum. If the clusters are clearly defined, there should be a "gap" in the smallest eigenvalues at the "optimal" k. This is called the eigengap heuristic. The eigengap heuristic suggests the number of clusters k is usually given by the value of k that maximizes the eigengap (difference between consecutive eigenvalues). The larger this eigengap is, the closer the eigenvectors are to the ideal case and hence the better spectral clustering works. This method performs the eigen decomposition on an affinity matrix. Steps are: 1. Construct the normalized affinity matrix: L = D^(-1/2) A D^(-1/2). 2. Find the eigenvalues and their associated eigenvectors. 3. Identify the maximum gap, which corresponds to the number of clusters, by the eigengap heuristic. Affinity matrix: Calculate the affinity matrix based on the input coordinates matrix and the number of nearest neighbours.
###Code
def getAffinityMatrix(coordinates, k = 7):
dists = squareform(pdist(coordinates))
knn_distances = np.sort(dists, axis=0)[k]
knn_distances = knn_distances[np.newaxis].T
local_scale = knn_distances.dot(knn_distances.T)
affinity_matrix = -pow(dists,2)/ local_scale
affinity_matrix[np.where(np.isnan(affinity_matrix))] = 0.0
affinity_matrix = np.exp(affinity_matrix)
np.fill_diagonal(affinity_matrix, 0)
return affinity_matrix
def eigenDecomposition(A, plot = True, topK = 10): #A: Affinity matrix
L = csgraph.laplacian(A, normed=True)
n_components = A.shape[0]
eigenvalues, eigenvectors = LA.eig(L)
if plot:
plt.figure(1,figsize=(20,8))
plt.title('Largest eigen values of input matrix')
plt.scatter(np.arange(len(eigenvalues)), eigenvalues)
plt.grid()
index_largest_gap = np.argsort(np.diff(eigenvalues))[::-1][:topK]
nb_clusters = index_largest_gap + 1
return nb_clusters
affinity_matrix = getAffinityMatrix(X_Standard, k = 10)
k = eigenDecomposition(affinity_matrix)
k.sort()
print(f'Top 10 Optimal number of clusters {k}')
###Output
Top 10 Optimal number of clusters [ 2 4 6 8 9 20 34 48 63 79]
###Markdown
Model: Spectral Clustering is very useful when the structure of the individual clusters is highly non-convex, or more generally when a measure of the center and spread of the cluster is not a suitable description of the complete cluster, such as when clusters are nested circles on the 2D plane. Model Tuning Parameters > - n_clusters -> The dimension of the projection subspace. > - eigen_solver -> The eigenvalue decomposition strategy to use. AMG requires pyamg to be installed. It can be faster on very large, sparse problems, but may also lead to instabilities. If None, then 'arpack' is used. > - n_components -> Number of eigenvectors to use for the spectral embedding. > - gamma -> Kernel coefficient for rbf, poly, sigmoid, laplacian and chi2 kernels. Ignored for affinity='nearest_neighbors'.[More information](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.SpectralClustering.html)
###Code
model = SpectralClustering(n_clusters=4, affinity='nearest_neighbors' ,random_state=101)
ClusterDF = X_Standard.copy()
ClusterDF['ClusterID'] = model.fit_predict(X_Standard)
ClusterDF.head()
###Output
_____no_output_____
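###Markdown
As an optional sanity check (an addition, not part of the original template), the silhouette score gives a rough measure of how well separated the resulting clusters are; values closer to 1 indicate tighter, better-separated clusters.
###Code
from sklearn.metrics import silhouette_score
# average silhouette coefficient of the spectral clustering labels on the scaled features
print(silhouette_score(X_Standard, ClusterDF['ClusterID']))
###Output
_____no_output_____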
###Markdown
Cluster Records: The bar graph below shows the number of data points in each cluster.
###Code
ClusterDF['ClusterID'].value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
Cluster Plots: The functions below are used to draw 2-dimensional and 3-dimensional cluster plots over the available set of features in the dataset, colouring the points by their assigned cluster.
###Code
def Plot2DCluster(X_Cols,df):
for i in list(itertools.combinations(X_Cols, 2)):
plt.rcParams["figure.figsize"] = (8,6)
xi,yi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1])
for j in df['ClusterID'].unique():
DFC=df[df.ClusterID==j]
plt.scatter(DFC[i[0]],DFC[i[1]],cmap=plt.cm.Accent,label=j)
plt.xlabel(i[0])
plt.ylabel(i[1])
plt.legend()
plt.show()
def Plot3DCluster(X_Cols,df):
for i in list(itertools.combinations(X_Cols, 3)):
xi,yi,zi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1]),df.columns.get_loc(i[2])
fig,ax = plt.figure(figsize = (16, 10)),plt.axes(projection ="3d")
ax.grid(b = True, color ='grey',linestyle ='-.',linewidth = 0.3,alpha = 0.2)
for j in df['ClusterID'].unique():
DFC=df[df.ClusterID==j]
ax.scatter3D(DFC[i[0]],DFC[i[1]],DFC[i[2]],alpha = 0.8,cmap=plt.cm.Accent,label=j)
ax.set_xlabel(i[0])
ax.set_ylabel(i[1])
ax.set_zlabel(i[2])
plt.legend()
plt.show()
def Plotly3D(X_Cols,df):
for i in list(itertools.combinations(X_Cols,3)):
xi,yi,zi=df.columns.get_loc(i[0]),df.columns.get_loc(i[1]),df.columns.get_loc(i[2])
fig2=px.scatter_3d(df, x=i[0], y=i[1],z=i[2],color=df['ClusterID'])
fig2.show()
Plot2DCluster(X.columns,ClusterDF)
Plot3DCluster(X.columns,ClusterDF)
Plotly3D(X.columns,ClusterDF)
###Output
_____no_output_____ |
Task-2.md/Linear Regression Task-2.ipynb | ###Markdown
TASK-2 Supervised Machine Learning Model Problem statement In this regression task we will predict the percentage of marks that a student is expected to score based upon the number of hours they studied. This is a simple linear regression task as it involves just two variables. Data Preprocessing 1. Importing Libraries
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
###Output
_____no_output_____
###Markdown
2. Import Dataset Data can be found at url http://bit.ly/w-data
###Code
dataset =pd.read_csv("student_scores.csv")
type(dataset)
###Output
_____no_output_____
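###Markdown
Alternatively (an optional variant, assuming the link above is still live and an internet connection is available), the same data can be read directly from the URL instead of a local file:
###Code
# read the CSV straight from the remote URL given above
dataset = pd.read_csv("http://bit.ly/w-data")
dataset.shape
###Output
_____no_output_____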
###Markdown
This will show us our whole dataset.
###Code
dataset
###Output
_____no_output_____
###Markdown
Now, using the head function gives the first five rows of our dataset.
###Code
dataset.head()
###Output
_____no_output_____
###Markdown
To check an overview of our dataset, we use the info function.
###Code
dataset.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 25 entries, 0 to 24
Data columns (total 2 columns):
Hours 25 non-null float64
Scores 25 non-null int64
dtypes: float64(1), int64(1)
memory usage: 480.0 bytes
###Markdown
To check how many rows and columns our dataset has, we will use shape.
###Code
dataset.shape # It shows that our dataset has 25 rows and 2 columns
###Output
_____no_output_____
###Markdown
Let's check the unique values in both the Hours and Scores columns
###Code
dataset['Hours'].unique() #Below are the unique values in Hours column
dataset['Scores'].unique() #Below are the unique values in Scores column
###Output
_____no_output_____
###Markdown
Now, to get the data type of each column, we will use dtypes as shown
###Code
dataset.dtypes
###Output
_____no_output_____
###Markdown
Now, we will check whether our dataset contains any null values in either column
###Code
dataset['Hours'].isnull().sum() # No null values are present
dataset['Scores'].isnull().sum() # No null values are present
###Output
_____no_output_____
###Markdown
3. Statistical Information related to our data.
###Code
dataset.describe()
dataset.rename(columns={'Hours':'Study_hours'},inplace=True)
dataset.head()
###Output
_____no_output_____
###Markdown
4. Split Dependent and Independent variables and Visualize the data:
###Code
dataset.isnull().sum()
x= dataset.iloc[:,:1]
print(x)
type(x)
x= dataset.iloc[:,:-1].values
print(x)
x.ndim
type(x)
y= dataset.iloc[:,1:]
print(y)
type(y)
y= dataset.iloc[:,1:].values #convert from dataframe to numpy array
print(y)
###Output
[[21]
[47]
[27]
[75]
[30]
[20]
[88]
[60]
[81]
[25]
[85]
[62]
[41]
[42]
[17]
[95]
[30]
[24]
[67]
[69]
[30]
[54]
[35]
[76]
[86]]
###Markdown
5. Countplot:
###Code
sns.countplot(x='Study_hours',data=dataset)
sns.countplot('Scores',data=dataset)
###Output
_____no_output_____
###Markdown
6. Plotting the distribution of scores Let's plot our data points on a 2-D graph to eyeball our dataset and see if we can manually find any relationship between the data. We can create the plot with the following script:
###Code
dataset.plot(x='Study_hours', y='Scores', style='o')
plt.title('Hours vs Percentage ')
plt.xlabel('Hours Studied')
plt.ylabel('Percentage Score')
plt.show()
sns.heatmap(dataset.corr())
###Output
_____no_output_____
###Markdown
From the graph above, we can clearly see that there is a positive linear relation between the number of hours studied and the percentage score. 7. BOX PLOT Box plots play an important role as they provide a visual summary of the data, presenting all the key statistics in a single graph.
###Code
plt.figure(figsize=(5,8))
sns.boxplot(y='Study_hours',data=dataset,color='yellow')
plt.figure(figsize=(5,8))
sns.boxplot(y='Scores',data=dataset,color='blue')
###Output
_____no_output_____
###Markdown
8. Prepare the data The next step is to divide the data into "attributes" (inputs) and "labels" (outputs).
###Code
x=dataset.iloc[:,:-1].values
y=dataset.iloc[:,1].values
###Output
_____no_output_____
###Markdown
Split Test and train data
###Code
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
It assigns 80% of the data to the training set and 20% of the data to the test set. The test_size variable is where we actually specify the proportion of the test set. 9. Training the Algorithm
###Code
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(x_train, y_train)
print("End of Training")
###Output
End of Training
###Markdown
To retrieve the intercept:
###Code
print(regressor.intercept_)
###Output
2.018160041434683
###Markdown
For retrieving the slope (coefficient of x):
###Code
print(regressor.coef_)
line = regressor.coef_*x+regressor.intercept_
line
plt.scatter(x, y,color='r')
plt.plot(x, line);
plt.show()
###Output
_____no_output_____
###Markdown
10. Predicting the Values: As our model is already trained, now it's time to make some predictions.
###Code
print(x_test) # Testing data - In Hours
y_pred = regressor.predict(x_test) # Predicting the scores
data = pd.DataFrame({'Actual': y_test, 'Predicted': y_pred})
data
from sklearn.linear_model import LinearRegression
lr=LinearRegression()
lr.fit(x_train,y_train)
y_predict=lr.predict(x_test)
y_predict
y_test
lr.predict(np.array([[5]]))
#visualization of trained data
plt.scatter(x_train,y_train,color = 'Red')
plt.plot(x_train,lr.predict(x_train),color = 'blue')
plt.xlabel("Hours Studied")
plt.ylabel("Percentage Score")
plt.title("Hours vs scores(train)")
plt.show()
#visualization of Predicted data
plt.scatter(x_test,y_test,color = 'Red')
plt.plot(x_test,lr.predict(x_test),color = 'blue')
plt.xlabel("Hours Studied")
plt.ylabel("Percentage Score")
plt.title("Hours vs scores(train)")
plt.show()
###Output
_____no_output_____
###Markdown
You can also test your own data as given below.
###Code
Study_hours=9.25
own_prediction=regressor.predict([[Study_hours]]).round(2)
print("No of Hours = {}".format(Study_hours))
print("Predicted Score = {}".format(own_prediction[0]))
###Output
No of Hours = 9.25
Predicted Score = 93.69
###Markdown
Evaluating the model: The final step is to evaluate the performance of the algorithm. This step is particularly important to compare how well different algorithms perform on a particular dataset. Here we report the mean absolute error, the root mean squared error, and the mean squared error; there are many such metrics.
###Code
from sklearn import metrics
print('Mean Absolute Error:',
metrics.mean_absolute_error(y_test, y_pred))
print('Root Of Mean Squared Error:',np.sqrt(metrics.mean_squared_error(y_test, y_pred)))
print('Mean Squared Error:',metrics.mean_squared_error(y_test,y_pred))
###Output
Mean Squared Error: 21.5987693072174
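###Markdown
As one more example of an additional metric (an optional addition, not part of the original task), the R-squared score summarises how much of the variance in the test scores is explained by the fitted line:
###Code
from sklearn.metrics import r2_score
print('R2 Score:', r2_score(y_test, y_pred))
###Output
_____no_output_____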
|
kaggle/getting_started/titanic/notebooks/externals/titanic-data-science-solutions.ipynb | ###Markdown
Titanic Data Science Solutions This notebook is a companion to the book [Data Science Solutions](https://www.amazon.com/Data-Science-Solutions-Startup-Workflow/dp/1520545312). The notebook walks us through a typical workflow for solving data science competitions at sites like Kaggle.There are several excellent notebooks to study data science competition entries. However many will skip some of the explanation on how the solution is developed as these notebooks are developed by experts for experts. The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development. Workflow stagesThe competition solution workflow goes through seven stages described in the Data Science Solutions book.1. Question or problem definition.2. Acquire training and testing data.3. Wrangle, prepare, cleanse the data.4. Analyze, identify patterns, and explore the data.5. Model, predict and solve the problem.6. Visualize, report, and present the problem solving steps and final solution.7. Supply or submit the results.The workflow indicates general sequence of how each stage may follow the other. However there are use cases with exceptions.- We may combine mulitple workflow stages. We may analyze by visualizing data.- Perform a stage earlier than indicated. We may analyze data before and after wrangling.- Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.- Drop a stage altogether. We may not need supply stage to productize or service enable our dataset for a competition. Question and problem definitionCompetition sites like Kaggle define the problem to solve or questions to ask while providing the datasets for training your data science model and testing the model results against a test dataset. The question or problem definition for Titanic Survival competition is [described here at Kaggle](https://www.kaggle.com/c/titanic).> Knowing from a training set of samples listing passengers who survived or did not survive the Titanic disaster, can our model determine based on a given test dataset not containing the survival information, if these passengers in the test dataset survived or not.We may also want to develop some early understanding about the domain of our problem. This is described on the [Kaggle competition description page here](https://www.kaggle.com/c/titanic). Here are the highlights to note.- On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. Translated 32% survival rate.- One of the reasons that the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew.- Although there was some element of luck involved in surviving the sinking, some groups of people were more likely to survive than others, such as women, children, and the upper-class. Workflow goalsThe data science solutions workflow solves for seven major goals.**Classifying.** We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.**Correlating.** One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking is there a [correlation](https://en.wikiversity.org/wiki/Correlation) among a feature and solution goal? 
As the feature values change does the solution state change as well, and visa-versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.**Converting.** For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.**Completing.** Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.**Correcting.** We may also analyze the given training dataset for errors or possibly innacurate values within features and try to corrent these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contribting to the analysis or may significantly skew the results.**Creating.** Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.**Charting.** How to select the right visualization plots and charts depending on nature of the data and the solution goals. Refactor Release 2017-Jan-29We are significantly refactoring the notebook based on (a) comments received by readers, (b) issues in porting notebook from Jupyter kernel (2.7) to Kaggle kernel (3.5), and (c) review of few more best practice kernels. User comments- Combine training and test data for certain operations like converting titles across dataset to numerical values. (thanks @Sharan Naribole)- Correct observation - nearly 30% of the passengers had siblings and/or spouses aboard. (thanks @Reinhard)- Correctly interpreting logistic regresssion coefficients. (thanks @Reinhard) Porting issues- Specify plot dimensions, bring legend into plot. Best practices- Performing feature correlation analysis early in the project.- Using multiple plots instead of overlays for readability.
###Code
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Acquire data: The Python Pandas package helps us work with our datasets. We start by acquiring the training and testing datasets into Pandas DataFrames. We also combine these datasets to run certain operations on both datasets together.
###Code
train_df = pd.read_csv('../../data/raw/train.csv')
test_df = pd.read_csv('../../data/raw/test.csv')
combine = [train_df, test_df]
###Output
_____no_output_____
###Markdown
Analyze by describing dataPandas also helps describe the datasets answering following questions early in our project.**Which features are available in the dataset?**Noting the feature names for directly manipulating or analyzing these. These feature names are described on the [Kaggle data page here](https://www.kaggle.com/c/titanic/data).
###Code
print(train_df.columns.values)
###Output
['PassengerId' 'Survived' 'Pclass' 'Name' 'Sex' 'Age' 'SibSp' 'Parch'
'Ticket' 'Fare' 'Cabin' 'Embarked']
###Markdown
**Which features are categorical?**These values classify the samples into sets of similar samples. Within categorical features are the values nominal, ordinal, ratio, or interval based? Among other things this helps us select the appropriate plots for visualization.- Categorical: Survived, Sex, and Embarked. Ordinal: Pclass.**Which features are numerical?**Which features are numerical? These values change from sample to sample. Within numerical features are the values discrete, continuous, or timeseries based? Among other things this helps us select the appropriate plots for visualization.- Continous: Age, Fare. Discrete: SibSp, Parch.
###Code
# preview the data
train_df.head()
###Output
_____no_output_____
###Markdown
**Which features are mixed data types?**Numerical, alphanumeric data within same feature. These are candidates for correcting goal.- Ticket is a mix of numeric and alphanumeric data types. Cabin is alphanumeric.**Which features may contain errors or typos?**This is harder to review for a large dataset, however reviewing a few samples from a smaller dataset may just tell us outright, which features may require correcting.- Name feature may contain errors or typos as there are several ways used to describe a name including titles, round brackets, and quotes used for alternative or short names.
###Code
train_df.tail()
###Output
_____no_output_____
###Markdown
**Which features contain blank, null or empty values?**These will require correcting.- Cabin > Age > Embarked features contain a number of null values in that order for the training dataset.- Cabin > Age are incomplete in case of test dataset.**What are the data types for various features?**Helping us during converting goal.- Seven features are integer or floats. Six in case of test dataset.- Five features are strings (object).
###Code
train_df.info()
print('_'*40)
test_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 891 entries, 0 to 890
Data columns (total 12 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 891 non-null int64
1 Survived 891 non-null int64
2 Pclass 891 non-null int64
3 Name 891 non-null object
4 Sex 891 non-null object
5 Age 714 non-null float64
6 SibSp 891 non-null int64
7 Parch 891 non-null int64
8 Ticket 891 non-null object
9 Fare 891 non-null float64
10 Cabin 204 non-null object
11 Embarked 889 non-null object
dtypes: float64(2), int64(5), object(5)
memory usage: 83.7+ KB
________________________________________
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 PassengerId 418 non-null int64
1 Pclass 418 non-null int64
2 Name 418 non-null object
3 Sex 418 non-null object
4 Age 332 non-null float64
5 SibSp 418 non-null int64
6 Parch 418 non-null int64
7 Ticket 418 non-null object
8 Fare 417 non-null float64
9 Cabin 91 non-null object
10 Embarked 418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
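###Markdown
A more direct way to count the missing values per column (an optional check, not part of the original kernel) is shown below.
###Code
# count nulls per column in the training and test sets
print(train_df.isnull().sum())
print(test_df.isnull().sum())
###Output
_____no_output_____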
###Markdown
**What is the distribution of numerical feature values across the samples?**This helps us determine, among other early insights, how representative is the training dataset of the actual problem domain.- Total samples are 891 or 40% of the actual number of passengers on board the Titanic (2,224).- Survived is a categorical feature with 0 or 1 values.- Around 38% samples survived representative of the actual survival rate at 32%.- Most passengers (> 75%) did not travel with parents or children.- Nearly 30% of the passengers had siblings and/or spouse aboard.- Fares varied significantly with few passengers (<1%) paying as high as $512.- Few elderly passengers (<1%) within age range 65-80.
###Code
train_df.describe()
# Review survived rate using `percentiles=[.61, .62]` knowing our problem description mentions 38% survival rate.
# Review Parch distribution using `percentiles=[.75, .8]`
# SibSp distribution `[.68, .69]`
# Age and Fare `[.1, .2, .3, .4, .5, .6, .7, .8, .9, .99]`
###Output
_____no_output_____
###Markdown
**What is the distribution of categorical features?**- Names are unique across the dataset (count=unique=891)- Sex variable has two possible values with 65% male (top=male, freq=577/count=891).- Cabin values have several duplicates across samples. Alternatively several passengers shared a cabin.- Embarked takes three possible values. S port used by most passengers (top=S)- Ticket feature has high ratio (22%) of duplicate values (unique=681).
###Code
train_df.describe(include=['O'])
###Output
_____no_output_____
###Markdown
Assumtions based on data analysisWe arrive at following assumptions based on data analysis done so far. We may validate these assumptions further before taking appropriate actions.**Correlating.**We want to know how well does each feature correlate with Survival. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.**Completing.**1. We may want to complete Age feature as it is definitely correlated to survival.2. We may want to complete the Embarked feature as it may also correlate with survival or another important feature.**Correcting.**1. Ticket feature may be dropped from our analysis as it contains high ratio of duplicates (22%) and there may not be a correlation between Ticket and survival.2. Cabin feature may be dropped as it is highly incomplete or contains many null values both in training and test dataset.3. PassengerId may be dropped from training dataset as it does not contribute to survival.4. Name feature is relatively non-standard, may not contribute directly to survival, so maybe dropped.**Creating.**1. We may want to create a new feature called Family based on Parch and SibSp to get total count of family members on board.2. We may want to engineer the Name feature to extract Title as a new feature.3. We may want to create new feature for Age bands. This turns a continous numerical feature into an ordinal categorical feature.4. We may also want to create a Fare range feature if it helps our analysis.**Classifying.**We may also add to our assumptions based on the problem description noted earlier.1. Women (Sex=female) were more likely to have survived.2. Children (Age<?) were more likely to have survived. 3. The upper-class passengers (Pclass=1) were more likely to have survived. Analyze by pivoting featuresTo confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.- **Pclass** We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying 3). We decide to include this feature in our model.- **Sex** We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying 1).- **SibSp and Parch** These features have zero correlation for certain values. It may be best to derive a feature or a set of features from these individual features (creating 1).
###Code
train_df[['Pclass', 'Survived']].groupby(['Pclass'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Sex", "Survived"]].groupby(['Sex'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["SibSp", "Survived"]].groupby(['SibSp'], as_index=False).mean().sort_values(by='Survived', ascending=False)
train_df[["Parch", "Survived"]].groupby(['Parch'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Analyze by visualizing dataNow we can continue confirming some of our assumptions using visualizations for analyzing the data. Correlating numerical featuresLet us start by understanding correlations between numerical features and our solution goal (Survived).A histogram chart is useful for analyzing continous numerical variables like Age where banding or ranges will help identify useful patterns. The histogram can indicate distribution of samples using automatically defined bins or equally ranged bands. This helps us answer questions relating to specific bands (Did infants have better survival rate?)Note that x-axis in historgram visualizations represents the count of samples or passengers.**Observations.**- Infants (Age <=4) had high survival rate.- Oldest passengers (Age = 80) survived.- Large number of 15-25 year olds did not survive.- Most passengers are in 15-35 age range.**Decisions.**This simple analysis confirms our assumptions as decisions for subsequent workflow stages.- We should consider Age (our assumption classifying 2) in our model training.- Complete the Age feature for null values (completing 1).- We should band age groups (creating 3).
###Code
g = sns.FacetGrid(train_df, col='Survived')
g.map(plt.hist, 'Age', bins=20)
###Output
_____no_output_____
###Markdown
Correlating numerical and ordinal featuresWe can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.**Observations.**- Pclass=3 had most passengers, however most did not survive. Confirms our classifying assumption 2.- Infant passengers in Pclass=2 and Pclass=3 mostly survived. Further qualifies our classifying assumption 2.- Most passengers in Pclass=1 survived. Confirms our classifying assumption 3.- Pclass varies in terms of Age distribution of passengers.**Decisions.**- Consider Pclass for model training.
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(train_df, col='Survived', row='Pclass', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend();
###Output
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:337: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
###Markdown
Correlating categorical featuresNow we can correlate categorical features with our solution goal.**Observations.**- Female passengers had much better survival rate than males. Confirms classifying (1).- Exception in Embarked=C where males had higher survival rate. This could be a correlation between Pclass and Embarked and in turn Pclass and Survived, not necessarily direct correlation between Embarked and Survived.- Males had better survival rate in Pclass=3 when compared with Pclass=2 for C and Q ports. Completing (2).- Ports of embarkation have varying survival rates for Pclass=3 and among male passengers. Correlating (1).**Decisions.**- Add Sex feature to model training.- Complete and add Embarked feature to model training.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked')
grid = sns.FacetGrid(train_df, row='Embarked', size=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
grid.add_legend()
###Output
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:337: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:670: UserWarning: Using the pointplot function without specifying `order` is likely to produce an incorrect plot.
warnings.warn(warning)
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:675: UserWarning: Using the pointplot function without specifying `hue_order` is likely to produce an incorrect plot.
warnings.warn(warning)
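###Markdown
The warnings above can be avoided by passing explicit category orders and the newer `height` argument; this is an optional tweak, not part of the original kernel.
###Code
# same plot, with explicit orders so seaborn does not have to guess them
grid = sns.FacetGrid(train_df, row='Embarked', height=2.2, aspect=1.6)
grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', order=[1, 2, 3], hue_order=['male', 'female'], palette='deep')
grid.add_legend()
###Output
_____no_output_____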
###Markdown
Correlating categorical and numerical featuresWe may also want to correlate categorical features (with non-numeric values) and numeric features. We can consider correlating Embarked (Categorical non-numeric), Sex (Categorical non-numeric), Fare (Numeric continuous), with Survived (Categorical numeric).**Observations.**- Higher fare paying passengers had better survival. Confirms our assumption for creating (4) fare ranges.- Port of embarkation correlates with survival rates. Confirms correlating (1) and completing (2).**Decisions.**- Consider banding Fare feature.
###Code
# grid = sns.FacetGrid(train_df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
grid = sns.FacetGrid(train_df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
grid.add_legend()
###Output
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:337: UserWarning: The `size` parameter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
/Users/gkesler/Documents/GitHub/kaggle-competitions/.venv/lib/python3.8/site-packages/seaborn/axisgrid.py:670: UserWarning: Using the barplot function without specifying `order` is likely to produce an incorrect plot.
warnings.warn(warning)
###Markdown
Wrangle dataWe have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals. Correcting by dropping featuresThis is a good starting goal to execute. By dropping features we are dealing with fewer data points. Speeds up our notebook and eases the analysis.Based on our assumptions and decisions we want to drop the Cabin (correcting 2) and Ticket (correcting 1) features.Note that where applicable we perform operations on both training and testing datasets together to stay consistent.
###Code
print("Before", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape)
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]
"After", train_df.shape, test_df.shape, combine[0].shape, combine[1].shape
###Output
Before (891, 12) (418, 11) (891, 12) (418, 11)
###Markdown
Creating new feature extracting from existingWe want to analyze if Name feature can be engineered to extract titles and test correlation between titles and survival, before dropping Name and PassengerId features.In the following code we extract Title feature using regular expressions. The RegEx pattern `(\w+\.)` matches the first word which ends with a dot character within Name feature. The `expand=False` flag returns a DataFrame.**Observations.**When we plot Title, Age, and Survived, we note the following observations.- Most titles band Age groups accurately. For example: Master title has Age mean of 5 years.- Survival among Title Age bands varies slightly.- Certain titles mostly survived (Mme, Lady, Sir) or did not (Don, Rev, Jonkheer).**Decision.**- We decide to retain the new Title feature for model training.
###Code
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train_df['Title'], train_df['Sex'])
###Output
_____no_output_____
###Markdown
We can replace many titles with a more common name or classify them as `Rare`.
###Code
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col',\
'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train_df[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
We can convert the categorical titles to ordinal.
###Code
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train_df.head()
###Output
_____no_output_____
###Markdown
Now we can safely drop the Name feature from training and testing datasets. We also do not need the PassengerId feature in the training dataset.
###Code
train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]
train_df.shape, test_df.shape
###Output
_____no_output_____
###Markdown
Converting a categorical featureNow we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.Let us start by converting the Sex feature to numeric values where female=1 and male=0.
###Code
for dataset in combine:
dataset['Sex'] = dataset['Sex'].map( {'female': 1, 'male': 0} ).astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Completing a numerical continuous featureNow we should start estimating and completing features with missing or null values. We will first do this for the Age feature.We can consider three methods to complete a numerical continuous feature.1. A simple way is to generate random numbers between the mean minus the [standard deviation](https://en.wikipedia.org/wiki/Standard_deviation) and the mean plus the standard deviation.2. A more accurate way of guessing missing values is to use other correlated features. In our case we note correlation among Age, Sex, and Pclass. Guess Age values using [median](https://en.wikipedia.org/wiki/Median) values for Age across sets of Pclass and Sex feature combinations. So, median Age for Pclass=1 and Sex=0, Pclass=1 and Sex=1, and so on...3. Combine methods 1 and 2. So instead of guessing age values based on the median, use random numbers between the mean and the standard deviation, based on sets of Pclass and Sex combinations.Methods 1 and 3 will introduce random noise into our models. The results from multiple executions might vary. We will prefer method 2.
###Code
# grid = sns.FacetGrid(train_df, col='Pclass', hue='Gender')
grid = sns.FacetGrid(train_df, row='Pclass', col='Sex', size=2.2, aspect=1.6)
grid.map(plt.hist, 'Age', alpha=.5, bins=20)
grid.add_legend()
###Output
/opt/conda/lib/python3.6/site-packages/seaborn/axisgrid.py:230: UserWarning: The `size` paramter has been renamed to `height`; please update your code.
warnings.warn(msg, UserWarning)
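###Markdown
For comparison only, here is a minimal sketch of method 1 described above: drawing random integers between the mean minus one standard deviation and the mean plus one standard deviation of the observed ages. It does not modify the datasets; it simply illustrates the random noise that makes us prefer method 2.
###Code
import numpy as np

# Method 1 (illustration only): random values around the mean of the observed ages
age_mean = train_df['Age'].mean()
age_std = train_df['Age'].std()
n_missing = train_df['Age'].isnull().sum()

random_ages = np.random.randint(int(age_mean - age_std), int(age_mean + age_std), size=n_missing)
print('mean: %.1f, std: %.1f, missing: %d' % (age_mean, age_std, n_missing))
print(random_ages[:10])
###Output
_____no_output_____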
###Markdown
Let us start by preparing an empty array to contain guessed Age values based on Pclass x Gender combinations.
###Code
guess_ages = np.zeros((2,3))
guess_ages
###Output
_____no_output_____
###Markdown
Now we iterate over Sex (0 or 1) and Pclass (1, 2, 3) to calculate guessed values of Age for the six combinations.
###Code
for dataset in combine:
for i in range(0, 2):
for j in range(0, 3):
guess_df = dataset[(dataset['Sex'] == i) & \
(dataset['Pclass'] == j+1)]['Age'].dropna()
# age_mean = guess_df.mean()
# age_std = guess_df.std()
# age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)
age_guess = guess_df.median()
# Convert random age float to nearest .5 age
guess_ages[i,j] = int( age_guess/0.5 + 0.5 ) * 0.5
for i in range(0, 2):
for j in range(0, 3):
dataset.loc[ (dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j+1),\
'Age'] = guess_ages[i,j]
dataset['Age'] = dataset['Age'].astype(int)
train_df.head()
###Output
_____no_output_____
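###Markdown
As a quick sanity check on the imputation above, the following minimal sketch confirms that no missing Age values remain in either dataset.
###Code
# Confirm the Age imputation left no missing values
for dataset in combine:
    print(dataset['Age'].isnull().sum())
###Output
_____no_output_____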
###Markdown
Let us create Age bands and determine correlations with Survived.
###Code
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
train_df[['AgeBand', 'Survived']].groupby(['AgeBand'], as_index=False).mean().sort_values(by='AgeBand', ascending=True)
###Output
_____no_output_____
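###Markdown
A note on the binning choice, shown on a small made-up sample rather than the Titanic data: `pd.cut` (used here for AgeBand) splits the value range into equal-width intervals, while `pd.qcut` (used later for FareBand) splits on quantiles so each band holds roughly the same number of rows.
###Code
import pandas as pd

# Illustration on a made-up sample with one outlier
values = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])

print(pd.cut(values, 2).value_counts())   # equal-width bins: the outlier stretches the ranges
print(pd.qcut(values, 2).value_counts())  # quantile bins: roughly equal counts per bin
###Output
_____no_output_____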
###Markdown
Let us replace Age with ordinals based on these bands.
###Code
for dataset in combine:
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[ dataset['Age'] > 64, 'Age'] = 4
train_df.head()
###Output
_____no_output_____
###Markdown
We can now remove the AgeBand feature.
###Code
train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
Create new feature combining existing featuresWe can create a new feature for FamilySize which combines Parch and SibSp. This will enable us to drop Parch and SibSp from our datasets.
###Code
for dataset in combine:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
train_df[['FamilySize', 'Survived']].groupby(['FamilySize'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
We can create another feature called IsAlone.
###Code
for dataset in combine:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
train_df[['IsAlone', 'Survived']].groupby(['IsAlone'], as_index=False).mean()
###Output
_____no_output_____
###Markdown
Let us drop Parch, SibSp, and FamilySize features in favor of IsAlone.
###Code
train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]
train_df.head()
###Output
_____no_output_____
###Markdown
We can also create an artificial feature combining Pclass and Age.
###Code
for dataset in combine:
dataset['Age*Class'] = dataset.Age * dataset.Pclass
train_df.loc[:, ['Age*Class', 'Age', 'Pclass']].head(10)
###Output
_____no_output_____
###Markdown
Completing a categorical featureThe Embarked feature takes S, Q, C values based on the port of embarkation. Our training dataset has two missing values. We simply fill these with the most common occurrence.
###Code
freq_port = train_df.Embarked.dropna().mode()[0]
freq_port
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)
train_df[['Embarked', 'Survived']].groupby(['Embarked'], as_index=False).mean().sort_values(by='Survived', ascending=False)
###Output
_____no_output_____
###Markdown
Converting categorical feature to numericWe can now convert the completed Embarked feature to numeric values by mapping each port to an integer.
###Code
for dataset in combine:
dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df.head()
###Output
_____no_output_____
###Markdown
Quick completing and converting a numeric featureWe can now complete the Fare feature for the single missing value in the test dataset using the median of the feature. We do this in a single line of code.Note that we are not creating an intermediate new feature or doing any further analysis for correlation to guess the missing value, as we are replacing only a single value. The completion goal achieves the desired requirement for the model algorithm to operate on non-null values.We may also want to round off the fare to two decimals as it represents currency.
###Code
test_df['Fare'].fillna(test_df['Fare'].dropna().median(), inplace=True)
test_df.head()
###Output
_____no_output_____
###Markdown
We can now create FareBand.
###Code
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
train_df[['FareBand', 'Survived']].groupby(['FareBand'], as_index=False).mean().sort_values(by='FareBand', ascending=True)
###Output
_____no_output_____
###Markdown
Convert the Fare feature to ordinal values based on the FareBand.
###Code
for dataset in combine:
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]
train_df.head(10)
###Output
_____no_output_____
###Markdown
And the test dataset.
###Code
test_df.head(10)
###Output
_____no_output_____
###Markdown
Model, predict and solveNow we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (Survived or not) and other variables or features (Sex, Age, Port...). We are also performing a category of machine learning which is called supervised learning as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression, we can narrow down our choice of models to a few. These include:- Logistic Regression- KNN or k-Nearest Neighbors- Support Vector Machines- Naive Bayes classifier- Decision Tree- Random Forest- Perceptron- Artificial neural network- RVM or Relevance Vector Machine
###Code
X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()
X_train.shape, Y_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference [Wikipedia](https://en.wikipedia.org/wiki/Logistic_regression).Note the confidence score generated by the model based on our training dataset.
###Code
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
We can use Logistic Regression to validate our assumptions and decisions for the feature creating and completing goals. This can be done by calculating the coefficients of the features in the decision function.Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).- Sex has the highest positive coefficient, implying that as the Sex value increases (male: 0 to female: 1), the probability of Survived=1 increases the most.- Inversely, as Pclass increases, the probability of Survived=1 decreases the most.- Age*Class is a good artificial feature to model as it has the second highest negative correlation with Survived.- So is Title, with the second highest positive correlation.
###Code
coeff_df = pd.DataFrame(train_df.columns.delete(0))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
###Output
_____no_output_____
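###Markdown
To make the coefficient table above easier to read, the following minimal sketch converts each log-odds coefficient into an odds ratio with the exponential function; an odds ratio above 1 means a one-unit increase in the feature increases the odds of Survived=1, and below 1 means it decreases them.
###Code
import numpy as np

# Convert log-odds coefficients into odds ratios
coeff_df['OddsRatio'] = np.exp(coeff_df['Correlation'])
coeff_df.sort_values(by='OddsRatio', ascending=False)
###Output
_____no_output_____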
###Markdown
Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of **two categories**, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference [Wikipedia](https://en.wikipedia.org/wiki/Support_vector_machine).Note that the model generates a confidence score which is higher than Logistics Regression model.
###Code
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/svm/base.py:196: FutureWarning: The default value of gamma will change from 'auto' to 'scale' in version 0.22 to account better for unscaled features. Set gamma explicitly to 'auto' or 'scale' to avoid this warning.
"avoid this warning.", FutureWarning)
###Markdown
In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference [Wikipedia](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm).KNN confidence score is better than Logistics Regression but worse than SVM.
###Code
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
###Output
_____no_output_____
###Markdown
In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference [Wikipedia](https://en.wikipedia.org/wiki/Naive_Bayes_classifier).The model generated confidence score is the lowest among the models evaluated so far.
###Code
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
###Output
_____no_output_____
###Markdown
The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference [Wikipedia](https://en.wikipedia.org/wiki/Perceptron).
###Code
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py:166: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. If both are left unset, they default to max_iter=5 and tol=None. If tol is not None, max_iter defaults to max_iter=1000. From 0.21, default max_iter will be 1000, and default tol will be 1e-3.
FutureWarning)
###Markdown
This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Decision_tree_learning).The model confidence score is the highest among models evaluated so far.
###Code
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
###Output
_____no_output_____
###Markdown
The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference [Wikipedia](https://en.wikipedia.org/wiki/Random_forest).The model confidence score is the highest among models evaluated so far. We decide to use this model's output (Y_pred) for creating our competition submission of results.
###Code
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
###Output
_____no_output_____
###Markdown
Model evaluationWe can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
###Code
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
submission = pd.DataFrame({
"PassengerId": test_df["PassengerId"],
"Survived": Y_pred
})
# submission.to_csv('../output/submission.csv', index=False)
###Output
_____no_output_____ |
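###Markdown
The scores above are computed on the same data the models were fit on, so they can be optimistic; this is the overfitting concern noted for Decision Tree. As a minimal sketch, assuming scikit-learn's model_selection module is available alongside the classifiers imported earlier, cross-validation gives a less biased estimate for the chosen Random Forest.
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# 5-fold cross-validation accuracy for the chosen model
rf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(rf, X_train, Y_train, cv=5, scoring='accuracy')
print(round(scores.mean() * 100, 2), round(scores.std() * 100, 2))
###Output
_____no_output_____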
notebooks/community/gapic/automl/showcase_automl_image_segmentation_batch.ipynb | ###Markdown
Vertex client library: AutoML image segmentation model for batch prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex client library for Python to create image segmentation models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the [TODO](https://). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML image segmentation model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex client library.
###Code
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client libraryImport the Vertex client library into our Python environment.
###Code
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
Vertex constantsSet up the following constants for Vertex:- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
AutoML constantsSet constants unique to AutoML datasets and training:- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
###Code
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_segmentation_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_segmentation_1.0.0.yaml"
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for prediction.Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.
###Code
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
###Output
_____no_output_____
###Markdown
Container (Docker) imageFor AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine TypeNext, set the machine type to use for prediction.- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*
###Code
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image segmentation model. Set up clientsThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Dataset Service for `Dataset` resources.- Model Service for `Model` resources.- Pipeline Service for training.- Job Service for batch prediction and custom training.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
DatasetNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create `Dataset` resource instanceUse the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:1. Uses the dataset client service.2. Creates an Vertex `Dataset` resource (`aip.Dataset`), with the following parameters: - `display_name`: The human-readable name you choose to give it. - `metadata_schema_uri`: The schema for the dataset type.3. Calls the client dataset service method `create_dataset`, with the following parameters: - `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources. - `dataset`: The Vertex dataset object instance you created.4. The method returns an `operation` object.An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:| Method | Description || ----------- | ----------- || result() | Waits for the operation to complete and returns a result object in JSON format. || running() | Returns True/False on whether the operation is still running. || done() | Returns True/False on whether the operation is completed. || canceled() | Returns True/False on whether the operation was canceled. || cancel() | Cancels the operation (this may take up to 30 seconds). |
###Code
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("unknown-" + TIMESTAMP, DATA_SCHEMA)
###Output
_____no_output_____
###Markdown
Now save the unique dataset identifier for the `Dataset` resource instance you created.
###Code
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
###Output
_____no_output_____
###Markdown
Data preparationThe Vertex `Dataset` resource for images has some requirements for your data:- Images must be stored in a Cloud Storage bucket.- Each image file must be in an image format (PNG, JPEG, BMP, ...).- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.- The index file must be either CSV or JSONL. JSONLFor image segmentation, the JSONL index file has the following requirements:- Each data item is a separate JSON object, on a separate line.- The key/value pair `image_gcs_uri` is the Cloud Storage path to the image.- The key/value pair `category_mask_uri` is the Cloud Storage path to the mask image in PNG format.- The key/value pair `'annotation_spec_colors'` is a list mapping mask colors to a label. - The key/value pair `display_name` is the label for the pixel color mask. - The key/value pair `color` holds the RGB normalized pixel values (between 0 and 1) of the mask for the corresponding label. { 'image_gcs_uri': image, 'segmentation_annotations': { 'category_mask_uri': mask_image, 'annotation_spec_colors' : [ { 'display_name': label, 'color': {"red": value, "blue": value, "green": value} }, ...] } }*Note*: The dictionary key fields may alternatively be in camelCase. For example, 'image_gcs_uri' can also be 'imageGcsUri'. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/isg_data.jsonl"
###Output
_____no_output_____
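###Markdown
To make the index format above concrete, here is a minimal sketch of how one JSONL line with the described structure could be written in Python. The bucket, file names, label, and color values are made-up placeholders, not entries from the actual index file.
###Code
import json

# Hypothetical example entry (placeholder paths and labels, illustration only)
entry = {
    "image_gcs_uri": "gs://my-bucket/images/img_001.jpg",
    "segmentation_annotations": {
        "category_mask_uri": "gs://my-bucket/masks/img_001_mask.png",
        "annotation_spec_colors": [
            {"display_name": "road", "color": {"red": 1.0, "green": 0.0, "blue": 0.0}}
        ],
    },
}

# Each data item is one JSON object on its own line in the JSONL index file
print(json.dumps(entry))
###Output
_____no_output_____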
###Markdown
Quick peek at your dataYou will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows.
###Code
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Import dataNow, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:- Uses the `Dataset` client.- Calls the client method `import_data`, with the following parameters: - `name`: The human readable name you give to the `Dataset` resource (e.g., unknown). - `import_configs`: The import configuration.- `import_configs`: A Python list containing a dictionary, with the key/value entries: - `gcs_sources`: A list of URIs to the paths of the one or more index files. - `import_schema_uri`: The schema identifying the labeling type.The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
###Code
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
name=dataset_id, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML image segmentation model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create a Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create a training pipelineYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:1. Being reusable for subsequent training jobs.2. Can be containerized and run as a batch job.3. Can be distributed.4. All the steps are associated with the same pipeline job for tracking progress.Use this helper function `create_pipeline`, which takes the following parameters:- `pipeline_name`: A human readable name for the pipeline job.- `model_name`: A human readable name for the model.- `dataset`: The Vertex fully qualified dataset identifier.- `schema`: The dataset labeling (annotation) training schema.- `task`: A dictionary describing the requirements for the training job.The helper function calls the `Pipeline` client service's method `create_training_pipeline`, which takes the following parameters:- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.- `training_pipeline`: The full specification for the pipeline training job.Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:- `display_name`: A human readable name for the pipeline job.- `training_task_definition`: The dataset labeling (annotation) training schema.- `training_task_inputs`: A dictionary describing the requirements for the training job.- `model_to_upload`: A human readable name for the model.- `input_data_config`: The dataset specification. - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
###Code
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
###Output
_____no_output_____
###Markdown
Construct the task requirementsNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.The minimal fields you need to specify are:- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.- `model_type`: The type of deployed model: - `CLOUD_HIGH_ACCURACY_1`: For deploying to Google Cloud and optimizing for accuracy. - `CLOUD_LOW_LATENCY_1`: For deploying to Google Cloud and optimizing for latency (response time),Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
###Code
PIPE_NAME = "unknown_pipe-" + TIMESTAMP
MODEL_NAME = "unknown_model-" + TIMESTAMP
task = json_format.ParseDict(
{"budget_milli_node_hours": 2000, "model_type": "CLOUD_LOW_ACCURACY_1"}, Value()
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
###Output
_____no_output_____
###Markdown
Now save the unique identifier of the training pipeline you created.
###Code
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
###Output
_____no_output_____
###Markdown
Get information on a training pipelineNow get pipeline information for just this training pipeline instance. The helper function gets the pipeline information for just this pipeline by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:- `name`: The Vertex fully qualified pipeline identifier.When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
###Code
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
###Output
_____no_output_____
###Markdown
DeploymentTraining the above model may take upwards of 30 minutes.Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`.
###Code
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model informationNow that your model is trained, you can get some information on your model. Evaluate the Model resourceNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slicesUse this helper function `list_model_evaluations`, which takes the following parameter:- `name`: The Vertex fully qualified model identifier for the `Model` resource.This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.For each evaluation -- you probably only have one -- we then print all the key names for each metric in the evaluation, and for a small set (`confidenceMetricsEntries`) we print the result.
###Code
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confidenceMetricsEntries", metrics["confidenceMetricsEntries"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch-prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will unprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
###Code
import json
test_items = !gsutil cat $IMPORT_FILE | head -n2
test_data_1 = test_items[0].replace("'", '"')
test_data_1 = json.loads(test_data_1)
test_data_2 = test_items[1].replace("'", '"')
test_data_2 = json.loads(test_data_2)
try:
test_item_1 = test_data_1["image_gcs_uri"]
test_label_1 = test_data_1["segmentation_annotation"]["annotation_spec_colors"]
test_item_2 = test_data_2["image_gcs_uri"]
test_label_2 = test_data_2["segmentation_annotation"]["annotation_spec_colors"]
except:
test_item_1 = test_data_1["imageGcsUri"]
test_label_1 = test_data_1["segmentationAnnotation"]["annotationSpecColors"]
test_item_2 = test_data_2["imageGcsUri"]
test_label_2 = test_data_2["segmentationAnnotation"]["annotationSpecColors"]
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is a `jpeg` file.For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}
###Code
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The Vertex fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. *Note*, image segmentation models do not support additional parameters.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: `jsonl` only supported. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `predictions_format`: The format of the batch prediction response file: `jsonl` only supported. - `gcs_destination`: The output destination for the predictions.This call is an asynchronous operation. You will print from the response object a few select fields, including:- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the Model resource.- `generate_explanation`: Whether True/False explanations were provided with the predictions (explainability).- `state`: The state of the prediction job (pending, running, etc).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
###Code
BATCH_MODEL = "unknown_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl" # [jsonl]
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the batch prediction job you created.
###Code
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
###Output
_____no_output_____
###Markdown
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following parameter:- `job_name`: The Vertex fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
###Code
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
###Output
_____no_output_____
###Markdown
Get the predictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.jsonl`.Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.The first field `ID` is the image file you did the prediction on, and the second field `annotations` is the prediction, which is further broken down into:- `confidenceMask`: PNG pixel mask indicating the per-pixel confidence of the prediction.- `categoryMask`: PNG pixel mask indicating the predicted category of each pixel.
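If you want to work with the returned masks programmatically rather than just printing the raw JSON, a sketch like the one below may help. It assumes the field names described above (`ID`, `annotations`, `categoryMask`) and that the mask is returned as a base64-encoded PNG; depending on the model version the masks may instead be returned as Cloud Storage URIs, so treat this as an illustration rather than the service's exact response schema.

```python
# Illustrative sketch only: parse one prediction line and decode the category mask.
# The field names and base64-PNG encoding are assumptions based on the description above.
import base64
import io
import json

import numpy as np
from PIL import Image


def decode_prediction_line(line):
    record = json.loads(line)
    annotations = record.get("annotations", {})
    # Assumed: the mask is base64-encoded PNG bytes; it may instead be a GCS URI.
    mask_bytes = base64.b64decode(annotations["categoryMask"])
    mask = np.array(Image.open(io.BytesIO(mask_bytes)))
    print("Image:", record.get("ID"))
    print("Mask shape:", mask.shape, "categories present:", np.unique(mask))
    return mask
```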
###Code
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction*.jsonl
! gsutil cat $folder/prediction*.jsonl
break
time.sleep(60)
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Vertex client library: AutoML image segmentation model for batch prediction OverviewThis tutorial demonstrates how to use the Vertex client library for Python to create image segmentation models and do batch prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the [TODO](https://). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. ObjectiveIn this tutorial, you create an AutoML image segmentation model from a Python script, and then do a batch prediction using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Make a batch prediction.There is one key difference between using batch prediction and using online prediction:* Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.* Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storage pricing](https://cloud.google.com/storage/pricing), and use the [Pricing Calculator](https://cloud.google.com/products/calculator/) to generate a cost estimate based on your projected usage. InstallationInstall the latest version of the Vertex client library.
###Code
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operations throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations).
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it to the name of the resources you create in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Create a Cloud Storage bucket**The following steps are required, regardless of your notebook environment.**This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
###Code
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
###Output
_____no_output_____
###Markdown
**Only if your bucket doesn't already exist**: Run the following cell to create your Cloud Storage bucket.
###Code
! gsutil mb -l $REGION $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Finally, validate access to your Cloud Storage bucket by examining its contents:
###Code
! gsutil ls -al $BUCKET_NAME
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client libraryImport the Vertex client library into our Python environment.
###Code
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
Vertex constantsSetup up the following constants for Vertex:- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
AutoML constantsSet constants unique to AutoML datasets and training:- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
###Code
# Image Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml"
# Image Labeling type
LABEL_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/ioformat/image_segmentation_io_format_1.0.0.yaml"
# Image Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_segmentation_1.0.0.yaml"
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for prediction.Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.
###Code
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
###Output
_____no_output_____
###Markdown
Container (Docker) imageFor AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine TypeNext, set the machine type to use for prediction.- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*
###Code
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML image segmentation model. Set up clientsThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Dataset Service for `Dataset` resources.- Model Service for `Model` resources.- Pipeline Service for training.- Job Service for batch prediction and custom training.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_job_client():
client = aip.JobServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["job"] = create_job_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
DatasetNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create `Dataset` resource instanceUse the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:1. Uses the dataset client service.2. Creates a Vertex `Dataset` resource (`aip.Dataset`), with the following parameters: - `display_name`: The human-readable name you choose to give it. - `metadata_schema_uri`: The schema for the dataset type.3. Calls the client dataset service method `create_dataset`, with the following parameters: - `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources. - `dataset`: The Vertex dataset object instance you created.4. The method returns an `operation` object.An `operation` object is how Vertex handles asynchronous calls for long-running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:| Method | Description || ----------- | ----------- || result() | Waits for the operation to complete and returns a result object in JSON format. || running() | Returns True/False on whether the operation is still running. || done() | Returns True/False on whether the operation is completed. || cancelled() | Returns True/False on whether the operation was cancelled. || cancel() | Cancels the operation (this may take up to 30 seconds). |
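For instance, instead of blocking on `result()` right away, you could poll the operation with the methods in the table above. A minimal sketch, assuming `operation` is the long-running operation object returned by the `create_dataset` call below:

```python
# Minimal polling sketch; assumes `operation` is a long-running operation object
# returned by a client call such as create_dataset below.
import time

def wait_for(operation, poll_seconds=10):
    while not operation.done():           # False until the operation finishes
        print("still running:", operation.running())
        time.sleep(poll_seconds)
    if operation.cancelled():             # True if the operation was cancelled
        raise RuntimeError("operation was cancelled")
    return operation.result()             # the operation's result object
```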
###Code
TIMEOUT = 90
def create_dataset(name, schema, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
dataset = aip.Dataset(
display_name=name, metadata_schema_uri=schema, labels=labels
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
        result = operation.result(timeout=timeout)  # use the function's timeout argument
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("unknown-" + TIMESTAMP, DATA_SCHEMA)
###Output
_____no_output_____
###Markdown
Now save the unique dataset identifier for the `Dataset` resource instance you created.
###Code
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
###Output
_____no_output_____
###Markdown
Data preparationThe Vertex `Dataset` resource for images has some requirements for your data:- Images must be stored in a Cloud Storage bucket.- Each image file must be in an image format (PNG, JPEG, BMP, ...).- There must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.- The index file must be either CSV or JSONL. JSONLFor image segmentation, the JSONL index file has the following requirements:- Each data item is a separate JSON object, on a separate line.- The key/value pair `image_gcs_uri` is the Cloud Storage path to the image.- The key/value pair `category_mask_uri` is the Cloud Storage path to the mask image in PNG format.- The key/value pair `annotation_spec_colors` is a list mapping mask colors to a label. - The key/value pair `display_name` is the label for the pixel color mask. - The key/value pair `color` is the normalized RGB pixel values (between 0 and 1) of the mask for the corresponding label. { 'image_gcs_uri': image, 'segmentation_annotation': { 'category_mask_uri': mask_image, 'annotation_spec_colors' : [ { 'display_name': label, 'color': {"red": value, "blue": value, "green": value} }, ...] } }*Note*: The dictionary key fields may alternatively be in camelCase. For example, 'image_gcs_uri' can also be 'imageGcsUri'. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the JSONL index file in Cloud Storage.
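As a purely illustrative sketch (the index file for this tutorial already exists), one line of such a JSONL index file could be assembled like this; the Cloud Storage paths and label names below are placeholders, and the key spelling follows the snake_case form shown above:

```python
# Illustrative only: build one JSONL index entry for image segmentation.
# The Cloud Storage paths and label names are placeholders, not real files.
import json

entry = {
    "image_gcs_uri": "gs://your-bucket/images/img001.jpg",
    "segmentation_annotation": {
        "category_mask_uri": "gs://your-bucket/masks/img001_mask.png",
        "annotation_spec_colors": [
            {"display_name": "road", "color": {"red": 1.0, "green": 0.0, "blue": 0.0}},
            {"display_name": "sky", "color": {"red": 0.0, "green": 0.0, "blue": 1.0}},
        ],
    },
}

# Each entry becomes one line of the JSONL index file.
print(json.dumps(entry))
```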
###Code
IMPORT_FILE = "gs://ucaip-test-us-central1/dataset/isg_data.jsonl"
###Output
_____no_output_____
###Markdown
Quick peek at your dataYou will use a version of the Unknown dataset that is stored in a public Cloud Storage bucket, using a JSONL index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of objects in a JSONL index file (`wc -l`) and then peek at the first few rows.
###Code
if "IMPORT_FILES" in globals():
FILE = IMPORT_FILES[0]
else:
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $FILE | head
###Output
_____no_output_____
###Markdown
Import dataNow, import the data into your Vertex Dataset resource. Use this helper function `import_data` to import the data. The function does the following:- Uses the `Dataset` client.- Calls the client method `import_data`, with the following parameters: - `name`: The human readable name you give to the `Dataset` resource (e.g., unknown). - `import_configs`: The import configuration.- `import_configs`: A Python list containing a dictionary, with the key/value entries: - `gcs_sources`: A list of URIs to the paths of the one or more index files. - `import_schema_uri`: The schema identifying the labeling type.The `import_data()` method returns a long running `operation` object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.
###Code
def import_data(dataset, gcs_sources, schema):
config = [{"gcs_source": {"uris": gcs_sources}, "import_schema_uri": schema}]
print("dataset:", dataset_id)
start_time = time.time()
try:
operation = clients["dataset"].import_data(
            name=dataset, import_configs=config
)
print("Long running operation:", operation.operation.name)
result = operation.result()
print("result:", result)
print("time:", int(time.time() - start_time), "secs")
print("error:", operation.exception())
print("meta :", operation.metadata)
print(
"after: running:",
operation.running(),
"done:",
operation.done(),
"cancelled:",
operation.cancelled(),
)
return operation
except Exception as e:
print("exception:", e)
return None
import_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML image segmentation model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create a Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create a training pipelineYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:1. Being reusable for subsequent training jobs.2. Can be containerized and run as a batch job.3. Can be distributed.4. All the steps are associated with the same pipeline job for tracking progress.Use this helper function `create_pipeline`, which takes the following parameters:- `pipeline_name`: A human readable name for the pipeline job.- `model_name`: A human readable name for the model.- `dataset`: The Vertex fully qualified dataset identifier.- `schema`: The dataset labeling (annotation) training schema.- `task`: A dictionary describing the requirements for the training job.The helper function calls the `Pipeline` client service's method `create_training_pipeline`, which takes the following parameters:- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.- `training_pipeline`: The full specification for the pipeline training job.Let's now look deeper into the *minimal* requirements for constructing a `training_pipeline` specification:- `display_name`: A human readable name for the pipeline job.- `training_task_definition`: The dataset labeling (annotation) training schema.- `training_task_inputs`: A dictionary describing the requirements for the training job.- `model_to_upload`: A human readable name for the model.- `input_data_config`: The dataset specification. - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
###Code
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
###Output
_____no_output_____
###Markdown
Construct the task requirementsNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.The minimal fields you need to specify are:- `budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.- `model_type`: The type of deployed model: - `CLOUD_HIGH_ACCURACY_1`: For deploying to Google Cloud and optimizing for accuracy. - `CLOUD_LOW_LATENCY_1`: For deploying to Google Cloud and optimizing for latency (response time).Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
###Code
PIPE_NAME = "unknown_pipe-" + TIMESTAMP
MODEL_NAME = "unknown_model-" + TIMESTAMP
task = json_format.ParseDict(
{"budget_milli_node_hours": 2000, "model_type": "CLOUD_LOW_ACCURACY_1"}, Value()
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
###Output
_____no_output_____
###Markdown
Now save the unique identifier of the training pipeline you created.
###Code
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
###Output
_____no_output_____
###Markdown
Get information on a training pipelineNow get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the pipeline client service's `get_training_pipeline` method, with the following parameter:- `name`: The Vertex fully qualified pipeline identifier.When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
###Code
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
###Output
_____no_output_____
###Markdown
DeploymentTraining the above model may take upwards of 30 minutes.Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_upload.name`.
###Code
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model informationNow that your model is trained, you can get some information on your model. Evaluate the Model resourceNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slicesUse this helper function `list_model_evaluations`, which takes the following parameter:- `name`: The Vertex fully qualified model identifier for the `Model` resource.This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.For each evaluation -- you probably have only one -- you then print all the key names for each metric in the evaluation, and for a small set (`confidenceMetricsEntries`) you print the result.
###Code
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("confidenceMetricsEntries", metrics["confidenceMetricsEntries"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model deployment for batch predictionNow deploy the trained Vertex `Model` resource you created for batch prediction. This differs from deploying a `Model` resource for on-demand prediction.For online prediction, you:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource.3. Make online prediction requests to the `Endpoint` resource.For batch prediction, you:1. Create a batch prediction job.2. The job service will provision resources for the batch prediction request.3. The results of the batch prediction request are returned to the caller.4. The job service will deprovision the resources for the batch prediction request. Make a batch prediction requestNow do a batch prediction to your deployed model. Get test item(s)Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
###Code
import json
test_items = !gsutil cat $IMPORT_FILE | head -n2
test_data_1 = test_items[0].replace("'", '"')
test_data_1 = json.loads(test_data_1)
test_data_2 = test_items[1].replace("'", '"')  # second test item (second line of the index file)
test_data_2 = json.loads(test_data_2)
try:
test_item_1 = test_data_1["image_gcs_uri"]
test_label_1 = test_data_1["segmentation_annotation"]["annotation_spec_colors"]
test_item_2 = test_data_2["image_gcs_uri"]
test_label_2 = test_data_2["segmentation_annotation"]["annotation_spec_colors"]
except:
test_item_1 = test_data_1["imageGcsUri"]
test_label_1 = test_data_1["segmentationAnnotation"]["annotationSpecColors"]
test_item_2 = test_data_2["imageGcsUri"]
test_label_2 = test_data_2["segmentationAnnotation"]["annotationSpecColors"]
print(test_item_1, test_label_1)
print(test_item_2, test_label_2)
###Output
_____no_output_____
###Markdown
Copy test item(s)For the batch prediction, you will copy the test items over to your Cloud Storage bucket.
###Code
file_1 = test_item_1.split("/")[-1]
file_2 = test_item_2.split("/")[-1]
! gsutil cp $test_item_1 $BUCKET_NAME/$file_1
! gsutil cp $test_item_2 $BUCKET_NAME/$file_2
test_item_1 = BUCKET_NAME + "/" + file_1
test_item_2 = BUCKET_NAME + "/" + file_2
###Output
_____no_output_____
###Markdown
Make the batch input fileNow make a batch input file, which you will store in your local Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each data item (instance). The dictionary contains the key/value pairs:- `content`: The Cloud Storage path to the image.- `mime_type`: The content type. In our example, it is a JPEG image.For example: {'content': '[your-bucket]/file1.jpg', 'mime_type': 'image/jpeg'}
###Code
import json
import tensorflow as tf
gcs_input_uri = BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"content": test_item_1, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
data = {"content": test_item_2, "mime_type": "image/jpeg"}
f.write(json.dumps(data) + "\n")
print(gcs_input_uri)
! gsutil cat $gcs_input_uri
###Output
_____no_output_____
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your batch prediction requests:- Single Instance: The batch prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The batch prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and batch prediction requests are evenly distributed across them.- Auto Scaling: The batch prediction requests are split across a scalable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES`) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
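To make the three options concrete, here is a small sketch of how they map onto `MIN_NODES`/`MAX_NODES`; the node counts other than one are arbitrary illustrations, not recommendations:

```python
# Illustrative settings only; node counts other than 1 are arbitrary examples.
SINGLE_INSTANCE = {"MIN_NODES": 1, "MAX_NODES": 1}  # one node, no scaling
MANUAL_SCALING = {"MIN_NODES": 4, "MAX_NODES": 4}   # fixed pool of four nodes
AUTO_SCALING = {"MIN_NODES": 1, "MAX_NODES": 4}     # scale between one and four nodes
```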
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Make batch prediction requestNow that your batch of two test items is ready, let's do the batch request. Use this helper function `create_batch_prediction_job`, with the following parameters:- `display_name`: The human readable name for the prediction job.- `model_name`: The Vertex fully qualified identifier for the `Model` resource.- `gcs_source_uri`: The Cloud Storage path to the input file -- which you created above.- `gcs_destination_output_uri_prefix`: The Cloud Storage path that the service will write the predictions to.- `parameters`: Additional filtering parameters for serving prediction results.The helper function calls the job client service's `create_batch_prediction_job` method, with the following parameters:- `parent`: The Vertex location root path for Dataset, Model and Pipeline resources.- `batch_prediction_job`: The specification for the batch prediction job.Let's now dive into the specification for the `batch_prediction_job`:- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the `Model` resource.- `dedicated_resources`: The compute resources to provision for the batch prediction job. - `machine_spec`: The compute instance to provision. Use the variable you set earlier `DEPLOY_GPU != None` to use a GPU; otherwise only a CPU is allocated. - `starting_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`. - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.- `model_parameters`: Additional filtering parameters for serving prediction results. *Note*, image segmentation models do not support additional parameters.- `input_config`: The input source and format type for the instances to predict. - `instances_format`: The format of the batch prediction request file: only `jsonl` is supported. - `gcs_source`: A list of one or more Cloud Storage paths to your batch prediction requests.- `output_config`: The output destination and format for the predictions. - `predictions_format`: The format of the batch prediction response file: only `jsonl` is supported. - `gcs_destination`: The output destination for the predictions.This call is an asynchronous operation. You will print from the response object a few select fields, including:- `name`: The Vertex fully qualified identifier assigned to the batch prediction job.- `display_name`: The human readable name for the prediction batch job.- `model`: The Vertex fully qualified identifier for the Model resource.- `generate_explanation`: Whether or not (True/False) explanations were provided with the predictions (explainability).- `state`: The state of the prediction job (pending, running, etc.).Since this call will take a few moments to execute, you will likely get `JobState.JOB_STATE_PENDING` for `state`.
###Code
BATCH_MODEL = "unknown_batch-" + TIMESTAMP
def create_batch_prediction_job(
display_name,
model_name,
gcs_source_uri,
gcs_destination_output_uri_prefix,
parameters=None,
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
batch_prediction_job = {
"display_name": display_name,
# Format: 'projects/{project}/locations/{location}/models/{model_id}'
"model": model_name,
"model_parameters": json_format.ParseDict(parameters, Value()),
"input_config": {
"instances_format": IN_FORMAT,
"gcs_source": {"uris": [gcs_source_uri]},
},
"output_config": {
"predictions_format": OUT_FORMAT,
"gcs_destination": {"output_uri_prefix": gcs_destination_output_uri_prefix},
},
"dedicated_resources": {
"machine_spec": machine_spec,
"starting_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
},
}
response = clients["job"].create_batch_prediction_job(
parent=PARENT, batch_prediction_job=batch_prediction_job
)
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try:
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", response.labels)
return response
IN_FORMAT = "jsonl"
OUT_FORMAT = "jsonl" # [jsonl]
response = create_batch_prediction_job(
BATCH_MODEL, model_to_deploy_id, gcs_input_uri, BUCKET_NAME, None
)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the batch prediction job you created.
###Code
# The full unique ID for the batch job
batch_job_id = response.name
# The short numeric ID for the batch job
batch_job_short_id = batch_job_id.split("/")[-1]
print(batch_job_id)
###Output
_____no_output_____
###Markdown
Get information on a batch prediction jobUse this helper function `get_batch_prediction_job`, with the following parameter:- `job_name`: The Vertex fully qualified identifier for the batch prediction job.The helper function calls the job client service's `get_batch_prediction_job` method, with the following parameter:- `name`: The Vertex fully qualified identifier for the batch prediction job. In this tutorial, you will pass it the Vertex fully qualified identifier for your batch prediction job -- `batch_job_id`.The helper function will return the Cloud Storage path to where the predictions are stored -- `gcs_destination`.
###Code
def get_batch_prediction_job(job_name, silent=False):
response = clients["job"].get_batch_prediction_job(name=job_name)
if silent:
return response.output_config.gcs_destination.output_uri_prefix, response.state
print("response")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" model:", response.model)
try: # not all data types support explanations
print(" generate_explanation:", response.generate_explanation)
except:
pass
print(" state:", response.state)
print(" error:", response.error)
gcs_destination = response.output_config.gcs_destination
print(" gcs_destination")
print(" output_uri_prefix:", gcs_destination.output_uri_prefix)
return gcs_destination.output_uri_prefix, response.state
predictions, state = get_batch_prediction_job(batch_job_id)
###Output
_____no_output_____
###Markdown
Get the predictionsWhen the batch prediction is done processing, the job state will be `JOB_STATE_SUCCEEDED`.Finally, you view the predictions stored at the Cloud Storage path you set as output. The predictions will be in JSONL format, which you indicated at the time you made the batch prediction job, under a subfolder starting with the name `prediction`, and under that folder will be a file called `predictions*.jsonl`.Now display (cat) the contents. You will see multiple JSON objects, one for each prediction.The first field `ID` is the image file you did the prediction on, and the second field `annotations` is the prediction, which is further broken down into:- `confidenceMask`: PNG pixel mask indicating the per-pixel confidence of the prediction.- `categoryMask`: PNG pixel mask indicating the predicted category of each pixel.
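If you want to work with the returned masks programmatically rather than just printing the raw JSON, a sketch like the one below may help. It assumes the field names described above (`ID`, `annotations`, `categoryMask`) and that the mask is returned as a base64-encoded PNG; depending on the model version the masks may instead be returned as Cloud Storage URIs, so treat this as an illustration rather than the service's exact response schema.

```python
# Illustrative sketch only: parse one prediction line and decode the category mask.
# The field names and base64-PNG encoding are assumptions based on the description above.
import base64
import io
import json

import numpy as np
from PIL import Image


def decode_prediction_line(line):
    record = json.loads(line)
    annotations = record.get("annotations", {})
    # Assumed: the mask is base64-encoded PNG bytes; it may instead be a GCS URI.
    mask_bytes = base64.b64decode(annotations["categoryMask"])
    mask = np.array(Image.open(io.BytesIO(mask_bytes)))
    print("Image:", record.get("ID"))
    print("Mask shape:", mask.shape, "categories present:", np.unique(mask))
    return mask
```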
###Code
def get_latest_predictions(gcs_out_dir):
""" Get the latest prediction subfolder using the timestamp in the subfolder name"""
folders = !gsutil ls $gcs_out_dir
latest = ""
for folder in folders:
subfolder = folder.split("/")[-2]
if subfolder.startswith("prediction-"):
if subfolder > latest:
latest = folder[:-1]
return latest
while True:
predictions, state = get_batch_prediction_job(batch_job_id, True)
if state != aip.JobState.JOB_STATE_SUCCEEDED:
print("The job has not completed:", state)
if state == aip.JobState.JOB_STATE_FAILED:
raise Exception("Batch Job Failed")
else:
folder = get_latest_predictions(predictions)
! gsutil ls $folder/prediction*.jsonl
! gsutil cat $folder/prediction*.jsonl
break
time.sleep(60)
###Output
_____no_output_____
###Markdown
Cleaning upTo clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.Otherwise, you can delete the individual resources you created in this tutorial:- Dataset- Pipeline- Model- Endpoint- Batch Job- Custom Job- Hyperparameter Tuning Job- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____ |
Data/Code/rbc_data.ipynb | ###Markdown
US Production Data for RBC Modeling
###Code
import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
pd.plotting.register_matplotlib_converters()
# Load API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
# Download nominal GDP, nominal personal consumption expenditures, nominal
# gross private domestic investment, the GDP deflator, and an index of hours
# worked in the nonfarm business sector produced by the BLS. All data are
# from FRED and are quarterly.
gdp = fp.series('GDP')
cons = fp.series('PCEC')
invest = fp.series('GPDI')
hours = fp.series('HOANBS')
defl = fp.series('GDPDEF')
# Make sure that all of the downloaded series have the same data ranges
gdp,cons,invest,hours,defl = fp.window_equalize([gdp,cons,invest,hours,defl])
# Compute real GDP, real consumption, real investment
gdp.data = gdp.data/defl.data*100
cons.data = cons.data/defl.data*100
invest.data = invest.data/defl.data*100
# Print units
print('Hours units: ',hours.units)
print('Deflator units:',defl.units)
###Output
Hours units: Index 2012=100
Deflator units: Index 2012=100
###Markdown
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:\begin{align}Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\\C_t & = (1-s)Y_t \tag{2}\\Y_t & = C_t + I_t \tag{3}\\K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\\A_{t+1} & = (1+g)A_t \tag{5}\\L_{t+1} & = (1+n)L_t \tag{6}.\end{align}Here the model is assumed to be quarterly so $n$ is the *quarterly* growth rate of labor hours, $g$ is the *quarterly* growth rate of TFP, and $\delta$ is the *quarterly* rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\delta$ so we'll have to *calibrate* these values using statistics computed from the data that we've already obtained.Let lowercase letters denote a variable that's been divided by $A_t^{1/(1-\alpha)}L_t$. E.g.,\begin{align}y_t = \frac{Y_t}{A_t^{1/(1-\alpha)}L_t}\tag{7}\end{align}Then (after substituting consumption from the model), the scaled version of the model can be written as: \begin{align}y_t & = k_t^{\alpha} \tag{8}\\i_t & = sy_t \tag{9}\\k_{t+1} & = i_t + (1-\delta-n-g')k_t,\tag{10}\end{align}where $g' = g/(1-\alpha)$ is the growth rate of $A_t^{1/(1-\alpha)}$. In the steady state:\begin{align}k & = \left(\frac{s}{\delta+n+g'}\right)^{\frac{1}{1-\alpha}} \tag{11}\end{align}which means that the ratio of capital to output is constant:\begin{align}\frac{k}{y} & = \frac{s}{\delta+n+g'} \tag{12}\end{align}and therefore the steady state ratio of depreciation to output is:\begin{align}\overline{\delta K/ Y} & = \frac{\delta s}{\delta + n + g'} \tag{13}\end{align}where $\overline{\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\delta$ given $\overline{\delta K/ Y}$, $s$, $n$, and $g'$.Furthermore, in the steady state, the growth rate of output is constant:\begin{align}\frac{\Delta Y}{Y} & = n + g' \tag{14}\end{align} 1. Assume $\alpha = 0.35$.2. Calibrate $s$ as the average ratio of investment to GDP.3. Calibrate $n$ as the average quarterly growth rate of labor hours.4. Calibrate $g'$ as the average quarterly growth rate of real GDP minus $n$.5. Calculate the average ratio of depreciation to GDP $\overline{\delta K/ Y}$ and use the result to calibrate $\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\delta$ from the following steady state relationship:\begin{align}\delta & = \frac{\left( \overline{\delta K/ Y} \right)\left(n + g' \right)}{s - \left( \overline{\delta K/ Y} \right)} \tag{15}\end{align}6. Calibrate $K_0$ by assuming that the capital stock is initially equal to its steady state value:\begin{align}K_0 & = \left(\frac{s}{\delta + n + g'}\right) Y_0 \tag{16}\end{align}Then, armed with calibrated values for $K_0$ and $\delta$, compute $K_1, K_2, \ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method: http://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf
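For completeness, Equation (15) is just Equation (13) solved for $\delta$:

\begin{align}
\overline{\delta K/ Y}\,\left(\delta + n + g'\right) & = \delta s \\
\delta \left(s - \overline{\delta K/ Y}\right) & = \overline{\delta K/ Y}\,\left(n + g'\right) \\
\delta & = \frac{\left(\overline{\delta K/ Y}\right)\left(n + g'\right)}{s - \overline{\delta K/ Y}}
\end{align}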
###Code
# Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(invest.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
# Construct the capital series. Note that the GPD and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = invest.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')
# plot the computed capital series
plt.plot(capital.data.index,capital.data,'-',lw=3,alpha = 0.7)
plt.ylabel(capital.units)
plt.title(capital.title)
plt.grid()
# Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
# Plot the computed capital series
plt.plot(tfp.data.index,tfp.data,'-',lw=3,alpha = 0.7)
plt.ylabel(tfp.units)
plt.title(tfp.title)
plt.grid()
# Convert each series into per capita using civilian pop 16 and over
gdp = gdp.per_capita(civ_pop=True)
cons = cons.per_capita(civ_pop=True)
invest = invest.per_capita(civ_pop=True)
hours = hours.per_capita(civ_pop=True)
capital = capital.per_capita(civ_pop=True)
# Put GDP, consumption, and investment in units of thousands of dollars per person
gdp.data = gdp.data*1000
cons.data = cons.data*1000
invest.data = invest.data*1000
capital.data = capital.data*1000
# Scale hours per person to equal 100 in the fourth quarter of 2012 (the observation dated 2012-10-01)
hours.data = hours.data/hours.data.loc['2012-10-01']*100
# Make sure TFP series has same length as the rest (since the .per_capita() function may affect the date range).
tfp,gdp = fp.window_equalize([tfp,gdp])
# Compute and plot log real GDP, log consumption, log investment, log hours
gdp_log = gdp.log()
cons_log = cons.log()
invest_log = invest.log()
hours_log = hours.log()
capital_log = capital.log()
tfp_log = tfp.log()
# HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend = gdp_log.hp_filter()
cons_log_cycle,cons_log_trend = cons_log.hp_filter()
invest_log_cycle,invest_log_trend = invest_log.hp_filter()
hours_log_cycle,hours_log_trend = hours_log.hp_filter()
capital_log_cycle,capital_log_trend = capital_log.hp_filter()
tfp_log_cycle,tfp_log_trend = tfp_log.hp_filter()
# Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':cons.data,
'consumption_trend':np.exp(cons_log_trend.data),
'consumption_cycle':cons_log_cycle.data,
'investment':invest.data,
'investment_trend':np.exp(invest_log_trend.data),
'investment_cycle':invest_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
},index = gdp.data.index)
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend.csv')
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend_cycle.csv')
###Output
_____no_output_____
###Markdown
US Production Data for RBC Modeling
###Code
import pandas as pd
import numpy as np
import fredpy as fp
import matplotlib.pyplot as plt
plt.style.use('classic')
%matplotlib inline
pd.plotting.register_matplotlib_converters()
# Load API key
fp.api_key = fp.load_api_key('fred_api_key.txt')
# Download nominal GDP, nominal personal consumption expenditures, nominal
# gross private domestic investment, the GDP deflator, and an index of hours
# worked in the nonfarm business sector produced by the BLS. All data are
# from FRED and are quarterly.
gdp = fp.series('GDP')
cons = fp.series('PCEC')
invest = fp.series('GPDI')
hours = fp.series('HOANBS')
defl = fp.series('GDPDEF')
# Make sure that all of the downloaded series have the same data ranges
gdp,cons,invest,hours,defl = fp.window_equalize([gdp,cons,invest,hours,defl])
# Compute real GDP, real consumption, real investment
gdp.data = gdp.data/defl.data*100
cons.data = cons.data/defl.data*100
invest.data = invest.data/defl.data*100
# Print units
print('Hours units: ',hours.units)
print('Deflator units:',defl.units)
###Output
Hours units: Index 2012=100
Deflator units: Index 2012=100
###Markdown
Next, compute the quarterly capital stock series for the US using the perpetual inventory method. The discrete-time Solow growth model is given by:\begin{align}Y_t & = A_tK_t^{\alpha}L_t^{1-\alpha} \tag{1}\\C_t & = (1-s)Y_t \tag{2}\\Y_t & = C_t + I_t \tag{3}\\K_{t+1} & = I_t + (1-\delta)K_t \tag{4}\\A_{t+1} & = (1+g)A_t \tag{5}\\L_{t+1} & = (1+n)L_t \tag{6}.\end{align}Here the model is assumed to be quarterly so $n$ is the *quarterly* growth rate of labor hours, $g$ is the *quarterly* growth rate of TFP, and $\delta$ is the *quarterly* rate of depreciation of the capital stock. Given a value of the quarterly depreciation rate $\delta$, an investment series $I_t$, and an initial capital stock $K_0$, the law of motion for the capital stock, Equation (4), can be used to compute an implied capital series. But we don't know $K_0$ or $\delta$ so we'll have to *calibrate* these values using statistics computed from the data that we've already obtained.Let lowercase letters denote a variable that's been divided by $A_t^{1/(1-\alpha)}L_t$. E.g.,\begin{align}y_t = \frac{Y_t}{A_t^{1/(1-\alpha)}L_t}\tag{7}\end{align}Then (after substituting consumption from the model), the scaled version of the model can be written as: \begin{align}y_t & = k_t^{\alpha} \tag{8}\\i_t & = sy_t \tag{9}\\k_{t+1} & = i_t + (1-\delta-n-g')k_t,\tag{10}\end{align}where $g' = g/(1-\alpha)$ is the growth rate of $A_t^{1/(1-\alpha)}$. In the steady state:\begin{align}k & = \left(\frac{s}{\delta+n+g'}\right)^{\frac{1}{1-\alpha}} \tag{11}\end{align}which means that the ratio of capital to output is constant:\begin{align}\frac{k}{y} & = \frac{s}{\delta+n+g'} \tag{12}\end{align}and therefore the steady state ratio of depreciation to output is:\begin{align}\overline{\delta K/ Y} & = \frac{\delta s}{\delta + n + g'} \tag{13}\end{align}where $\overline{\delta K/ Y}$ is the long-run average ratio of depreciation to output. We can use Equation (13) to calibrate $\delta$ given $\overline{\delta K/ Y}$, $s$, $n$, and $g'$.Furthermore, in the steady state, the growth rate of output is constant:\begin{align}\frac{\Delta Y}{Y} & = n + g' \tag{14}\end{align} 1. Assume $\alpha = 0.35$.2. Calibrate $s$ as the average of the ratio of investment to GDP.3. Calibrate $n$ as the average quarterly growth rate of labor hours.4. Calibrate $g'$ as the average quarterly growth rate of real GDP minus $n$.5. Calculate the average ratio of depreciation to GDP $\overline{\delta K/ Y}$ and use the result to calibrate $\delta$. That is, find the average ratio of Current-Cost Depreciation of Fixed Assets (FRED series ID: M1TTOTL1ES000) to GDP (FRED series ID: GDPA). Then calibrate $\delta$ from the following steady state relationship:\begin{align}\delta & = \frac{\left( \overline{\delta K/ Y} \right)\left(n + g' \right)}{s - \left( \overline{\delta K/ Y} \right)} \tag{15}\end{align}6. Calibrate $K_0$ by assuming that the capital stock is initially equal to its steady state value:\begin{align}K_0 & = \left(\frac{s}{\delta + n + g'}\right) Y_0 \tag{16}\end{align}Then, armed with calibrated values for $K_0$ and $\delta$, compute $K_1, K_2, \ldots$ recursively. See Timothy Kehoe's notes for more information on the perpetual inventory method:http://users.econ.umn.edu/~tkehoe/classes/GrowthAccountingNotes.pdf
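For completeness, Equation (15) follows directly from Equation (13): multiplying both sides of (13) by $(\delta + n + g')$ gives\begin{align}\left(\overline{\delta K/ Y}\right)(\delta + n + g') = \delta s \quad\Rightarrow\quad \delta\left(s - \overline{\delta K/ Y}\right) = \left(\overline{\delta K/ Y}\right)(n + g'),\end{align}which rearranges to (15).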
###Code
# Set the capital share of income
alpha = 0.35
# Average saving rate
s = np.mean(invest.data/gdp.data)
# Average quarterly labor hours growth rate
n = (hours.data[-1]/hours.data[0])**(1/(len(hours.data)-1)) - 1
# Average quarterly real GDP growth rate
g = ((gdp.data[-1]/gdp.data[0])**(1/(len(gdp.data)-1)) - 1) - n
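# Note: the two growth rates above are geometric averages: if x_T = x_0*(1+r)**(T-1),
# then r = (x_T/x_0)**(1/(T-1)) - 1. Subtracting n from the GDP growth rate yields g' as in Eq. (14).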
# Compute annual depreciation rate
depA = fp.series('M1TTOTL1ES000')
gdpA = fp.series('gdpa')
gdpA = gdpA.window([gdp.data.index[0],gdp.data.index[-1]])
gdpA,depA = fp.window_equalize([gdpA,depA])
deltaKY = np.mean(depA.data/gdpA.data)
delta = (n+g)*deltaKY/(s-deltaKY)
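# Optional sanity check: plugging the calibrated delta back into the steady-state
# relationship (13) should reproduce the measured long-run depreciation-to-output ratio.
assert np.isclose(delta*s/(delta + n + g), deltaKY), 'calibration of delta is inconsistent'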
# print calibrated values:
print('Avg saving rate: ',round(s,5))
print('Avg annual labor growth:',round(4*n,5))
print('Avg annual gdp growth: ',round(4*g,5))
print('Avg annual dep rate: ',round(4*delta,5))
# Construct the capital series. Note that the GDP and investment data are reported on an annualized basis
# so divide by 4 to get quarterly data.
capital = np.zeros(len(gdp.data))
capital[0] = gdp.data[0]/4*s/(n+g+delta)
for t in range(len(gdp.data)-1):
capital[t+1] = invest.data[t]/4 + (1-delta)*capital[t]
# Save in a fredpy series
capital = fp.to_fred_series(data = capital,dates =gdp.data.index,units = gdp.units,title='Capital stock of the US',frequency='Quarterly')
# plot the computed capital series
plt.plot(capital.data.index,capital.data,'-',lw=3,alpha = 0.7)
plt.ylabel(capital.units)
plt.title(capital.title)
plt.grid()
# Compute TFP
tfp = gdp.data/capital.data**alpha/hours.data**(1-alpha)
tfp = fp.to_fred_series(data = tfp,dates =gdp.data.index,units = gdp.units,title='TFP of the US',frequency='Quarterly')
# Plot the computed TFP series
plt.plot(tfp.data.index,tfp.data,'-',lw=3,alpha = 0.7)
plt.ylabel(tfp.units)
plt.title(tfp.title)
plt.grid()
# Convert each series into per capita using civilian pop 16 and over
gdp = gdp.per_capita(civ_pop=True)
cons = cons.per_capita(civ_pop=True)
invest = invest.per_capita(civ_pop=True)
hours = hours.per_capita(civ_pop=True)
capital = capital.per_capita(civ_pop=True)
# Put GDP, consumption, and investment in units of thousands of dollars per person
gdp.data = gdp.data*1000
cons.data = cons.data*1000
invest.data = invest.data*1000
capital.data = capital.data*1000
# Scale hours per person to equal 100 in the fourth quarter (October) of 2012
hours.data = hours.data/hours.data.loc['2012-10-01']*100
# Make sure TFP series has same length as the rest (since the .per_capita() function may affect the date range).
tfp,gdp = fp.window_equalize([tfp,gdp])
# Compute and plot log real GDP, log consumption, log investment, log hours
gdp_log = gdp.log()
cons_log = cons.log()
invest_log = invest.log()
hours_log = hours.log()
capital_log = capital.log()
tfp_log = tfp.log()
# HP filter to isolate trend and cyclical components
gdp_log_cycle,gdp_log_trend = gdp_log.hp_filter()
cons_log_cycle,cons_log_trend = cons_log.hp_filter()
invest_log_cycle,invest_log_trend = invest_log.hp_filter()
hours_log_cycle,hours_log_trend = hours_log.hp_filter()
capital_log_cycle,capital_log_trend = capital_log.hp_filter()
tfp_log_cycle,tfp_log_trend = tfp_log.hp_filter()
# Create a DataFrame with actual and trend data
data = pd.DataFrame({
'gdp':gdp.data,
'gdp_trend':np.exp(gdp_log_trend.data),
'gdp_cycle':gdp_log_cycle.data,
'consumption':cons.data,
'consumption_trend':np.exp(cons_log_trend.data),
'consumption_cycle':cons_log_cycle.data,
'investment':invest.data,
'investment_trend':np.exp(invest_log_trend.data),
'investment_cycle':invest_log_cycle.data,
'hours':hours.data,
'hours_trend':np.exp(hours_log_trend.data),
'hours_cycle':hours_log_cycle.data,
'capital':capital.data,
'capital_trend':np.exp(capital_log_trend.data),
'capital_cycle':capital_log_cycle.data,
'tfp':tfp.data,
'tfp_trend':np.exp(tfp_log_trend.data),
'tfp_cycle':tfp_log_cycle.data,
},index = gdp.data.index)
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend.csv')
# Create a DataFrame with actual, trend, and cycle data
columns_ordered =[]
names = ['gdp','consumption','investment','hours','capital','tfp']
for name in names:
columns_ordered.append(name)
columns_ordered.append(name+'_trend')
columns_ordered.append(name+'_cycle')
data[columns_ordered].to_csv('../Csv/rbc_data_actual_trend_cycle.csv')
###Output
_____no_output_____ |
Lesson4/Activity7.ipynb | ###Markdown
Activity 2: Extracting data from Packt's websiteExtract the following from the Packt website: 1) FAQs and their answers from https://www.packtpub.com/books/info/packt/faq 2) Phone numbers and emails from https://www.packtpub.com/books/info/packt/terms-and-conditions
###Code
import urllib3
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.packtpub.com/books/info/packt/faq')
r.status_code
r.text
#403 means forbidden
http = urllib3.PoolManager()
rr = http.request('GET', 'https://www.packtpub.com/books/info/packt/faq')
rr.status
rr.data[:1000]
soup = BeautifulSoup(rr.data, 'html.parser')
questions = [question.text.strip() for question in soup.find_all('div',attrs={"class":"faq-item-question-text float-left"})]
questions
answers = [answer.text.strip() for answer in soup.find_all('div',attrs={"class":"faq-item-answer"})]
answers
import pandas as pd
pd.DataFrame({'questions':questions, 'answers':answers}).head()
###Output
_____no_output_____
###Markdown
Extract phone/fax numbers and email addresses from the terms and conditions page of Packt
###Code
rr_tc = http.request('GET', 'https://www.packtpub.com/books/info/packt/terms-and-conditions')
rr_tc.status
soup2 = BeautifulSoup(rr_tc.data, 'html.parser')
import re
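# Note: the first pattern below matches e-mail addresses; the second matches phone/fax
# numbers written as "+NN (0) NNN NNN NNN" (country code, a literal "(0)", then three 3-digit groups).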
set(re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}",soup2.text))
re.findall(r"\+\d{2}\s{1}\(0\)\s\d{3}\s\d{3}\s\d{3}",soup2.text)
###Output
_____no_output_____
###Markdown
Activity 2: Extracting data from Packt's websiteExtract the following from the Packt website: 1) FAQs and their answers from https://www.packtpub.com/books/info/packt/faq 2) Phone numbers and emails from https://www.packtpub.com/books/info/packt/terms-and-conditions
###Code
import urllib3
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.packtpub.com/books/info/packt/faq')
r.status_code
r.text
#403 means forbidden
http = urllib3.PoolManager()
rr = http.request('GET', 'https://www.packtpub.com/books/info/packt/faq')
rr.status
rr.data[:1000]
soup = BeautifulSoup(rr.data, 'html.parser')
questions = [question.text.strip() for question in soup.find_all('div',attrs={"class":"faq-item-question-text float-left"})]
questions
answers = [answer.text.strip() for answer in soup.find_all('div',attrs={"class":"faq-item-answer"})]
answers
import pandas as pd
pd.DataFrame({'questions':questions, 'answers':answers}).head()
###Output
_____no_output_____
###Markdown
Extract phone/fax numbers and email addresses from the terms and conditions page of Packt
###Code
rr_tc = http.request('GET', 'https://www.packtpub.com/books/info/packt/terms-and-conditions')
rr_tc.status
soup2 = BeautifulSoup(rr_tc.data, 'html.parser')
import re
set(re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,4}",soup2.text))
re.findall(r"\+\d{2}\s{1}\(0\)\s\d{3}\s\d{3}\s\d{3}",soup2.text)
###Output
_____no_output_____ |
studies/2 - Computer Vision 1/Atividade2.ipynb | ###Markdown
Activity 2 - Computer Vision
###Code
import sys
import cv2
import matplotlib.pyplot as plt
import fotogrametria
if (sys.version_info > (3, 0)):
    # Python 3 mode
    import importlib
    importlib.reload(fotogrametria) # Make sure Jupyter always re-reads your work
else:
    # Python 2 mode
    reload(fotogrametria)
###Output
_____no_output_____
###Markdown
**Deadline: 02/09** The deliverable for this whole activity is *Python* source code. You *must* record videos demonstrating the result and post them (privately is fine) on YouTube. For Linux users, the shortcut to record the screen is Ctrl + Alt + Shift + R. You can hand the work in by pushing the code to GitHub and posting the video *or* by showing it live to the instructors. **Do not program in Jupyter** - use a Python program. **Link to the video**: You should have a sheet of paper with the attached pattern. *Tip:* if you don't have one, a tablet or *smartphone* also works. Part 1 - calibration Listen to the instructor's explanation of the *pinhole* camera model and draw the distance $f$ that separates the focal plane from the camera's pupil. Modify the function `encontrar_foco` in the file [fotogrametria.py](fotogrametria.py) to compute the focal length, and test your function with the cell below.
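For reference, the test below is consistent with the usual pinhole relation $f = Z \cdot h / H$, where $Z$ is the distance to the object, $H$ its real size and $h$ its size in pixels. A minimal sketch of what `encontrar_foco` is expected to compute (the graded implementation lives in `fotogrametria.py`; the parameter names here are only illustrative):

    def encontrar_foco(distancia, tamanho_real, tamanho_imagem):
        # pinhole camera model: f = Z * h / H
        return distancia * tamanho_imagem / tamanho_real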
###Code
f = fotogrametria.encontrar_foco(80,12.70,100)
print(f)
# Expected output:
# 629.9212598425197
###Output
629.9212598425197
###Markdown
Part 2 - Segment the circles Modify the functions `segmenta_circulo_ciano` and `segmenta_circulo_magenta` in [fotogrametria.py](fotogrametria.py) to segment the cyan and magenta circles.
###Code
img = cv2.imread("img/calib01.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
segmentado_ciano = fotogrametria.segmenta_circulo_ciano(hsv)
segmentado_magenta = fotogrametria.segmenta_circulo_magenta(hsv)
f, ax = plt.subplots(1, 3, figsize=(16,6))
ax[0].imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
ax[1].imshow(segmentado_ciano, cmap="gray")
ax[2].imshow(segmentado_magenta, cmap="gray")
ax[0].set_title('original')
ax[1].set_title('segmentado_ciano')
ax[2].set_title('segmentado_magenta')
ax[0].axis('off')
ax[1].axis('off')
ax[2].axis('off')
plt.show()
# Expected output:
# One image with the cyan circle in white and another with the magenta one
###Output
_____no_output_____
###Markdown
Part 3 - Find the largest contour of each circle Modify the function `encontrar_maior_contorno` in [fotogrametria.py](fotogrametria.py) to compute only the largest contour of the cyan circle and the largest contour of the magenta circle.
###Code
# Draw the contours
# Cyan
ccontorno_ciano = fotogrametria.encontrar_maior_contorno(segmentado_ciano)
contornos_img = img.copy()
if ccontorno_ciano is not None:
cv2.drawContours(contornos_img, [ccontorno_ciano], -1, [0, 0, 255], 3)
# Magenta
contorno_magenta = fotogrametria.encontrar_maior_contorno(segmentado_magenta)
if contorno_magenta is not None:
cv2.drawContours(contornos_img, [contorno_magenta], -1, [255, 0, 0], 3)
plt.axis('off')
plt.imshow(cv2.cvtColor(contornos_img, cv2.COLOR_BGR2RGB))
# Expected output:
# An image with a contour drawn on both circles (red contour on the cyan circle and blue on the magenta one)
###Output
_____no_output_____
###Markdown
Part 4 - Using the contours, compute the centers of the circles Modify the function `encontrar_centro_contorno` in [fotogrametria.py](fotogrametria.py) to compute the position, in pixels, of the center of each circle.
###Code
# Find the centers of the contours
if ccontorno_ciano is not None and contorno_magenta is not None:
centro_ciano = fotogrametria.encontrar_centro_contorno(ccontorno_ciano)
centro_magenta = fotogrametria.encontrar_centro_contorno(contorno_magenta)
cv2.line(contornos_img, centro_ciano, centro_magenta, (0, 255, 0), thickness=3, lineType=8)
plt.axis('off')
plt.imshow(cv2.cvtColor(contornos_img, cv2.COLOR_BGR2RGB))
# Expected output:
# An image with a line connecting the centers of the circles
###Output
_____no_output_____
###Markdown
Part 5 - Compute the distance between the circles Modify the function `calcular_h` in [fotogrametria.py](fotogrametria.py) to compute the vertical distance between the circles and, from that, compute the camera's focal length.
###Code
try:
h = fotogrametria.calcular_h(centro_ciano, centro_magenta)
    print('Distance between the circles = %s'%h)
    f = fotogrametria.encontrar_foco(80,12.70,h)
    print('Focal length = %s'%f)
except:
    pass
# Expected output:
# Distance between the circles = 161
# Focal length = 1014.1732283464568
###Output
Distance between the circles = 161.0
Focal length = 1014.1732283464568
###Markdown
Part 6 - Compute the distance to the image Now, using the focal length you found and the functions that compute the distance between the circles, modify the functions `calcular_distancia_entre_circulos` and `encontrar_distancia` in [fotogrametria.py](fotogrametria.py) to compute the distance between the circles and, using the focal length, the distance, in cm, to the image.
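For reference, `encontrar_distancia` inverts the same pinhole relation used for calibration, $D = f \cdot H / h$. A minimal sketch, again with illustrative parameter names only (the graded implementation lives in `fotogrametria.py`):

    def encontrar_distancia(foco, tamanho_real, tamanho_imagem):
        # inverse of the focal-length relation: D = f * H / h
        return foco * tamanho_real / tamanho_imagem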
###Code
img_test = cv2.imread("img/test01.jpg")
h, centro_ciano, centro_magenta, contornos_img = fotogrametria.calcular_distancia_entre_circulos(img_test)
d = fotogrametria.encontrar_distancia(f,12.70,h)
print('Distance to the image = %s'%d)
# Expected output:
# Distance to the image = 40.124610591900314
###Output
Distance to the image = 40.124415891157206
###Markdown
Part 7 - Compute the angle Modify the function `calcular_angulo_com_horizontal_da_imagem` in [fotogrametria.py](fotogrametria.py) to compute the angle of the circles with respect to the horizontal of the image.
###Code
img_test = cv2.imread("img/angulo04.jpg")
h, centro_ciano, centro_magenta, contornos_img = fotogrametria.calcular_distancia_entre_circulos(img_test)
d = fotogrametria.encontrar_distancia(f,12.70,h)
angulo = fotogrametria.calcular_angulo_com_horizontal_da_imagem(centro_ciano, centro_magenta)
print('The angle is %s degrees'%angulo)
# Expected output:
# angulo01.jpg: angle of 90.0 degrees
# angulo02.jpg: angle of 141.9836231755637 degrees
# angulo03.jpg: angle of 178.9291754521327 degrees
# angulo04.jpg: angle of 28.7246296098617 degrees
###Output
The angle is 28.56758430890487 degrees
|
src/test/datascience/uiTests/notebooks/ipyvolume_widgets.ipynb | ###Markdown
Install ipyvolume: `pip install ipyvolume`
###Code
import ipyvolume as ipv
import numpy as np
x, y, z, u, v, w = np.random.random((6, 1000))*2-1
selected = np.random.randint(0, 1000, 100)
ipv.figure()
quiver = ipv.quiver(x, y, z, u, v, w, size=5, size_selected=8, selected=selected)
from ipywidgets import FloatSlider, ColorPicker, VBox, jslink
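# Note: jslink (used below) links a widget trait and a plot trait on the browser side,
# so moving the sliders or picking a colour updates the quiver immediately, without a server round-trip.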
size = FloatSlider(min=0, max=30, step=0.1)
size_selected = FloatSlider(min=0, max=30, step=0.1)
color = ColorPicker()
color_selected = ColorPicker()
jslink((quiver, 'size'), (size, 'value'))
jslink((quiver, 'size_selected'), (size_selected, 'value'))
jslink((quiver, 'color'), (color, 'value'))
jslink((quiver, 'color_selected'), (color_selected, 'value'))
VBox([ipv.gcc(), size, size_selected, color, color_selected])
import ipyvolume as ipv
import numpy as np
s = 1/2**0.5
# 4 vertices for the tetrahedron
x = np.array([1., -1, 0, 0])
y = np.array([0, 0, 1., -1])
z = np.array([-s, -s, s, s])
# and 4 surfaces (triangles), where the number refer to the vertex index
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1,3,2)]
ipv.figure()
# we draw the tetrahedron
mesh = ipv.plot_trisurf(x, y, z, triangles=triangles, color='orange')
# and also mark the vertices
ipv.scatter(x, y, z, marker='sphere', color='blue')
ipv.xyzlim(-2, 2)
ipv.show()
###Output
_____no_output_____
###Markdown
Install ipyvolume: `pip install ipyvolume`
###Code
import ipyvolume as ipv
import numpy as np
x, y, z, u, v, w = np.random.random((6, 1000))*2-1
selected = np.random.randint(0, 1000, 100)
ipv.figure()
quiver = ipv.quiver(x, y, z, u, v, w, size=5, size_selected=8, selected=selected)
from ipywidgets import FloatSlider, ColorPicker, VBox, jslink
size = FloatSlider(min=0, max=30, step=0.1)
size_selected = FloatSlider(min=0, max=30, step=0.1)
color = ColorPicker()
color_selected = ColorPicker()
jslink((quiver, 'size'), (size, 'value'))
jslink((quiver, 'size_selected'), (size_selected, 'value'))
jslink((quiver, 'color'), (color, 'value'))
jslink((quiver, 'color_selected'), (color_selected, 'value'))
VBox([ipv.gcc(), size, size_selected, color, color_selected])
import ipyvolume as ipv
import numpy as np
s = 1/2**0.5
# 4 vertices for the tetrahedron
x = np.array([1., -1, 0, 0])
y = np.array([0, 0, 1., -1])
z = np.array([-s, -s, s, s])
# and 4 surfaces (triangles), where the number refer to the vertex index
triangles = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1,3,2)]
ipv.figure()
# we draw the tetrahedron
mesh = ipv.plot_trisurf(x, y, z, triangles=triangles, color='orange')
# and also mark the vertices
ipv.scatter(x, y, z, marker='sphere', color='blue')
ipv.xyzlim(-2, 2)
ipv.show()
###Output
_____no_output_____ |
s5/sheet05.ipynb | ###Markdown
Osnabrück University - Computer Vision (Winter Term 2020/21) - Prof. Dr.-Ing. G. Heidemann, Ulf Krumnack, Axel Schaffland Exercise Sheet 05: Segmentation 2 IntroductionThis week's sheet should be solved and handed in before the end of **Saturday, December 5, 2020**. If you need help (and Google and other resources were not enough), feel free to contact your groups' designated tutor or whomever of us you run into first. Please upload your results to your group's Stud.IP folder. Assignment 0: Math recap (Periodic functions) [0 Points]This exercise is supposed to be very easy, does not give any points, and is voluntary. There will be a similar exercise on every sheet. It is intended to revise some basic mathematical notions that are assumed throughout this class and to allow you to check if you are comfortable with them. Usually you should have no problem to answer these questions offhand, but if you feel unsure, this is a good time to look them up again. You are always welcome to discuss questions with the tutors or in the practice session. Also, if you have a (math) topic you would like to recap, please let us know. **a)** What are periodic functions? Can you provide a definition? YOUR ANSWER HERE **b)** What are *amplitude*, *frequency*, *wave length*, and *phase* of a sine function? How can you change these properties? YOUR ANSWER HERE **c)** How are sine and cosine defined for complex arguments? In what sense does this generalize the real case? YOUR ANSWER HERE Assignment 1: Edge-based segmentation [5 Points] a) GradientsWhat is the gradient of a pixel? How do we calculate the first, how the second derivative of an image? The gradient of a pixel is given by the intensity differences to its neighboring pixels (4- or 8-neighborhood). The gradient points in the direction of the steepest intensity increase. We can imagine an image as a function of two variables (the x- and y-coordinates) whose value is the intensity of each pixel. The whole image then forms a landscape of valleys and hills with respect to its shading and coloring. A Sobel-filtered image approximates the first derivative at each pixel, while the Laplace filter yields the second derivative. b) Edge linkingDescribe in your own words the idea of edge linking. What is the goal? Why does it not necessarily yield closed edge contours? Edge linking is a variant of **edge-based segmentation** that uses gradient magnitude to link edges. The stronger the gradient value at position $(x, y)$, the higher the probability that it is a real edge and not noise. If $(x, y)$ belongs to an edge, the idea is that there should be more edge pixels orthogonal to the gradient direction.**Goal:** Find segments by a search for boundaries between regions of different features.Edge linking does not necessarily yield closed contours because chains are only followed where the gradient magnitude is strong enough: where an edge becomes weak, noisy, or interrupted, the linking simply stops, leaving gaps. c) Zero crossingsExplain what zero crossings are. Why does the detection of zero crossings always lead to closed contours? A zero-crossing in general is a point where the sign of a function changes, represented by an intercept of the axis in the graph of the function. In our context, zero crossings of the second derivative correspond to edges.Detecting zero crossings always leads to closed contours because the zero crossings of the (smoothed) Laplacian are the boundaries between regions where it is positive and regions where it is negative; such level-set boundaries of a continuous function are always closed curves (or end at the image border). c) Zero crossings (implementation)Provide an implementation of the zero crossing procedure described in (CV-07 slide 71). To get sensible results you should smooth the image before applying the Laplacian filter, e.g. using the Laplacian of a Gaussian (you may use built-in functions for the filtering steps).
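To make the derivative computation concrete, a minimal sketch of the standard discrete kernels behind the Sobel and Laplace filters (independent of the solution below, which uses `skimage.filters`):

    import numpy as np
    from scipy.ndimage import convolve

    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # first derivative in x-direction
    laplace = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])    # second derivative (4-neighborhood Laplacian)

    def image_derivatives(img):
        """Return (approximate df/dx, Laplacian) of a 2D grayscale image."""
        return convolve(img, sobel_x), convolve(img, laplace)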
###Code
from skimage import filters
from imageio import imread
import matplotlib.pyplot as plt
from scipy.ndimage import shift
import numpy as np
%matplotlib inline
img = imread('images/swampflower.png').astype(float)
img /= img.max()
# Now compute edges and then zero crossings using the 4-neighborhood and the 8-neighborhood
# YOUR CODE HERE
def four_shift(edges):
x_shift = shift(edges, (1, 0))
y_shift = shift(edges, (0, 1))
return (edges * x_shift <= 0) + (edges * y_shift <= 0)
def eight_shift(edges):
tmp = four_shift(edges)
xy_shift_one = shift(edges, (1, -1))
xy_shift_two = shift(edges, (1, 1))
return tmp + (edges * xy_shift_one <= 0) + (edges * xy_shift_two <= 0)
smooth_img = filters.gaussian(img, sigma=5)
edges = filters.laplace(smooth_img)
zero_crossings_n4 = four_shift(edges)
zero_crossings_n8 = eight_shift(edges)
plt.figure(figsize=(12, 12))
plt.gray()
plt.subplot(2,2,1); plt.axis('off'); plt.imshow(img); plt.title('original')
plt.subplot(2,2,2); plt.axis('off'); plt.imshow(edges); plt.title('edges')
plt.subplot(2,2,3); plt.axis('off'); plt.imshow(zero_crossings_n4); plt.title('zero crossings (N4)')
plt.subplot(2,2,4); plt.axis('off'); plt.imshow(zero_crossings_n8); plt.title('zero crossings (N8)' )
plt.show()
###Output
_____no_output_____
###Markdown
Assignment 2: Watershed transform [5 Points] a) Watershed transformExplain in your own words the idea of watershed transform. How do the two different approaches from the lecture work? Why does watershed transform always give a closed contour? Watershed transform finds segments enclosed by edges. The gradient magnitude image is interpreted as a relief: its ridges (the watersheds) become the segment boundaries. The water flows downhill to a local minimum, and the result is a set of segments enclosed by edges, although the differing strengths of the edges (noise) are ignored.Two methods:- **rain**: for each pixel, follow the steepest descent and record the local minimum where the water gathers; pixels draining into the same minimum form one segment- **flood**: starting at the local minima, the groundwater floods the relief; wherever water from two different basins would meet, a watershed (dam) is built.Watershed transform always gives closed contours because every pixel ends up in exactly one catchment basin, and the watershed lines are exactly the boundaries separating neighboring basins; each basin is therefore completely surrounded by watershed pixels (or the image border). b) ImplementationNow implement the watershed transform using the flooding approach (CV-07 slide 76, but note that the algorithm presented there is somewhat simplified!). Obviously, built-in functions for computing watershed transform are not allowed, but all other functions may be used. In this example we apply the watershed transform to a distance transformed image, so you **do not** have to take the gradient image, but can apply the watershed transform directly.
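Since only the flooding variant is implemented below, a minimal sketch of the 'rain' idea (a toy version that ignores plateaus and is meant purely as an illustration):

    import numpy as np

    def rain_segments(img):
        """Label each pixel by the local minimum it drains into via steepest descent (toy version)."""
        h, w = img.shape
        labels = np.full((h, w), -1, dtype=int)
        next_label = 0
        for x in range(h):
            for y in range(w):
                path = [(x, y)]
                while True:  # follow the steepest descent until a local minimum is reached
                    cx, cy = path[-1]
                    neighbours = [(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                                  if 0 <= cx + dx < h and 0 <= cy + dy < w]
                    nxt = min(neighbours, key=lambda p: img[p])
                    if img[nxt] >= img[cx, cy]:
                        break
                    path.append(nxt)
                mx, my = path[-1]
                if labels[mx, my] < 0:  # first time this minimum is reached: new catchment basin
                    labels[mx, my] = next_label
                    next_label += 1
                for p in path:
                    labels[p] = labels[mx, my]
        return labels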
###Code
import numpy as np
import imageio
import matplotlib.pyplot as plt
%matplotlib inline
def watershed(img, step=1):
"""
Perform watershed transform on a grayscale image.
Args:
img (ndarray): The grayscale image.
step (int): The rise of the waterlevel at each step. Default 1.
Returns:
edges (ndarray): A binary image containing the watersheds.
"""
NO_LABEL = 0
WATERSHED = 1
new_label = 2
# initialize labels
label = np.zeros(img.shape, np.uint16)
# YOUR CODE HERE
for h in range(int(img.max())):
for x in range(img.shape[0] - 1):
for y in range(img.shape[1] - 1):
if h >= img[x][y] and label[x][y] == 0:
# flooded - 3 cases
nl = get_neighbor_labels(label, x, y)
                    # labels of neighbours that already belong to a basin
                    # (ignore unlabelled pixels and watershed pixels)
                    basin_labels = [l for l in nl if l > WATERSHED]
                    # isolated: no labelled neighbour yet -> start a new basin
                    if len(basin_labels) == 0:
                        label[x][y] = new_label
                        new_label += 1
                    # segment: all labelled neighbours belong to the same basin -> join it
                    elif all(l == basin_labels[0] for l in basin_labels):
                        label[x][y] = basin_labels[0]
                    # watershed: neighbours from different basins meet here
                    else:
                        label[x][y] = WATERSHED
for x in range(label.shape[0]):
for y in range(label.shape[1]):
if label[x][y] == WATERSHED:
label[x][y] = 0
else:
label[x][y] = 1
return label
def get_neighbor_labels(label, x, y):
return [
label[x - 1][y - 1], label[x][y - 1], label[x + 1][y - 1], label[x - 1][y],
label[x + 1][y], label[x - 1][y + 1], label[x][y + 1], label[x + 1][y + 1]
]
img = imageio.imread('images/dist_circles.png', pilmode='L')
plt.gray()
plt.subplot(1,2,1)
plt.axis('off')
plt.imshow(img)
plt.subplot(1,2,2)
plt.axis('off')
plt.imshow(watershed(img))
plt.show()
###Output
_____no_output_____
###Markdown
c) Application: mazeYou can use watershed transform to find your way through a maze. To do so, first apply a distance transform to the maze and then flood the result. The watershed will show you the way through the maze. Explain why this works.You can use built-in functions instead of your own watershed function.
###Code
import numpy as np
import imageio
import matplotlib.pyplot as plt
from scipy.ndimage.morphology import distance_transform_edt
from skimage.segmentation import watershed
%matplotlib inline
img = imageio.imread('images/maze2.png', pilmode = 'L') # 'maze1.png' or 'maze2.png'
result = img[:, :, np.newaxis].repeat(3, 2)
# YOUR CODE HERE
dt = distance_transform_edt(img)
water = watershed(dt)
result[water == 1] = (255, 0, 0)
plt.figure(figsize=(10, 10))
plt.title('Solution')
plt.axis('off')
plt.gray()
plt.imshow(result)
plt.show()
###Output
_____no_output_____
###Markdown
The solution path is the watershed between the catchment basins. Assignment 3: $k$-means segmentation [5 Points] **a)** Explain the idea of $k$-means clustering and how it can be used for segmentation. Color segmentation in general is used to find segments of constant color. $k$-means in general is used to separate data into $k$ clusters of similar properties, each represented by a cluster center.$k$-means for color segmentation starts with $k$ random RGB values as cluster centers and assigns each RGB value in the image to its closest cluster center based on the RGB difference. Afterwards, a new center is computed for each cluster based on its average RGB value. It is an iterative procedure of the two steps 'center computation' and 'cluster assignment update' until convergence up to a certain threshold is reached. **b)** Implement k-means clustering for color segmentation of an RGB image (no use of `scipy.cluster.vq.kmeans` or similar functions allowed here, but you may use functions like `numpy.mean`, `scipy.spatial.distance.pdist` and similar utility functions). Stop calculation when center vectors do not change more than a predefined threshold. Avoid empty clusters by re-initializing the corresponding center vector. (Empirically) determine a good value for $k$ for clustering the image 'peppers.png'.**Bonus** If you want you can visualize the intermediate steps of the clustering process. First let's take a look at how our image looks in RGB colorspace.
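The two steps described above can be written very compactly; a minimal vectorised sketch, independent of the reference implementation below:

    import numpy as np
    from scipy.spatial import distance

    def kmeans_step(pixels, centers):
        """One k-means iteration on an (N, 3) array of colors and a (k, 3) array of centers."""
        assignment = np.argmin(distance.cdist(pixels, centers), axis=1)  # nearest center per pixel
        new_centers = np.array([pixels[assignment == i].mean(axis=0)
                                if np.any(assignment == i)
                                else pixels[np.random.randint(len(pixels))]  # re-seed empty clusters
                                for i in range(len(centers))])
        return assignment, new_centers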
###Code
from mpl_toolkits.mplot3d import Axes3D
from imageio import imread
import matplotlib.pyplot as plt
%matplotlib inline
img = imread('images/peppers.png')
vec = img.reshape((-1, img.shape[2]))
vec_scaled = vec / 255
fig = plt.figure(figsize=(12, 12))
ax = fig.add_subplot(111, projection='3d')
ret = ax.scatter(vec[:, 0], vec[:, 1], vec[:, 2], c=vec_scaled, marker='.')
import numpy as np
from scipy.spatial import distance
from IPython import display
from imageio import imread
import time
import matplotlib.pyplot as plt
%matplotlib inline
def kmeans_rgb(img, k, threshold=0, do_display=None):
"""
k-means clustering in RGB space.
Args:
img (numpy.ndarray): an RGB image
k (int): the number of clusters
threshold (float): Maximal change for convergence criterion.
do_display (bool): Whether or not to plot, intermediate steps.
Results:
cluster (numpy.ndarray): an array of the same size as `img`,
containing for each pixel the cluster it belongs to
centers (numpy.ndarray): 'number of clusters' x 3 array.
RGB color for each cluster center.
"""
# YOUR CODE HERE
# initialize random cluster centers (k random rgb tuples)
centers = np.array([np.random.randint(255, size=3) for _ in range(k)])
# list of rgb values in img
rgb_list = [[img[x][y][0], img[x][y][1], img[x][y][2]] for x in range(img.shape[0]) for y in range(img.shape[1])]
change = np.inf
while change > threshold:
change = 0
# compute distance between each pair of the two collections of inputs
rgb_dist_to_centers = distance.cdist(rgb_list, centers)
# assign closest cluster center to each rgb value
cluster_for_each_rgb = np.array([np.argmin(distances) for distances in rgb_dist_to_centers])
for i in range(k):
if i in cluster_for_each_rgb:
# determine colors that are assigned to the currently considered cluster
colors = [rgb_list[x] for x in range(len(rgb_list)) if cluster_for_each_rgb[x] == i]
# update cluster center
new_center = []
for channel in range(3):
avg = 0
for x in colors:
avg += x[channel]
new_center.append(int(avg / len(colors)))
else:
# re-initialize center
new_center = np.random.randint(255, size=3)
change += distance.cdist([centers[i]], [new_center])
centers[i] = new_center
return cluster_for_each_rgb.reshape((img.shape[0], img.shape[1])), centers
img = imread('images/peppers.png')
cluster, centers = kmeans_rgb(img, k=7, threshold=0, do_display=True)
plt.imshow(centers[cluster])
plt.show()
###Output
_____no_output_____
###Markdown
**c)** Now do the same in the HSV space (remember its special topological structure). Check if you can improve the results by ignoring some of the HSV channels.
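One way to respect the circular hue channel (a minimal sketch; the solution below simply uses the Euclidean distance on all three channels):

    import numpy as np

    def hue_distance(h1, h2):
        """Distance between hue values in [0, 1), taking the wrap-around of the hue circle into account."""
        d = np.abs(h1 - h2)
        return np.minimum(d, 1.0 - d)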
###Code
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import distance
from skimage import color
from imageio import imread
%matplotlib inline
# from matplotlib.colors import rgb_to_hsv, hsv_to_rgb
img = imread('images/peppers.png', pilmode = 'RGB')
def kmeans_hsv(img, k, threshold = 0):
"""
k-means clustering in HSV space.
Args:
img (numpy.ndarray): an HSV image
k (int): the number of clusters
threshold (float):
Results:
cluster (numpy.ndarray): an array of the same size as `img`,
containing for each pixel the cluster it belongs to
centers (numpy.ndarray): an array
"""
# YOUR CODE HERE
# initialize random cluster centers (k random hsv tuples)
centers = np.array([np.random.uniform(0, 1, size=3) for _ in range(k)])
# list of rgb values in img
hsv_list = [[img[x][y][0], img[x][y][1], img[x][y][2]] for x in range(img.shape[0]) for y in range(img.shape[1])]
change = np.inf
while change > threshold:
change = 0
# compute distance between each pair of the two collections of inputs
hsv_dist_to_centers = distance.cdist(hsv_list, centers)
# assign closest cluster center to each hsv value
cluster_for_each_hsv = np.array([np.argmin(distances) for distances in hsv_dist_to_centers])
for i in range(k):
if i in cluster_for_each_hsv:
# determine colors that are assigned to the currently considered cluster
colors = [hsv_list[x] for x in range(len(hsv_list)) if cluster_for_each_hsv[x] == i]
# update cluster center
new_center = []
for channel in range(3):
avg = 0
for x in colors:
avg += x[channel]
new_center.append(avg / len(colors))
else:
# re-initialize center
new_center = np.random.uniform(0, 1, size=3)
change += distance.cdist([centers[i]], [new_center])
centers[i] = new_center
return cluster_for_each_hsv.reshape((img.shape[0], img.shape[1])), centers
img_hsv = color.rgb2hsv(img)
k = 7
theta = 0.01
cluster, centers_hsv = kmeans_hsv(img_hsv[:,:,:], k, theta)
if (centers_hsv.shape[1] == 3):
plt.imshow(color.hsv2rgb(centers_hsv[cluster]))
else:
plt.gray()
plt.imshow(np.squeeze(centers_hsv[cluster]))
plt.show()
###Output
_____no_output_____
###Markdown
Assignment 4: Interactive Region Growing [5 Points]Implement flood fill as described in (CV07 slides 123ff.).In a recursive implementation the floodfill function is first called for the seed pixel. Inside the function, if the color of the pixel it was called with is similar to the seed color, the pixel is added to the region and a recursive call is made for the four neighbouring pixels. [Other](https://en.wikipedia.org/wiki/Flood_fill) more elegant solutions exist as well.The function `on_press` is called when a mouse button is pressed inside the canvas. From there, call `floodfill`. Use the filtered HSV image `img_filtered` for your computation, and show the computed region around the seed point (the position where the mouse button was pressed) in the original image. You may use a mask to store which pixels belong to the region (and to save which pixels you have already visited). Hint: If you cannot see the image, try restarting the kernel.
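For comparison, a minimal iterative, stack-based sketch that avoids Python's recursion limit and uses the same mask/threshold idea as the recursive solution below:

    import numpy as np

    def floodfill_iterative(img, seed, seed_color, threshold):
        """Grow a region around `seed` over all pixels whose color is within `threshold` of `seed_color`."""
        h, w = img.shape[:2]
        mask = np.zeros((h, w), dtype=bool)
        stack = [seed]
        while stack:
            x, y = stack.pop()
            if 0 <= x < h and 0 <= y < w and not mask[x, y] \
                    and np.linalg.norm(img[x, y] - seed_color) < threshold:
                mask[x, y] = True
                stack.extend([(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)])
        return mask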
###Code
%matplotlib widget
import imageio
import math
import numpy as np
from matplotlib import pyplot as plt
from skimage import color
import scipy.ndimage as ndimage
from sys import setrecursionlimit
from scipy.spatial import distance
threshold = .08;
setrecursionlimit(100000)
def floodfill(img, mask, x, y, color):
"""Recursively grows region around seed point
Args:
img (ndarray): The image in which the region is grown
mask (boolean ndarray): Visited pixels which belong to the region.
x (uint): X coordinate of the pixel. Checks if this pixels belongs to the region
y (uint): Y coordinate of the pixel.
color (list): The color at the seed position
Return:
mask (boolean ndarray): mask containing region
"""
# YOUR CODE HERE
if distance.cdist([img[x][y]], [color]) < threshold:
mask[x,y] = True
eight_neighbourhood = get_neighbors(x, y)
        for nx, ny in eight_neighbourhood:
            # stay inside the image and skip pixels that have already been visited
            if 0 <= nx < img.shape[0] and 0 <= ny < img.shape[1] and not mask[nx][ny]:
                mask = floodfill(img, mask, nx, ny, color)
return mask
def get_neighbors(x, y):
return [
(x - 1, y - 1), (x, y - 1), (x + 1, y - 1), (x - 1, y),
(x + 1, y), (x - 1, y + 1), (x, y + 1), (x + 1, y + 1)
]
def on_press(event):
"""Mouse button press event handler
Args:
event: The mouse event
"""
y = math.floor(event.xdata)
x = math.floor(event.ydata)
color = img_filtered[x, y, :]
# YOUR CODE HERE
mask = floodfill(img_filtered, np.zeros((img.shape[0], img.shape[1])), x, y, color)
img[mask == True] = (255, 255, 255)
plt.imshow(img)
fig.canvas.draw()
def fill_from_pixel(img, img_filtered, x,y):
""" Calls floodfill from a pixel position
Args:
img (ndarray): IO image on which fill is drawn.
img_filtered (ndarray): Processing image on which floodfill is computed.
x (uint): Coordinates of pixel position.
y (uint): Coordinates of pixel position.
Returns:
img (ndarray): Image with grown area in white
"""
    mask = np.zeros((img.shape[0],img.shape[1]), dtype=bool)
color = img_filtered[x,y, :]
mask = floodfill(img_filtered, mask, x, y, color)
img[mask] = (255, 255, 255)
return img
img = imageio.imread('images/peppers.png')
img_hsv = color.rgb2hsv(img)
img_filtered = ndimage.median_filter(img_hsv, 5)
#img = fill_from_pixel(img, img_filtered, 200, 300) # Comment in to deactivate simple testing at fixed position
fig = plt.figure()
ax = fig.add_subplot(111)
plt.imshow(img)
fig.canvas.mpl_connect('button_press_event', on_press)
plt.show()
###Output
_____no_output_____ |
00_ImageWang_Inpainting_baseline_ep80_192.ipynb | ###Markdown
Image网 Submission `192x192` This contains a submission for the Image网 (Imagewang) leaderboard in the `192x192` category.In this notebook we:1. Train on 1 pretext task: - Train a network to do image inpainting on Image网's `/train`, `/unsup` and `/val` images. 2. Train on 4 downstream tasks: - We load the pretext weights and train for `5` epochs. - We load the pretext weights and train for `20` epochs. - We load the pretext weights and train for `80` epochs. - We load the pretext weights and train for `200` epochs. Our leaderboard submissions are the accuracies we get on each of the downstream tasks.
###Code
import json
import torch
import numpy as np
from functools import partial
from fastai2.layers import Mish, MaxPool, LabelSmoothingCrossEntropy
from fastai2.learner import Learner
from fastai2.metrics import accuracy, top_k_accuracy
from fastai2.basics import DataBlock, RandomSplitter, GrandparentSplitter, CategoryBlock
from fastai2.optimizer import ranger, Adam, SGD, RMSProp
from fastai2.vision.all import *
from fastai2.vision.core import *
from fastai2.vision.augment import *
from fastai2.vision.learner import unet_learner, unet_config
from fastai2.vision.models.xresnet import xresnet50, xresnet34
from fastai2.data.transforms import Normalize, parent_label
from fastai2.data.external import download_url, URLs, untar_data
from fastcore.utils import num_cpus
from torch.nn import MSELoss
from torchvision.models import resnet34
###Output
_____no_output_____
###Markdown
Pretext Task: Image Inpainting
###Code
# We create this dummy class in order to create a transform that ONLY operates on images of this type
# We will use it to create all input images
class PILImageInput(PILImage): pass
class RandomCutout(RandTransform):
"Picks a random scaled crop of an image and resize it to `size`"
split_idx = None
def __init__(self, min_n_holes=5, max_n_holes=10, min_length=5, max_length=50, **kwargs):
super().__init__(**kwargs)
self.min_n_holes=min_n_holes
self.max_n_holes=max_n_holes
self.min_length=min_length
self.max_length=max_length
def encodes(self, x:PILImageInput):
"""
Note that we're accepting our dummy PILImageInput class
fastai2 will only pass images of this type to our encoder.
This means that our transform will only be applied to input images and won't
be run against output images.
"""
n_holes = np.random.randint(self.min_n_holes, self.max_n_holes)
pixels = np.array(x) # Convert to mutable numpy array. FeelsBadMan
h,w = pixels.shape[:2]
for n in range(n_holes):
h_length = np.random.randint(self.min_length, self.max_length)
w_length = np.random.randint(self.min_length, self.max_length)
h_y = np.random.randint(0, h)
h_x = np.random.randint(0, w)
y1 = int(np.clip(h_y - h_length / 2, 0, h))
y2 = int(np.clip(h_y + h_length / 2, 0, h))
x1 = int(np.clip(h_x - w_length / 2, 0, w))
x2 = int(np.clip(h_x + w_length / 2, 0, w))
pixels[y1:y2, x1:x2, :] = 0
return Image.fromarray(pixels, mode='RGB')
torch.cuda.set_device(4)
# Default parameters
gpu=None
lr=1e-2
size=128
sqrmom=0.99
mom=0.9
eps=1e-6
epochs=15
bs=64
mixup=0.
opt='ranger'
arch='xresnet50'
sh=0.
sa=0
sym=0
beta=0.
act_fn='Mish'
fp16=0
pool='AvgPool'
dump=0
runs=1
meta=''
# Chosen parameters
lr=8e-3
sqrmom=0.99
mom=0.95
eps=1e-6
bs=64
opt='ranger'
sa=1
fp16=1 #NOTE: My GPU cannot run fp16 :'(
arch='xresnet50'
pool='MaxPool'
gpu=0
# NOTE: Normally loaded from their corresponding string
m = xresnet34
act_fn = Mish
pool = MaxPool
def get_dbunch(size, bs, sh=0., workers=None):
if size<=160:
path = URLs.IMAGEWANG_160
else:
path = URLs.IMAGEWANG
source = untar_data(path)
if workers is None: workers = min(8, num_cpus())
#CHANGE: Input is ImageBlock(cls=PILImageInput)
#CHANGE: Output is ImageBlock
#CHANGE: Splitter is RandomSplitter (instead of on /val folder)
item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5), RandomCutout]
# batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None
batch_tfms = [Normalize.from_stats(*imagenet_stats)]
dblock = DataBlock(blocks=(ImageBlock(cls=PILImageInput), ImageBlock),
splitter=RandomSplitter(0.1),
get_items=get_image_files,
get_y=lambda o: o,
item_tfms=item_tfms,
batch_tfms=batch_tfms)
return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers)
name = 'imagewang_inpainting_80_192.pth'
# Use the Ranger optimizer
opt_func = partial(ranger, mom=mom, sqr_mom=sqrmom, eps=eps, beta=beta)
size = 192
bs = 64
dbunch = get_dbunch(size, bs, sh=sh)
#CHANGE: We're predicting pixel values, so we're just going to predict an output for each RGB channel
dbunch.vocab = ['R', 'G', 'B']
len(dbunch.train.dataset), len(dbunch.valid.dataset)
dbunch.show_batch()
learn = unet_learner(dbunch, partial(m, sa=sa), pretrained=False, opt_func=opt_func, metrics=[], loss_func=MSELoss()).to_fp16()
cbs = MixUp(mixup) if mixup else []
learn.fit_flat_cos(80, lr, wd=1e-2, cbs=cbs)
# I'm not using fastai2's .export() because I only want to save
# the model's parameters.
torch.save(learn.model[0].state_dict(), name)
###Output
_____no_output_____
###Markdown
Downstream Task: Image Classification
###Code
def get_dbunch(size, bs, sh=0., workers=None):
if size<=224:
path = URLs.IMAGEWANG_160
else:
path = URLs.IMAGEWANG
source = untar_data(path)
if workers is None: workers = min(8, num_cpus())
item_tfms=[RandomResizedCrop(size, min_scale=0.35), FlipItem(0.5)]
batch_tfms = [Normalize.from_stats(*imagenet_stats)]
# batch_tfms=RandomErasing(p=0.9, max_count=3, sh=sh) if sh else None
dblock = DataBlock(blocks=(ImageBlock, CategoryBlock),
splitter=GrandparentSplitter(valid_name='val'),
get_items=get_image_files, get_y=parent_label,
item_tfms=item_tfms, batch_tfms=batch_tfms)
return dblock.dataloaders(source, path=source, bs=bs, num_workers=workers,
)#item_tfms=item_tfms, batch_tfms=batch_tfms)
dbunch = get_dbunch(size, bs, sh=sh)
m_part = partial(m, c_out=20, act_cls=torch.nn.ReLU, sa=sa, sym=sym, pool=pool)
###Output
_____no_output_____
###Markdown
5 Epochs
###Code
epochs = 5
runs = 5
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
###Output
Run: 0
###Markdown
* Run 1: 0.362942* Run 2: 0.372868* Run 3: 0.342326* Run 4: 0.360143* Run 5: 0.357088Accuracy: **35.91%** 20 Epochs
###Code
epochs = 20
runs = 3
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
###Output
Run: 0
###Markdown
* Run 1: 0.592263* Run 2: 0.588445* Run 3: 0.595571Accuracy: **59.21%** 80 epochs
###Code
epochs = 80
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load(name)
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
###Output
Run: 0
###Markdown
Accuracy: **61.44%** 200 epochs
###Code
epochs = 200
runs = 1
for run in range(runs):
print(f'Run: {run}')
ch = nn.Sequential(nn.AdaptiveAvgPool2d(1), Flatten(), nn.Linear(512, 20))
learn = cnn_learner(dbunch, m_part, opt_func=opt_func, pretrained=False,
metrics=[accuracy,top_k_accuracy], loss_func=LabelSmoothingCrossEntropy(),
config={'custom_head':ch})
if dump: print(learn.model); exit()
if fp16: learn = learn.to_fp16()
cbs = MixUp(mixup) if mixup else []
# # Load weights generated from training on our pretext task
state_dict = torch.load('imagewang_inpainting_15_epochs_nopretrain.pth')
learn.model[0].load_state_dict(state_dict)
learn.unfreeze()
learn.fit_flat_cos(epochs, lr, wd=1e-2, cbs=cbs)
###Output
Run: 0
|
0_set_up.ipynb | ###Markdown
Deep Learning - Part 0This notebook explains how to install all the prerequisites and libraries that you will need to run the following tutorials. If you can execute all the following cells, you are good to go. Environment configuration Install condaThere are two major package managers in Python: pip and conda. For this tutorial we will be using conda which, besides being a package manager, is also useful as a version manager. There are two main ways to install conda: Anaconda and Miniconda. Either will work for this course; just follow the instructions here, according to your operating system:https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.htmlregular-installation Create an environment with all the Anaconda libraries $ conda create --name deeplearning python=3.7 anacondaDon't forget to activate the new env $ conda activate deeplearning Install PyTorchThis year we will be using [PyTorch](https://pytorch.org/) as the library to build and train the deep learning models. The library is a little less abstract than other possibilities such as [Keras](https://www.tensorflow.org/guide/keras) but gives a little more control to the user, which in turn allows more customisation.In order to install PyTorch we recommend following the [official documentation](https://pytorch.org/get-started/locally/). On your local machine, you will install the version that only has CPU support (i.e. no CUDA version), but in Nabucodonosor you need to install the version with GPU support. CPUInstall PyTorch for CPU: (deeplearning) $ conda install pytorch torchvision cpuonly -c pytorch Then just check that the installed version is >= 1.7.0
###Code
import torch
torch.__version__
###Output
_____no_output_____
###Markdown
GPUThe GPU build of PyTorch depends on the CUDA version installed. Nabucodonosor has many installations of CUDA in the `/opt/cuda` directory. You need to add `nvcc` to the `$PATH`. For example, to set up for CUDA 10.2, do the following: (deeplearning) $ export PATH=/opt/cuda/10.2/bin:$PATHThat has to be done every time you enter Nabucodonosor; to avoid that, add it to your `.bashrc` file: (deeplearning) $ echo "export PATH=/opt/cuda/10.2/bin:$PATH" >> $HOME/.bashrcThen, install the PyTorch library: (deeplearning) $ conda install pytorch torchvision cudatoolkit=10.2 -c pytorchCheck if this is working by running the following cell:
###Code
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Google ColabIn case you want to install PyTorch on Google Colab, it is possible, but first you need to check which version of `nvcc` is running. For that, run the following:
###Code
!nvcc --version
###Output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Thu_Jan_28_19:32:09_PST_2021
Cuda compilation tools, release 11.2, V11.2.142
Build cuda_11.2.r11.2/compiler.29558016_0
###Markdown
According to what the previous cell tells you, you'll need to install the proper drivers, with `pip` instead of conda. Please refer to the [getting started](https://pytorch.org/get-started/locally/) page and check what to do. Install other librariesWe need the `gensim` library to deal with word embeddings, so you need to install it. Plus, the `mlflow` tool to keep track of experiments. Finally, `tqdm` is a handy progress bar to keep track of different processes. (deeplearning) $ conda install gensim mlflow tqdm -c conda-forgeIf you have problems importing `gensim` and get this error: ImportError: cannot import name 'open' from 'smart_open' (C:\ProgramData\Anaconda3\lib\site-packages\smart_open\__init__.py)Then try updating `smart_open`: (deeplearning) $ conda update smart_open Download embeddings and dataset CIFAR10The dataset we will use (CIFAR10) is part of the `torchvision` package, which makes it fairly easy to download. You can learn more details on it [here](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.htmlloading-and-normalizing-cifar10):
###Code
import torchvision
torchvision.datasets.CIFAR10(root='./data', download=True);
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
###Markdown
Glove Embeddings and IMDB reviews DatasetSome examples that we will run for text classification using Convolutional Neural Networks require the Glove Embeddings as well as the IMDB reviews dataset:
###Code
%%bash
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/glove.6B.50d.txt.gz -o ./data/glove.6B.50d.txt.gz
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/imdb_reviews.csv.gz -o ./data/imdb_reviews.csv.gz
###Output
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
6 65.9M 6 4224k 0 0 12.7M 0 0:00:05 --:--:-- 0:00:05 12.6M
58 65.9M 58 38.6M 0 0 29.1M 0 0:00:02 0:00:01 0:00:01 29.1M
100 65.9M 100 65.9M 0 0 31.5M 0 0:00:02 0:00:02 --:--:-- 31.5M
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
29 25.3M 29 7616k 0 0 33.2M 0 --:--:-- --:--:-- --:--:-- 33.0M
100 25.3M 100 25.3M 0 0 35.0M 0 --:--:-- --:--:-- --:--:-- 35.0M
###Markdown
MeLi Challenge 2019 DatasetFor the course project, we will be using a dataset based on the 2019 MeLi Challenge dataset, for automatic classification of product categories:
###Code
%%bash
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/meli-challenge-2019.tar.bz2 -o ./data/meli-challenge-2019.tar.bz2
tar jxvf ./data/meli-challenge-2019.tar.bz2 -C ./data/
###Output
meli-challenge-2019/
meli-challenge-2019/spanish.test.jsonl.gz
meli-challenge-2019/portuguese.validation.jsonl.gz
meli-challenge-2019/portuguese.train.jsonl.gz
meli-challenge-2019/spanish.train.jsonl.gz
meli-challenge-2019/spanish_token_to_index.json.gz
meli-challenge-2019/portuguese_token_to_index.json.gz
meli-challenge-2019/spanish.validation.jsonl.gz
meli-challenge-2019/portuguese.test.jsonl.gz
###Markdown
Deep Learning - Part 0This notebook explains how to install all the prerequisites and libraries that you will need to run the following tutorials. If you can execute all the following cells, you are good to go. Environment configuration Install condaThere are two major package managers in Python: pip and conda. For this tutorial we will be using conda which, besides being a package manager, is also useful as a version manager. There are two main ways to install conda: Anaconda and Miniconda. Either will work for this course; just follow the instructions here, according to your operating system:https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.htmlregular-installation Create an environment with all the Anaconda libraries $ conda create --name deeplearning python=3.7 anacondaDon't forget to activate the new env $ conda activate deeplearning Install PyTorchThis year we will be using [PyTorch](https://pytorch.org/) as the library to build and train the deep learning models. The library is a little less abstract than other possibilities such as [Keras](https://www.tensorflow.org/guide/keras) but gives a little more control to the user, which in turn allows more customisation.In order to install PyTorch we recommend following the [official documentation](https://pytorch.org/get-started/locally/). On your local machine, you will install the version that only has CPU support (i.e. no CUDA version), but in Nabucodonosor you need to install the version with GPU support. CPUInstall PyTorch for CPU: (deeplearning) $ conda install pytorch torchvision cpuonly -c pytorch Then just check that the installed version is >= 1.7.0
###Code
import torch
torch.__version__
###Output
_____no_output_____
###Markdown
GPUThe GPU build of PyTorch depends on the CUDA version installed. Nabucodonosor has many installations of CUDA in the `/opt/cuda` directory. You need to add `nvcc` to the `$PATH`. For example, to set up for CUDA 10.2, do the following: (deeplearning) $ export PATH=/opt/cuda/10.2/bin:$PATHThat has to be done every time you enter Nabucodonosor; to avoid that, add it to your `.bashrc` file: (deeplearning) $ echo "export PATH=/opt/cuda/10.2/bin:$PATH" >> $HOME/.bashrcThen, install the PyTorch library: (deeplearning) $ conda install pytorch torchvision cudatoolkit=10.2 -c pytorchCheck if this is working by running the following cell:
###Code
torch.cuda.is_available()
###Output
_____no_output_____
###Markdown
Google ColabIn case you want to install PyTorch on Google Colab, it is possible, but first you need to check which version of `nvcc` is running. For that, run the following:
###Code
!nvcc --version
###Output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Wed_Oct_23_19:24:38_PDT_2019
Cuda compilation tools, release 10.2, V10.2.89
###Markdown
According to what the previous cell tells you, you'll need to install the proper drivers, with `pip` instead of conda. Please refer to the [getting started](https://pytorch.org/get-started/locally/) page and check what to do. Install other librariesWe need the `gensim` library to deal with word embeddings, so you need to install it. Plus, the `mlflow` tool to keep track of experiments. Finally, `tqdm` is a handy progress bar to keep track of different processes. (deeplearning) $ conda install gensim mlflow tqdm -c conda-forgeIf you have problems importing `gensim` and get this error: ImportError: cannot import name 'open' from 'smart_open' (C:\ProgramData\Anaconda3\lib\site-packages\smart_open\__init__.py)Then try updating `smart_open`: (deeplearning) $ conda update smart_open Download embeddings and dataset CIFAR10The dataset we will use (CIFAR10) is part of the `torchvision` package, which makes it fairly easy to download. You can learn more details on it [here](https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.htmlloading-and-normalizing-cifar10):
###Code
import torchvision
torchvision.datasets.CIFAR10(root='./data', download=True);
###Output
Downloading https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz to ./data/cifar-10-python.tar.gz
###Markdown
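Going back to the extra libraries installed above, a quick sanity check that they import and report their versions (a minimal sketch):

```python
import gensim
import mlflow
import tqdm

print('gensim:', gensim.__version__)
print('mlflow:', mlflow.__version__)
print('tqdm:', tqdm.__version__)
```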
Glove Embeddings and IMDB reviews DatasetSome examples that we will run for text classification using Convolutional Neural Networks require the Glove Embeddings as well as the IMDB reviews dataset:
###Code
%%bash
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/glove.6B.50d.txt.gz -o ./data/glove.6B.50d.txt.gz
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/imdb_reviews.csv.gz -o ./data/imdb_reviews.csv.gz
###Output
###Markdown
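Once the file above is downloaded, a minimal sketch of how the 50-dimensional GloVe vectors could be loaded into a plain Python dict (no assumptions about any particular embedding library):

```python
import gzip
import numpy as np

embeddings = {}
with gzip.open('./data/glove.6B.50d.txt.gz', 'rt', encoding='utf-8') as fh:
    for line in fh:
        parts = line.rstrip().split(' ')
        embeddings[parts[0]] = np.array(parts[1:], dtype=np.float32)

# expect roughly 400k vocabulary entries, each a 50-dimensional vector
print(len(embeddings), embeddings['the'].shape)
```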
MeLi Challenge 2019 DatasetFor the course project, we will be using a dataset based on the 2019 MeLi Challenge dataset, for automatic classification of product categories:
###Code
%%bash
curl -L https://cs.famaf.unc.edu.ar/\~ccardellino/resources/diplodatos/meli-challenge-2019.tar.bz2 -o ./data/meli-challenge-2019.tar.bz2
tar jxvf ./data/meli-challenge-2019.tar.bz2 -C ./data/
###Output
meli-challenge-2019/
meli-challenge-2019/spanish.train.csv.gz
meli-challenge-2019/portuguese.train.csv.gz
meli-challenge-2019/spanish.test.csv.gz
meli-challenge-2019/portuguese.test.csv.gz
###Markdown
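Once extracted, the files can be inspected with pandas; a small sketch (the path follows the tar listing above, and the column layout is whatever the CSV provides, not assumed here):

```python
import pandas as pd

spanish_train = pd.read_csv('./data/meli-challenge-2019/spanish.train.csv.gz')
print(spanish_train.shape)
print(spanish_train.head())
```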
Deep Learning - Part 0This notebook explains how to install all the prerequisites and libraries that you will need to run the following tutorials. If you can execute all the following cells, you are good to go. Environment configuration Install condaThere are two major package managers in Python: pip and conda. For this tutorial we will be using conda which, besides being a package manager, is also useful as a version manager. There are two main ways to install conda: Anaconda and Miniconda. Either will work for this course; just follow the instructions here, according to your operating system:https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html#regular-installation Create an environment with all the Anaconda libraries $ conda create --name deeplearning python=3.7 anacondaDon't forget to activate the new env $ conda activate deeplearning Install TensorFlowWe will use the [TensorFlow](https://www.tensorflow.org/) library to build and train models. In particular, we will use the [Keras](https://www.tensorflow.org/guide/keras) module, which is simpler to implement and understand, at the cost of losing flexibility when defining the architectures.In order to install TensorFlow we recommend following the [official documentation](https://www.tensorflow.org/install). On your local machine, you will install the version that only has CPU support, but on Nabucodonosor you need to install the version with [GPU support](https://www.tensorflow.org/install/gpu). CPUUpgrade `pip` to the latest version: (deeplearning) $ pip install --upgrade pipInstall tensorflow: (deeplearning) $ pip install --upgrade tensorflow Then just check that the installed version is 2.0.
###Code
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
###Markdown
GPUThe supported version of TensorFlow depends on the CUDA drivers installed on the machine. In the case of Nabucodonosor, the CUDA and cuDNN libraries are located in the /opt directory. You can check that the system has CUDA 10.x and cuDNN >= 7.4.1 installed, which is enough to install TensorFlow 2.0. (deeplearning) $ pip install tensorflow-gpu**WARNING**: changes between TensorFlow and Keras versions are not minor and your code will break if you don't migrate. For example: https://www.tensorflow.org/beta/guide/effective_tf2Now we need to tell TensorFlow where CUDA is installed by setting the environment variable LD_LIBRARY_PATH $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/10.0/lib64:/opt/cudnn/v7.6-cu10.0/ $ export CUDA_HOME=/opt/cuda/10.0It is convenient to add these statements to your `~/.bashrc` file, so they are executed every time you open a new console.To check if it works, execute the following cell
###Code
import tensorflow as tf
tf.test.is_gpu_available()
###Output
_____no_output_____
###Markdown
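A complementary check is to list the physical GPU devices TensorFlow can see; a small sketch using the TF 2.0-era API (the `experimental` prefix was dropped in later releases):

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
print('GPUs visible to TensorFlow:', gpus)
```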
Install other librariesWe need the `gensim` library to deal with word embeddings, so you need to install it, plus the `mlflow` tool to keep track of experiments. Also, to see a graphical representation of the Keras models, you need `graphviz` and `pydot`.```(deeplearning) $ pip install gensim mlflow(deeplearning) $ conda install graphviz python-graphviz pydot``` Download embeddings and dataset MNISTThe dataset we will use (MNIST) will be downloaded by Keras automatically the first time you use it. To save time, you can download it now by running the next cell.
###Code
df = tf.keras.datasets.mnist.load_data()
###Output
_____no_output_____
###Markdown
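The object returned by `load_data()` is a pair of `(train, test)` tuples; a short sketch to unpack and inspect it:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # expected: (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # expected: (10000, 28, 28) (10000,)
```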
Deep Learning - Part 0This notebook explains how to install all the prerequisites and libraries that you will need to run the following tutorials. If you can execute all the following cells, you are good to go. Environment configuration Install condaThere are two major package managers in Python: pip and conda. For this tutorial we will be using conda which, besides being a package manager, is also useful as a version manager. There are two main ways to install conda: Anaconda and Miniconda. Either will work for this course; just follow the instructions here, according to your operating system:https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html#regular-installation Create an environment with all the Anaconda libraries $ conda create --name deeplearning python=3.7 anacondaDon't forget to activate the new env $ conda activate deeplearning Install TensorFlowWe will use the [TensorFlow](https://www.tensorflow.org/) library to build and train models. In particular, we will use the [Keras](https://www.tensorflow.org/guide/keras) module, which is simpler to implement and understand, at the cost of losing flexibility when defining the architectures.In order to install TensorFlow we recommend following the [official documentation](https://www.tensorflow.org/install). On your local machine, you will install the version that only has CPU support, but on Nabucodonosor you need to install the version with [GPU support](https://www.tensorflow.org/install/gpu). CPUUpgrade `pip` to the latest version: (deeplearning) $ pip install --upgrade pipInstall tensorflow: (deeplearning) $ pip install --upgrade tensorflow Then just check that the installed version is 2.0.
###Code
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
###Markdown
GPUThe supported version of TensorFlow depends on the CUDA drivers installed on the machine. In the case of Nabucodonosor, the CUDA and cuDNN libraries are located in the /opt directory. You can check that the system has CUDA 10.x and cuDNN >= 7.4.1 installed, which is enough to install TensorFlow 2.0. (deeplearning) $ pip install tensorflow-gpu**WARNING**: changes between TensorFlow and Keras versions are not minor and your code will break if you don't migrate. For example: https://www.tensorflow.org/beta/guide/effective_tf2Now we need to tell TensorFlow where CUDA is installed by setting the environment variable LD_LIBRARY_PATH $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/10.0/lib64:/opt/cudnn/v7.6-cu10.0/ $ export CUDA_HOME=/opt/cuda/10.0It is convenient to add these statements to your `~/.bashrc` file, so they are executed every time you open a new console.To check if it works, execute the following cell
###Code
import tensorflow as tf
tf.test.is_gpu_available()
###Output
_____no_output_____
###Markdown
Install other librariesWe need the `gensim` library to deal with word embeddings, so you need to install it, plus the `mlflow` tool to keep track of experiments. Also, to see a graphical representation of the Keras models, you need `graphviz` and `pydot`.```(deeplearning) $ pip install gensim mlflow(deeplearning) $ conda install graphviz python-graphviz pydot``` Download embeddings and dataset MNISTThe dataset we will use (MNIST) will be downloaded by Keras automatically the first time you use it. To save time, you can download it now by running the next cell.
###Code
df = tf.keras.datasets.mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Deep Learning - Part 0This notebook explains how to install all the preriquistes and libraries that you will need to run the following tutorials. If you can execute all the following cells, you are good to go. Environment configuration Install condaThere are two major package managers in Python: pip and conda. For this tutorial we will be using conda which, besides being a package manager is also useful as a version manager. There are two main ways to install conda: Anaconda and Miniconda. Any will be useful for this course, just follow instructions here, according to your operative system:https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.htmlregular-installation Create an environment with all the Anaconda libraries $ conda create --name deeplearning python=3.7 anacondaDon't forget to activate the new env $ conda activate deeplearning Install TensorFlowWe will use the [TensorFlow](https://www.tensorflow.org/) library to build and train models. In particular, we will use [Keras](https://www.tensorflow.org/guide/keras) module, which are simpler to implement and understand, at the cost of lossing flexibility when defining the architectures.In order to install tensorflow we recommend following the [official documentation](https://www.tensorflow.org/install). In your local machine, you will install the version that only has cpu support, but in Nabucodonosor you need to install the version with [GPU support](https://www.tensorflow.org/install/gpu). CPUUpgrade `pip` to the latest version: (deeplearning) $ pip install --upgrade pipInstall tensorflow: (deeplearning) $ pip install --upgrade tensorflow Then just check the version installed is 2.0.
###Code
!pwd
import tensorflow as tf
tf.__version__
###Output
_____no_output_____
###Markdown
GPUThe supported version of TensorFlow depends on the CUDA drivers installed on the machine. In the case of Nabucodonosor, the CUDA and cuDNN libraries are located in the /opt directory. You can check that the system has CUDA 10.x and cuDNN >= 7.4.1 installed, which is enough to install TensorFlow 2.0. (deeplearning) $ pip install tensorflow-gpu**WARNING**: changes between TensorFlow and Keras versions are not minor and your code will break if you don't migrate. For example: https://www.tensorflow.org/beta/guide/effective_tf2Now we need to tell TensorFlow where CUDA is installed by setting the environment variable LD_LIBRARY_PATH $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/cuda/10.0/lib64:/opt/cudnn/v7.6-cu10.0/ $ export CUDA_HOME=/opt/cuda/10.0It is convenient to add these statements to your `~/.bashrc` file, so they are executed every time you open a new console.To check if it works, execute the following cell
###Code
import tensorflow as tf
tf.test.is_gpu_available()
###Output
_____no_output_____
###Markdown
Install other librariesWe need the `gensim` library to deal with word embeddings, so you need to install it, plus the `mlflow` tool to keep track of experiments. Also, to see a graphical representation of the Keras models, you need `graphviz` and `pydot`.```(deeplearning) $ pip install gensim mlflow(deeplearning) $ conda install graphviz python-graphviz pydot```
###Code
%%bash
pip install gensim mlflow graphviz pydot nltk
###Output
Requirement already satisfied: gensim in /users/mramirez/venv/lib/python3.7/site-packages (3.8.1)
Requirement already satisfied: mlflow in /users/mramirez/venv/lib/python3.7/site-packages (1.3.0)
Requirement already satisfied: graphviz in /users/mramirez/venv/lib/python3.7/site-packages (0.13)
Requirement already satisfied: pydot in /users/mramirez/venv/lib/python3.7/site-packages (1.4.1)
Collecting nltk
Downloading https://files.pythonhosted.org/packages/f6/1d/d925cfb4f324ede997f6d47bea4d9babba51b49e87a767c170b77005889d/nltk-3.4.5.zip (1.5MB)
Requirement already satisfied: scipy>=0.18.1 in /users/mramirez/venv/lib/python3.7/site-packages (from gensim) (1.3.1)
Requirement already satisfied: six>=1.5.0 in /users/mramirez/venv/lib/python3.7/site-packages (from gensim) (1.12.0)
Requirement already satisfied: smart-open>=1.8.1 in /users/mramirez/venv/lib/python3.7/site-packages (from gensim) (1.8.4)
Requirement already satisfied: numpy>=1.11.3 in /users/mramirez/venv/lib/python3.7/site-packages (from gensim) (1.17.2)
Requirement already satisfied: gitpython>=2.1.0 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (3.0.3)
Requirement already satisfied: python-dateutil in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (2.8.0)
Requirement already satisfied: sqlparse in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (0.3.0)
Requirement already satisfied: databricks-cli>=0.8.7 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (0.9.0)
Requirement already satisfied: gorilla in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (0.3.0)
Requirement already satisfied: querystring-parser in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (1.2.4)
Requirement already satisfied: sqlalchemy in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (1.3.10)
Requirement already satisfied: docker>=4.0.0 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (4.1.0)
Requirement already satisfied: pyyaml in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (5.1.2)
Requirement already satisfied: protobuf>=3.6.0 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (3.10.0)
Requirement already satisfied: cloudpickle in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (1.2.2)
Requirement already satisfied: Flask in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (1.1.1)
Requirement already satisfied: simplejson in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (3.16.0)
Requirement already satisfied: entrypoints in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (0.3)
Requirement already satisfied: gunicorn; platform_system != "Windows" in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (19.9.0)
Requirement already satisfied: click>=7.0 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (7.0)
Requirement already satisfied: pandas in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (0.25.1)
Requirement already satisfied: alembic in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (1.2.1)
Requirement already satisfied: requests>=2.17.3 in /users/mramirez/venv/lib/python3.7/site-packages (from mlflow) (2.22.0)
Requirement already satisfied: pyparsing>=2.1.4 in /users/mramirez/venv/lib/python3.7/site-packages (from pydot) (2.4.2)
Requirement already satisfied: boto>=2.32 in /users/mramirez/venv/lib/python3.7/site-packages (from smart-open>=1.8.1->gensim) (2.49.0)
Requirement already satisfied: boto3 in /users/mramirez/venv/lib/python3.7/site-packages (from smart-open>=1.8.1->gensim) (1.9.248)
Requirement already satisfied: gitdb2>=2.0.0 in /users/mramirez/venv/lib/python3.7/site-packages (from gitpython>=2.1.0->mlflow) (2.0.6)
Requirement already satisfied: configparser>=0.3.5 in /users/mramirez/venv/lib/python3.7/site-packages (from databricks-cli>=0.8.7->mlflow) (4.0.2)
Requirement already satisfied: tabulate>=0.7.7 in /users/mramirez/venv/lib/python3.7/site-packages (from databricks-cli>=0.8.7->mlflow) (0.8.5)
Requirement already satisfied: websocket-client>=0.32.0 in /users/mramirez/venv/lib/python3.7/site-packages (from docker>=4.0.0->mlflow) (0.56.0)
Requirement already satisfied: setuptools in /users/mramirez/venv/lib/python3.7/site-packages (from protobuf>=3.6.0->mlflow) (41.4.0)
Requirement already satisfied: itsdangerous>=0.24 in /users/mramirez/venv/lib/python3.7/site-packages (from Flask->mlflow) (1.1.0)
Requirement already satisfied: Werkzeug>=0.15 in /users/mramirez/venv/lib/python3.7/site-packages (from Flask->mlflow) (0.16.0)
Requirement already satisfied: Jinja2>=2.10.1 in /users/mramirez/venv/lib/python3.7/site-packages (from Flask->mlflow) (2.10.3)
Requirement already satisfied: pytz>=2017.2 in /users/mramirez/venv/lib/python3.7/site-packages (from pandas->mlflow) (2019.3)
Requirement already satisfied: python-editor>=0.3 in /users/mramirez/venv/lib/python3.7/site-packages (from alembic->mlflow) (1.0.4)
Requirement already satisfied: Mako in /users/mramirez/venv/lib/python3.7/site-packages (from alembic->mlflow) (1.1.0)
Requirement already satisfied: certifi>=2017.4.17 in /users/mramirez/venv/lib/python3.7/site-packages (from requests>=2.17.3->mlflow) (2019.9.11)
Requirement already satisfied: idna<2.9,>=2.5 in /users/mramirez/venv/lib/python3.7/site-packages (from requests>=2.17.3->mlflow) (2.8)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /users/mramirez/venv/lib/python3.7/site-packages (from requests>=2.17.3->mlflow) (1.25.6)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /users/mramirez/venv/lib/python3.7/site-packages (from requests>=2.17.3->mlflow) (3.0.4)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /users/mramirez/venv/lib/python3.7/site-packages (from boto3->smart-open>=1.8.1->gensim) (0.9.4)
Requirement already satisfied: botocore<1.13.0,>=1.12.248 in /users/mramirez/venv/lib/python3.7/site-packages (from boto3->smart-open>=1.8.1->gensim) (1.12.248)
Requirement already satisfied: s3transfer<0.3.0,>=0.2.0 in /users/mramirez/venv/lib/python3.7/site-packages (from boto3->smart-open>=1.8.1->gensim) (0.2.1)
Requirement already satisfied: smmap2>=2.0.0 in /users/mramirez/venv/lib/python3.7/site-packages (from gitdb2>=2.0.0->gitpython>=2.1.0->mlflow) (2.0.5)
Requirement already satisfied: MarkupSafe>=0.23 in /users/mramirez/venv/lib/python3.7/site-packages (from Jinja2>=2.10.1->Flask->mlflow) (1.1.1)
Requirement already satisfied: docutils<0.16,>=0.10 in /users/mramirez/venv/lib/python3.7/site-packages (from botocore<1.13.0,>=1.12.248->boto3->smart-open>=1.8.1->gensim) (0.15.2)
Building wheels for collected packages: nltk
Building wheel for nltk (setup.py): started
Building wheel for nltk (setup.py): finished with status 'done'
Created wheel for nltk: filename=nltk-3.4.5-cp37-none-any.whl size=1449909 sha256=9fbe0ff2464b4c98a243f056b00650c710f731fb631466d35e13620d561a685b
Stored in directory: /users/mramirez/.cache/pip/wheels/96/86/f6/68ab24c23f207c0077381a5e3904b2815136b879538a24b483
Successfully built nltk
Installing collected packages: nltk
Successfully installed nltk-3.4.5
###Markdown
Download embeddings and dataset MNISTThe dataset we will use (MNIST) will be downloaded by Keras automatically the first time you use it. To save time, you can download it now by running the next cell.
###Code
df = tf.keras.datasets.mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
|
Data-Analytics-Projects-in-python-main/COVID19/notebooks/Exploratory_analysis_fancy_plot.ipynb | ###Markdown
Imports
###Code
import sys
import os
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
project_path = os.path.abspath(os.path.join('..'))
if project_path not in sys.path:
sys.path.append(f'{project_path}/src/visualizations/')
from covid_data_viz import CovidDataViz
###Output
_____no_output_____
###Markdown
Setup
###Code
mpl.rcParams['figure.figsize'] = (9, 5)
###Output
_____no_output_____
###Markdown
GoalMy goal is to visualize various aspects of the `COVID-19` pandemic. Data sourcesIn this project I use data from the following sources:- https://github.com/CSSEGISandData/COVID-19 - JHU CSSE COVID-19 Data.- https://datahub.io/JohnSnowLabs/country-and-continent-codes-list - country codes and continents. Data loading
###Code
cdv = CovidDataViz()
###Output
_____no_output_____
###Markdown
Fancy plotVisual for repo readme.
###Code
countries = ['Germany',
'France',
'Italy',
'Spain',
'United Kingdom',
'Russia',
'India',
'Brazil',
'US',
'Poland',
'Mexico']
width = 1600
height = width / 2
dpi = 200
period = 7
step = 30
label_size = 12
n_clabels = 6
countries = sorted(countries)
plot_df = cdv.data['Confirmed chg'][countries]
plot_df = plot_df.rename(columns={'United Kingdom': 'UK'})
countries = plot_df.columns.to_list()
plot_df = plot_df.rolling(period)
plot_df = plot_df.mean()
plot_df = plot_df.dropna()
plot_df = plot_df.to_numpy()
plot_df = plot_df.astype(float)
plot_df = plot_df.transpose()
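# Note: the square root below compresses the dynamic range so countries with fewer
# cases remain visible; the colorbar tick labels are squared back to raw counts further down.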
plot_df = np.sqrt(plot_df)
xticks = range(plot_df.shape[1])[::step]
xlabels = list(cdv.data['Confirmed chg']['Date'])[period:]
xlabels = [x.strftime(format='%Y-%m') for x in xlabels]
# xlabels = [x.date() for x in xlabels]
xlabels = xlabels[::step]
yticks = range(len(countries))
ylabels = countries
cticks = np.round(np.linspace(0, np.max(plot_df), 6), -1)
cticks = cticks.astype(int)  # plain int: np.int is deprecated/removed in newer NumPy
clabels = np.power(cticks, 2)
cticks = sorted(set(cticks))
clabels = np.power(cticks, 2)
clabels = [int((round(x, -3))/1000) for x in clabels]
clabels = [str(x)+'k' for x in clabels]
# clabels = list(map(str, clabels))
plt.figure(figsize=(width / dpi, height / dpi))
plt.imshow(plot_df, aspect='auto', interpolation='nearest')
plt.set_cmap('hot')
plt.yticks(ticks=yticks,
labels=ylabels,
fontsize=label_size,
verticalalignment='center')
plt.xticks(ticks=xticks,
labels=xlabels,
rotation=45,
fontsize=label_size,
horizontalalignment='center')
cbar = plt.colorbar()
cbar.set_ticks(cticks)
cbar.set_ticklabels(clabels)
cbar.ax.tick_params(labelsize=label_size)
plt.title('New COVID-19 cases', fontsize=20)
plt.tight_layout()
plt.savefig('../img/covid_tiles.png')
plt.show()
###Output
_____no_output_____ |
Data Science and Machine Learning Bootcamp - JP/02.Python for Data Analysis - NumPy/02-Numpy Indexing and Selection.ipynb | ###Markdown
___ ___ NumPy Indexing and SelectionIn this lecture we will discuss how to select elements or groups of elements from an array.
###Code
import numpy as np
#Creating sample array
arr = np.arange(0,11)
#Show
arr
###Output
_____no_output_____
###Markdown
Bracket Indexing and SelectionThe simplest way to pick one or some elements of an array looks very similar to python lists:
###Code
#Get a value at an index
arr[8]
#Get values in a range
arr[1:5]
#Get values in a range
arr[0:5]
###Output
_____no_output_____
###Markdown
BroadcastingNumPy arrays differ from normal Python lists because of their ability to broadcast:
###Code
#Setting a value with index range (Broadcasting)
arr[0:5]=100
#Show
arr
# Reset array, we'll see why I had to reset in a moment
arr = np.arange(0,11)
#Show
arr
#Important notes on Slices
slice_of_arr = arr[0:6]
#Show slice
slice_of_arr
#Change Slice
slice_of_arr[:]=99
#Show Slice again
slice_of_arr
###Output
_____no_output_____
###Markdown
Now note the changes also occur in our original array!
###Code
arr
###Output
_____no_output_____
###Markdown
Data is not copied, it's a view of the original array! This avoids memory problems!
###Code
#To get a copy, need to be explicit
arr_copy = arr.copy()
arr_copy
###Output
_____no_output_____
###Markdown
Indexing a 2D array (matrices)The general format is **arr_2d[row][col]** or **arr_2d[row,col]**. I recommend usually using the comma notation for clarity.
###Code
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
#Show
arr_2d
#Indexing row
arr_2d[1]
# Format is arr_2d[row][col] or arr_2d[row,col]
# Getting individual element value
arr_2d[1][0]
# Getting individual element value
arr_2d[1,0]
# 2D array slicing
#Shape (2,2) from top right corner
arr_2d[:2,1:]
#Shape bottom row
arr_2d[2]
#Shape bottom row
arr_2d[2,:]
###Output
_____no_output_____
###Markdown
Fancy IndexingFancy indexing allows you to select entire rows or columns out of order. To show this, let's quickly build out a numpy array:
###Code
#Set up matrix
arr2d = np.zeros((10,10))
#Length of array
arr_length = arr2d.shape[1]
#Set up array
for i in range(arr_length):
arr2d[i] = i
arr2d
###Output
_____no_output_____
###Markdown
Fancy indexing allows the following
###Code
arr2d[[2,4,6,8]]
#Allows in any order
arr2d[[6,4,2,7]]
###Output
_____no_output_____
###Markdown
More Indexing HelpIndexing a 2D matrix can be a bit confusing at first, especially when you start to add in step size (see the short example below). Try a Google image search for NumPy indexing to find useful diagrams. SelectionLet's briefly go over how to use brackets for selection based off of comparison operators.
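Since the step-size case is usually the confusing part, here is a small sketch (using the `arr2d` array built above) before moving on to selection:

```python
# every other row, and every other column starting from column 1
arr2d[::2, 1::2]

# rows 2..6 with step 2, columns 3..5
arr2d[2:7:2, 3:6]
```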
###Code
arr = np.arange(1,11)
arr
arr > 4
bool_arr = arr>4
bool_arr
arr[bool_arr]
arr[arr>2]
x = 2
arr[arr>x]
###Output
_____no_output_____ |
scripts/d21-en/pytorch/chapter_deep-learning-computation/use-gpu.ipynb | ###Markdown
GPUs:label:`sec_use_gpu`In :numref:`tab_intro_decade`, we discussed the rapid growth of computation over the past two decades. In a nutshell, GPU performance has increased by a factor of 1000 every decade since 2000. This offers great opportunities but it also suggests a significant need to provide such performance. In this section, we begin to discuss how to harness this computational performance for your research. First by using single GPUs and at a later point, how to use multiple GPUs and multiple servers (with multiple GPUs).Specifically, we will discuss how to use a single NVIDIA GPU for calculations. First, make sure you have at least one NVIDIA GPU installed. Then, download the [NVIDIA driver and CUDA](https://developer.nvidia.com/cuda-downloads) and follow the prompts to set the appropriate path. Once these preparations are complete, the `nvidia-smi` command can be used to (**view the graphics card information**).
###Code
!nvidia-smi
###Output
Fri Apr 23 07:57:29 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.67 Driver Version: 418.67 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
###Markdown
In PyTorch, every array has a device; we often refer to it as a context. So far, by default, all variables and associated computation have been assigned to the CPU. Typically, other contexts might be various GPUs. Things can get even hairier when we deploy jobs across multiple servers. By assigning arrays to contexts intelligently, we can minimize the time spent transferring data between devices. For example, when training neural networks on a server with a GPU, we typically prefer for the model's parameters to live on the GPU. Next, we need to confirm that the GPU version of PyTorch is installed. If a CPU version of PyTorch is already installed, we need to uninstall it first. For example, use the `pip uninstall torch` command, then install the corresponding PyTorch version according to your CUDA version. Assuming you have CUDA 10.0 installed, you can install the PyTorch version that supports CUDA 10.0 via `pip install torch-cu100`. To run the programs in this section, you need at least two GPUs. Note that this might be extravagant for most desktop computers but it is easily available in the cloud, e.g., by using the AWS EC2 multi-GPU instances. Almost all other sections do *not* require multiple GPUs. Instead, this is simply to illustrate how data flow between different devices. [**Computing Devices**]We can specify devices, such as CPUs and GPUs, for storage and calculation. By default, tensors are created in the main memory and then use the CPU to calculate it. In PyTorch, the CPU and GPU can be indicated by `torch.device('cpu')` and `torch.cuda.device('cuda')`. It should be noted that the `cpu` device means all physical CPUs and memory. This means that PyTorch's calculations will try to use all CPU cores. However, a `gpu` device only represents one card and the corresponding memory. If there are multiple GPUs, we use `torch.cuda.device(f'cuda:{i}')` to represent the $i^\mathrm{th}$ GPU ($i$ starts from 0). Also, `gpu:0` and `gpu` are equivalent.
###Code
import torch
from torch import nn
torch.device('cpu'), torch.cuda.device('cuda'), torch.cuda.device('cuda:1')
###Output
_____no_output_____
###Markdown
We can (**query the number of available GPUs.**)
###Code
torch.cuda.device_count()
###Output
_____no_output_____
###Markdown
Now we [**define two convenient functions that allow us to run code even if the requested GPUs do not exist.**]
###Code
def try_gpu(i=0): #@save
"""Return gpu(i) if exists, otherwise return cpu()."""
if torch.cuda.device_count() >= i + 1:
return torch.device(f'cuda:{i}')
return torch.device('cpu')
def try_all_gpus(): #@save
"""Return all available GPUs, or [cpu(),] if no GPU exists."""
devices = [
torch.device(f'cuda:{i}') for i in range(torch.cuda.device_count())]
return devices if devices else [torch.device('cpu')]
try_gpu(), try_gpu(10), try_all_gpus()
###Output
_____no_output_____
###Markdown
Tensors and GPUsBy default, tensors are created on the CPU. We can [**query the device where the tensor is located.**]
###Code
x = torch.tensor([1, 2, 3])
x.device
###Output
_____no_output_____
###Markdown
It is important to note that whenever we want to operate on multiple terms, they need to be on the same device. For instance, if we sum two tensors, we need to make sure that both arguments live on the same device---otherwise the framework would not know where to store the result or even how to decide where to perform the computation. Storage on the GPUThere are several ways to [**store a tensor on the GPU.**] For example, we can specify a storage device when creating a tensor. Next, we create the tensor variable `X` on the first `gpu`. The tensor created on a GPU only consumes the memory of this GPU. We can use the `nvidia-smi` command to view GPU memory usage. In general, we need to make sure that we do not create data that exceed the GPU memory limit.
###Code
X = torch.ones(2, 3, device=try_gpu())
X
###Output
_____no_output_____
###Markdown
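As an aside, allocated GPU memory can also be checked from Python; a rough sketch (assuming at least one GPU is present), keeping in mind that `nvidia-smi` remains the authoritative view since it also counts other processes and cached memory:

```python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda:0')
    print(f'allocated on {device}: {torch.cuda.memory_allocated(device) / 1024**2:.2f} MiB')
```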
Assuming that you have at least two GPUs, the following code will (**create a random tensor on the second GPU.**)
###Code
Y = torch.rand(2, 3, device=try_gpu(1))
Y
###Output
_____no_output_____
###Markdown
Copying[**If we want to compute `X + Y`, we need to decide where to perform this operation.**] For instance, as shown in :numref:`fig_copyto`, we can transfer `X` to the second GPU and perform the operation there. *Do not* simply add `X` and `Y`, since this will result in an exception. The runtime engine would not know what to do: it cannot find data on the same device and it fails. Since `Y` lives on the second GPU, we need to move `X` there before we can add the two.:label:`fig_copyto`
###Code
Z = X.cuda(1)
print(X)
print(Z)
###Output
tensor([[1., 1., 1.],
[1., 1., 1.]], device='cuda:0')
tensor([[1., 1., 1.],
[1., 1., 1.]], device='cuda:1')
###Markdown
Now that [**the data are on the same GPU (both `Z` and `Y` are), we can add them up.**]
###Code
Y + Z
###Output
_____no_output_____
###Markdown
Imagine that your variable `Z` already lives on your second GPU. What happens if we still call `Z.cuda(1)`? It will return `Z` instead of making a copy and allocating new memory.
###Code
Z.cuda(1) is Z
###Output
_____no_output_____
###Markdown
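A related check: `Tensor.to` should behave the same way — if the tensor already has the requested device and dtype it returns the very same object, while `copy=True` forces a fresh copy:

```python
Z.to('cuda:1') is Z             # expected: True
Z.to('cuda:1', copy=True) is Z  # expected: False
```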
Side NotesPeople use GPUs to do machine learning because they expect them to be fast. But transferring variables between devices is slow. So we want you to be 100% certain that you want to do something slow before we let you do it. If the deep learning framework just did the copy automatically without crashing then you might not realize that you had written some slow code. Also, transferring data between devices (CPU, GPUs, and other machines) is something that is much slower than computation. It also makes parallelization a lot more difficult, since we have to wait for data to be sent (or rather to be received) before we can proceed with more operations. This is why copy operations should be taken with great care. As a rule of thumb, many small operations are much worse than one big operation. Moreover, several operations at a time are much better than many single operations interspersed in the code unless you know what you are doing. This is the case since such operations can block if one device has to wait for the other before it can do something else. It is a bit like ordering your coffee in a queue rather than pre-ordering it by phone and finding out that it is ready when you are. Last, when we print tensors or convert tensors to the NumPy format, if the data is not in the main memory, the framework will copy it to the main memory first, resulting in additional transmission overhead. Even worse, it is now subject to the dreaded global interpreter lock that makes everything wait for Python to complete. [**Neural Networks and GPUs**]Similarly, a neural network model can specify devices. The following code puts the model parameters on the GPU.
###Code
net = nn.Sequential(nn.Linear(3, 1))
net = net.to(device=try_gpu())
###Output
_____no_output_____
###Markdown
We will see many more examples of how to run models on GPUs in the following chapters, simply since they will become somewhat more computationally intensive. When the input is a tensor on the GPU, the model will calculate the result on the same GPU.
###Code
net(X)
###Output
_____no_output_____
###Markdown
Let us (**confirm that the model parameters are stored on the same GPU.**)
###Code
net[0].weight.data.device
###Output
_____no_output_____ |
IsingHamitonian_GA.ipynb | ###Markdown
Ising model Hamiltonian Step 1: Generating the weights (summations of pseudo-spin products along the stacking chain) for the interaction coefficients J0, J1, J2, J3, K, K', L, where K' = 1/2(coefficient[2,3] + coefficient[1,3]). List the pseudo-spins for a polytype: 1 for a face-sharing octahedra pair, -1 for a corner-sharing octahedra pair.
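In equation form, the expansion evaluated by the code below can be sketched as (the mapping of the named coefficients to specific offset sets is an assumption based on the order used in `getAll`):

$$E(\{\sigma_i\}) \;\approx\; \sum_{\alpha} J_\alpha\, C_\alpha, \qquad C_\alpha \;=\; \sum_{i=1}^{N} \sigma_i \prod_{d \in \alpha,\; d>0} \sigma_{i+d},$$

where $\sigma_i = \pm 1$, the site index is taken modulo $N$ (periodic stacking), and $\alpha$ runs over the offset sets $\{0\}, \{1\}, \{2\}, \{3\}, \{1,2\}, \{2,3\}, \{1,3\}, \{1,2,3\}$, corresponding to $J_0, J_1, J_2, J_3$ and the higher-order terms $K$, $K'$ (defined above), and $L$.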
###Code
spin2H = [1,1]
spin3C = [-1,-1,-1]
spin4H = [1,-1,1,-1]
spin6H = [1,-1,-1,1,-1,-1]
spin9R = [1,1,-1,1,1,-1,1,1,-1]
spin12R = [1,1,-1,-1,1,1,-1,-1,1,1,-1,-1]
spin12H = [1,-1,-1,-1,-1,-1, 1,-1,-1,-1,-1,-1]
spin2C9H11 = [1,1,1,1,1,1,1,1,-1,-1,-1]
spin2H9C11 = [1,1,-1,-1,-1,-1,-1,-1,-1,-1,-1]
spin2C9H18 = [1,1,1,1,1,1,1,-1,-1,1,1,1,1,1,1,1,-1,-1]
spin2H9C18 = [1,-1,-1,-1,-1,-1,-1,-1,-1,1,-1,-1,-1,-1,-1,-1,-1,-1]
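# Helper functions (comments added): coefficients(spin, nnn) prints the sum over sites i of
# spin[i] * spin[(i + nnn) % N] for a single offset, while generalCoefficients(spin, nnn)
# prints the sum over sites of the product of spins at all offsets in the list nnn
# (with nnn == [0] special-cased to the plain sum of the spins).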
def coefficients(spin,nnn):
length = len(spin)
temp = 0
for i in range(length):
j = (i+nnn) % length
temp += spin[i] * spin[j]
print(temp)
def generalCoefficients(spin,nnn):
length = len(spin)
temp = 0
for i in range(length):
tempi = spin[i]
for j in range(len(nnn)):
if nnn == [0]:
tempi = tempi
else:
k = (i+nnn[j]) % length
tempi = tempi * spin[k]
temp += tempi
print(temp)
def getAll(spin):
generalCoefficients(spin,[0])
generalCoefficients(spin,[1])
generalCoefficients(spin,[2])
generalCoefficients(spin,[3])
generalCoefficients(spin,[1,2])
generalCoefficients(spin,[2,3])
generalCoefficients(spin,[1,3])
generalCoefficients(spin,[1,2,3])
getAll(spin2H)
#getAll(spin3C)
#getAll(spin4H)
#getAll(spin6H)
#getAll(spin9R)
#getAll(spin12R)
#getAll(spin12H)
#getAll(spin2H9C11)
#getAll(spin2C9H11)
#getAll(spin2H9C18)
#getAll(spin2C9H18)
#the K' = 1/2([2,3]+[1,3])
###Output
2
2
2
2
2
2
2
2
###Markdown
Step 2: with the DFT-calculated total energies for several polytypes, the values of the interaction coefficients are numerically fitted.
###Code
import numpy as np
MAT = np.array([[2,2,2,2,2,2,2,2], #2H
[3,-3,3,3,3,-3,-3,3],#3C
[4,0,-4,4,-4,0,0,4],#4H
[6,-2,-2,-2,6,6,-2,-2],#6H
[9,3,-3,-3,9,-9,3,-3],#9R
[12,0,0,-12,0,0,0,12],#12R
[12,-8,4,4,4,0,0,-4],#12H
])
ENG = [-24.651502, -36.707196, -49.048539, -73.554815, -110.63025, -147.36321, -146.929] #CsPbI3
#ENG = [-28.057165, -42.207545, -56.101650, -84.211709, -126.20073, -168.32148, -168.60493] #CsPbBr3
print(MAT)
print(ENG)
J, residuals, rank, s = np.linalg.lstsq(MAT,ENG,rcond=None)
#J = np.linalg.solve(MAT,ENG)
print(J)
#print(residuals)
###Output
[[ 2 2 2 2 2 2 2 2]
[ 3 -3 3 3 3 -3 -3 3]
[ 4 0 -4 4 -4 0 0 4]
[ 6 -2 -2 -2 6 6 -2 -2]
[ 9 3 -3 -3 9 -9 3 -3]
[ 12 0 0 -12 0 0 0 12]
[ 12 -8 4 4 4 0 0 -4]]
[-24.651502, -36.707196, -49.048539, -73.554815, -110.63025, -147.36321, -146.929]
[-1.12432698e+01 2.01506745e+00 1.02438188e+00 4.41468750e-03
-1.03368526e+00 1.16543750e-03 -2.06124239e+00 -1.03258301e+00]
###Markdown
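A quick sanity check of the fit (this assumes `MAT`, `ENG` and `J` from the cell above): reconstruct the total energies from the fitted coefficients and compare them with the DFT values.

```python
import numpy as np

labels = ['2H', '3C', '4H', '6H', '9R', '12R', '12H']  # same order as the rows of MAT
pred = MAT @ J
for name, e_dft, e_fit in zip(labels, ENG, pred):
    print(f'{name}: DFT={e_dft:.4f}  model={e_fit:.4f}  diff={e_fit - e_dft:.4f}')
```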
Genetic algorithm with the model Hamiltonian as the optimisation function
###Code
pip install func-timeout
###Output
Collecting func-timeout
Downloading func_timeout-4.3.5.tar.gz (44 kB)
     |████████████████████████████████| 44 kB 723 kB/s eta 0:00:01
Building wheels for collected packages: func-timeout
  Building wheel for func-timeout (setup.py) ... done
  Created wheel for func-timeout: filename=func_timeout-4.3.5-py3-none-any.whl size=15077 sha256=ce395887227d0057a77867b8b14ab960cae5575e11abf6df6071fa30fa80d5a4
Stored in directory: /Users/zhenzhu/Library/Caches/pip/wheels/a8/92/ca/5bbab358275e310af23b73fc32ebf37d6a7a08c87c8d2cdbc1
Successfully built func-timeout
Installing collected packages: func-timeout
Successfully installed func-timeout-4.3.5
Note: you may need to restart the kernel to use updated packages.
###Markdown
The following code contains two parts: the main genetic algorithm and the optimisation function. Important parameters: `algorithm_param = {'max_num_iteration': 100, 'population_size': 300, 'mutation_probability': 0.6, 'elit_ratio': 0.2, 'crossover_probability': 0.1, 'parents_portion': 0.6, 'crossover_type': 'uniform', 'max_iteration_without_improv': None}` in the genetic algorithm function tunes the performance of the genetic algorithm; `layer = 12` in the optimisation function defines the layer number and can be any integer of interest; `return toten` in the optimisation function finds the low-energy structures; `return -toten` in the optimisation function finds the symmetry-forbidden sequences.
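For reference, a minimal sketch of what such an optimisation function could look like, assuming `J` holds the eight fitted coefficients in the column order of `MAT`; this is an illustration based on Steps 1 and 2 above, not necessarily the exact function used for the results below.

```python
import numpy as np

def model_energy(X):
    """Hypothetical GA objective: fitted model energy of a +/-1 stacking sequence."""
    X = np.asarray(X)
    n = len(X)

    def corr(offsets):
        # sum over sites of spin products at the given periodic offsets
        # (offset 0 is skipped in the product to reproduce the plain sum used in Step 1)
        total = 0.0
        for i in range(n):
            term = X[i]
            for d in offsets:
                if d != 0:
                    term *= X[(i + d) % n]
            total += term
        return total

    # same ordering as getAll() / the columns of MAT
    terms = [corr([0]), corr([1]), corr([2]), corr([3]),
             corr([1, 2]), corr([2, 3]), corr([1, 3]), corr([1, 2, 3])]
    return float(np.dot(J, terms))  # return the negative instead to hunt forbidden sequences
```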
###Code
###############################################################################
############################GA GA GA GA GA #############################
###############################################################################
import numpy as np
import sys
import time
from func_timeout import func_timeout, FunctionTimedOut
import matplotlib.pyplot as plt
import timeit
class geneticalgorithm():
''' Genetic Algorithm (Elitist version) for Python
An implementation of elitist genetic algorithm for solving problems with
continuous, integers, or mixed variables.
Implementation and output:
methods:
run(): implements the genetic algorithm
outputs:
output_dict: a dictionary including the best set of variables
found and the value of the given function associated to it.
{'variable': , 'function': }
report: a list including the record of the progress of the
algorithm over iterations
'''
#############################################################
def __init__(self, function, dimension, variable_type='bool', \
variable_boundaries=None,\
variable_type_mixed=None, \
function_timeout=10,\
algorithm_parameters={'max_num_iteration': None,\
'population_size':100,\
'mutation_probability':0.1,\
'elit_ratio': 0.01,\
'crossover_probability': 0.5,\
'parents_portion': 0.3,\
'crossover_type':'uniform',\
'max_iteration_without_improv':None}):
'''
@param function <Callable> - the given objective function to be minimized
NOTE: This implementation minimizes the given objective function.
(For maximization multiply function by a negative sign: the absolute
value of the output would be the actual objective function)
@param dimension <integer> - the number of decision variables
@param variable_type <string> - 'bool' if all variables are Boolean;
'int' if all variables are integer; and 'real' if all variables are
real value or continuous (for mixed type see @param variable_type_mixed)
@param variable_boundaries <numpy array/None> - Default None; leave it
None if variable_type is 'bool'; otherwise provide an array of tuples
of length two as boundaries for each variable;
the length of the array must be equal dimension. For example,
        np.array([[0,100],[0,200]]) determines lower boundary 0 and upper boundary 100 for first
and upper boundary 200 for second variable where dimension is 2.
@param variable_type_mixed <numpy array/None> - Default None; leave it
None if all variables have the same type; otherwise this can be used to
specify the type of each variable separately. For example if the first
variable is integer but the second one is real the input is:
np.array(['int'],['real']). NOTE: it does not accept 'bool'. If variable
type is Boolean use 'int' and provide a boundary as [0,1]
in variable_boundaries. Also if variable_type_mixed is applied,
variable_boundaries has to be defined.
@param function_timeout <float> - if the given function does not provide
output before function_timeout (unit is seconds) the algorithm raise error.
For example, when there is an infinite loop in the given function.
@param algorithm_parameters:
        @ max_num_iteration <int> - stopping criterion of the genetic algorithm (GA)
@ population_size <int>
@ mutation_probability <float in [0,1]>
        @ elit_ratio <float in [0,1]>
@ crossover_probability <float in [0,1]>
@ parents_portion <float in [0,1]>
@ crossover_type <string> - Default is 'uniform'; 'one_point' or
'two_point' are other options
@ max_iteration_without_improv <int> - maximum number of
successive iterations without improvement. If None it is ineffective
for more details and examples of implementation please visit:
https://github.com/rmsolgi/geneticalgorithm
'''
self.__name__=geneticalgorithm
#############################################################
# input function
assert (callable(function)),"function must be callable"
self.f=function
#############################################################
#dimension
self.dim=int(dimension)
#############################################################
# input variable type
assert(variable_type=='bool' or variable_type=='int' or\
variable_type=='real'), \
"\n variable_type must be 'bool', 'int', or 'real'"
#############################################################
# input variables' type (MIXED)
if variable_type_mixed is None:
if variable_type=='real':
self.var_type=np.array([['real']]*self.dim)
else:
self.var_type=np.array([['int']]*self.dim)
else:
assert (type(variable_type_mixed).__module__=='numpy'),\
"\n variable_type must be numpy array"
assert (len(variable_type_mixed) == self.dim), \
"\n variable_type must have a length equal dimension."
for i in variable_type_mixed:
assert (i=='real' or i=='int'),\
"\n variable_type_mixed is either 'int' or 'real' "+\
"ex:['int','real','real']"+\
"\n for 'boolean' use 'int' and specify boundary as [0,1]"
self.var_type=variable_type_mixed
#############################################################
# input variables' boundaries
if variable_type!='bool' or type(variable_type_mixed).__module__=='numpy':
assert (type(variable_boundaries).__module__=='numpy'),\
"\n variable_boundaries must be numpy array"
assert (len(variable_boundaries)==self.dim),\
"\n variable_boundaries must have a length equal dimension"
for i in variable_boundaries:
assert (len(i) == 2), \
"\n boundary for each variable must be a tuple of length two."
assert(i[0]<=i[1]),\
"\n lower_boundaries must be smaller than upper_boundaries [lower,upper]"
self.var_bound=variable_boundaries
else:
self.var_bound=np.array([[0,1]]*self.dim)
#############################################################
#Timeout
self.funtimeout=float(function_timeout)
#############################################################
# input algorithm's parameters
self.param=algorithm_parameters
self.pop_s=int(self.param['population_size'])
assert (self.param['parents_portion']<=1\
and self.param['parents_portion']>=0),\
"parents_portion must be in range [0,1]"
self.par_s=int(self.param['parents_portion']*self.pop_s)
trl=self.pop_s-self.par_s
if trl % 2 != 0:
self.par_s+=1
self.prob_mut=self.param['mutation_probability']
assert (self.prob_mut<=1 and self.prob_mut>=0), \
"mutation_probability must be in range [0,1]"
self.prob_cross=self.param['crossover_probability']
assert (self.prob_cross<=1 and self.prob_cross>=0), \
"mutation_probability must be in range [0,1]"
assert (self.param['elit_ratio']<=1 and self.param['elit_ratio']>=0),\
"elit_ratio must be in range [0,1]"
trl=self.pop_s*self.param['elit_ratio']
if trl<1 and self.param['elit_ratio']>0:
self.num_elit=1
else:
self.num_elit=int(trl)
assert(self.par_s>=self.num_elit), \
"\n number of parents must be greater than number of elits"
if self.param['max_num_iteration']==None:
self.iterate=0
for i in range (0,self.dim):
if self.var_type[i]=='int':
self.iterate+=(self.var_bound[i][1]-self.var_bound[i][0])*self.dim*(100/self.pop_s)
else:
self.iterate+=(self.var_bound[i][1]-self.var_bound[i][0])*50*(100/self.pop_s)
self.iterate=int(self.iterate)
if (self.iterate*self.pop_s)>10000000:
self.iterate=10000000/self.pop_s
else:
self.iterate=int(self.param['max_num_iteration'])
self.c_type=self.param['crossover_type']
assert (self.c_type=='uniform' or self.c_type=='one_point' or\
self.c_type=='two_point'),\
"\n crossover_type must 'uniform', 'one_point', or 'two_point' Enter string"
self.stop_mniwi=False
if self.param['max_iteration_without_improv']==None:
self.mniwi=self.iterate+1
else:
self.mniwi=int(self.param['max_iteration_without_improv'])
#############################################################
def run(self):
#############################################################
# Initial Population
self.integers=np.where(self.var_type=='int')
self.reals=np.where(self.var_type=='real')
pop=np.array([np.ones(self.dim+1)]*self.pop_s)
solo=np.ones(self.dim+1)
var=np.ones(self.dim)
for p in range(0,self.pop_s):
for i in self.integers[0]:
#var[i]=np.random.randint(self.var_bound[i][0],\
#self.var_bound[i][1]+1)
s = [-1, 1]
var[i]=np.random.choice(s)
solo[i]=var[i].copy()
for i in self.reals[0]:
var[i]=self.var_bound[i][0]+np.random.random()*\
(self.var_bound[i][1]-self.var_bound[i][0])
solo[i]=var[i].copy()
obj=self.sim(var)
solo[self.dim]=obj
pop[p]=solo.copy()
#print(pop[p])
#############################################################
#############################################################
# Report
self.report=[]
self.test_obj=obj
self.best_variable=var.copy()
self.best_function=obj
##############################################################
t=1
counter=0
while t<=self.iterate:
self.progress(t,self.iterate,status="GA is running...")
#############################################################
#Sort
pop = pop[pop[:,self.dim].argsort()]
if pop[0,self.dim]<self.best_function:
counter=0
self.best_function=pop[0,self.dim].copy()
self.best_variable=pop[0,: self.dim].copy()
else:
counter+=1
#############################################################
# Report
self.report.append(pop[0,self.dim])
##############################################################
# Normalizing objective function
normobj=np.ones(self.pop_s)
minobj=pop[0,self.dim]
if minobj<0:
normobj=pop[:,self.dim]+abs(minobj)
else:
normobj=pop[:,self.dim].copy()
maxnorm=np.amax(normobj)
normobj=maxnorm-normobj+1
#############################################################
# Calculate probability
sum_normobj=np.sum(normobj)
prob=np.ones(self.pop_s)
prob=normobj/sum_normobj
cumprob=np.cumsum(prob)
#print(cumprob)
#############################################################
# Select parents
par=np.array([np.ones(self.dim+1)]*self.par_s)
for k in range(0,self.num_elit):
par[k]=pop[k].copy()
for k in range(self.num_elit,self.par_s):
index=np.searchsorted(cumprob,np.random.random())
par[k]=pop[index].copy()
ef_par_list=np.array([False]*self.par_s)
par_count=0
while par_count==0:
for k in range(0,self.par_s):
if np.random.random()<=self.prob_cross:
ef_par_list[k]=True
par_count+=1
ef_par=par[ef_par_list].copy()
#############################################################
#New generation
pop=np.array([np.ones(self.dim+1)]*self.pop_s)
for k in range(0,self.par_s):
pop[k]=par[k].copy()
for k in range(self.par_s, self.pop_s, 2):
r1=np.random.randint(0, par_count)
r2=np.random.randint(0, par_count)
pvar1=ef_par[r1,: self.dim].copy()
pvar2=ef_par[r2,: self.dim].copy()
ch=self.cross(pvar1,pvar2,self.c_type)
ch1=ch[0].copy()
ch2=ch[1].copy()
ch1=self.mut(ch1)
ch2=self.mutmidle(ch2,pvar1,pvar2)
solo[: self.dim]=ch1.copy()
obj=self.sim(ch1)
solo[self.dim]=obj
pop[k]=solo.copy()
solo[: self.dim]=ch2.copy()
obj=self.sim(ch2)
solo[self.dim]=obj
pop[k+1]=solo.copy()
#############################################################
t+=1
if counter > self.mniwi:
pop = pop[pop[:,self.dim].argsort()]
if pop[0,self.dim]>=self.best_function:
t=self.iterate
self.progress(t,self.iterate,status="GA is running...")
time.sleep(2)
t+=1
self.stop_mniwi=True
#############################################################
#Sort
pop = pop[pop[:,self.dim].argsort()]
if pop[0,self.dim]<self.best_function:
self.best_function=pop[0,self.dim].copy()
self.best_variable=pop[0,: self.dim].copy()
#############################################################
# Report
self.report.append(pop[0,self.dim])
self.output_dict={'variable': self.best_variable, 'function':\
self.best_function}
show=' '*100
sys.stdout.write('\r%s' % (show))
sys.stdout.write('\r The best solution found:\n %s' % (self.best_variable))
sys.stdout.write('\n\n Objective function:\n %s\n' % (self.best_function))
sys.stdout.flush()
re=np.array(self.report)
#plt.plot(re, color="crimson", linewidth=3, marker='*', linestyle='--', markersize = 4, markeredgecolor = 'none',alpha=1)
#plt.plot(re, color="crimson", marker='o', markersize = 3, markeredgecolor = 'none',alpha=0.5)
plt.plot(re, color="crimson", linewidth=3,alpha=1)
plt.xlabel('Iteration', fontname ='Arial',fontsize=11)
plt.ylabel('Objective function', fontname ='Arial',fontsize=11)
#plt.ylim([-147.95,-147.2])
#plt.ylim([97,116])
#plt.ylim(self.best_function-1, self.best_function+1)
plt.title('Genetic Algorithm', fontname ='Arial',fontsize=8)
plt.xticks(fontname = 'Arial',fontsize=11)
plt.yticks(fontname = 'Arial',fontsize=11)
figure =plt.gcf()
figure.set_size_inches(6,4)
#plt.savefig('/Users/lizhenzhu/Downloads/'+'ga_h1.png', dpi = 600, transparent=True)
plt.show()
if self.stop_mniwi==True:
sys.stdout.write('\nWarning: GA is terminated due to the'+\
' maximum number of iterations without improvement being reached.')
##############################################################################
##############################################################################
def cross(self,x,y,c_type):
ofs1=x.copy()
ofs2=y.copy()
if c_type=='one_point':
ran=np.random.randint(0,self.dim)
for i in range(0,ran):
ofs1[i]=y[i].copy()
ofs2[i]=x[i].copy()
if c_type=='two_point':
ran1=np.random.randint(0,self.dim)
ran2=np.random.randint(ran1,self.dim)
for i in range(ran1,ran2):
ofs1[i]=y[i].copy()
ofs2[i]=x[i].copy()
if c_type=='uniform':
for i in range(0, self.dim):
ran=np.random.random()
if ran <0.5:
ofs1[i]=y[i].copy()
ofs2[i]=x[i].copy()
return np.array([ofs1,ofs2])
###############################################################################
def mut(self,x):
for i in self.integers[0]:
ran=np.random.random()
if ran < self.prob_mut:
s = [-1, 1]
x[i]=np.random.choice(s)
for i in self.reals[0]:
ran=np.random.random()
if ran < self.prob_mut:
x[i]=self.var_bound[i][0]+np.random.random()*\
(self.var_bound[i][1]-self.var_bound[i][0])
return x
###############################################################################
def mutmidle(self, x, p1, p2):
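        # Note: with the +/-1 spin encoding used in this notebook, all three branches below resample the integer gene from {-1, 1}.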
for i in self.integers[0]:
ran=np.random.random()
if ran < self.prob_mut:
if p1[i]<p2[i]:
s = [-1, 1]
x[i]=np.random.choice(s)
elif p1[i]>p2[i]:
s = [-1, 1]
x[i]=np.random.choice(s)
else:
s = [-1, 1]
x[i]=np.random.choice(s)
for i in self.reals[0]:
ran=np.random.random()
if ran < self.prob_mut:
if p1[i]<p2[i]:
x[i]=p1[i]+np.random.random()*(p2[i]-p1[i])
elif p1[i]>p2[i]:
x[i]=p2[i]+np.random.random()*(p1[i]-p2[i])
else:
x[i]=self.var_bound[i][0]+np.random.random()*\
(self.var_bound[i][1]-self.var_bound[i][0])
return x
###############################################################################
def sim(self,X):
self.temp=X.copy()
obj=self.f(self.temp)
return obj
###############################################################################
def progress(self, count, total, status=''):
bar_len = 50
filled_len = int(round(bar_len * count / float(total)))
percents = round(100.0 * count / float(total), 1)
bar = '|' * filled_len + '_' * (bar_len - filled_len)
sys.stdout.write('\r%s %s%s %s' % (bar, percents, '%', status))
sys.stdout.flush()
###############################################################################
###############################################################################
###############################################################################
############################Optimisation function #############################
###############################################################################
import numpy as np
array_co =[]
layer = 12 # number of layers of interest; this can be changed to any integer
def coefficients(spin,nnn):
length = len(spin)
temp = 0
for i in range(length):
j = (i+nnn) % length
temp += spin[i] * spin[j]
#print(temp)
def generalCoefficients(spin,nnn):
length = len(spin)
temp = 0
for i in range(length):
tempi = spin[i]
for j in range(len(nnn)):
if nnn == [0]:
tempi = tempi
else:
k = (i+nnn[j]) % length
tempi = tempi * spin[k]
temp += tempi
#print(temp)
array_co.append(str(temp)+' ')
#print(array_co)
def getAll(spin):
generalCoefficients(spin,[0])
generalCoefficients(spin,[1])
generalCoefficients(spin,[2])
generalCoefficients(spin,[3])
generalCoefficients(spin,[1,2])
generalCoefficients(spin,[2,3])
generalCoefficients(spin,[1,3])
generalCoefficients(spin,[1,2,3])
def f(spin):
#print (spin[0] == 0)
#if (spin[0] != 0) & (spin[1] != 0) & (spin[2] != 0) & (spin[3] != 0) & (spin[4] != 0) & (spin[5] != 0) & (spin[6] != 0) & (spin[7] != 0) & (spin[8] != 0) & (spin[9] != 0) & (spin[10] != 0) & (spin[11] != 0):
getAll(spin)
#print(array_co)
co1 = layer*(-11.2432698)
co2 = float(array_co[0])* (2.01506745)
co3 = float(array_co[1])* (1.02438188)
co4 = float(array_co[2])* (0.004414688)
co5 = float(array_co[3])* (-1.03368526)
co6 = float(array_co[4]) * (0.001165438)
co7 = (1/2)*(float(array_co[6])+float(array_co[5])) * (-2.06124239)
co8 = float(array_co[7])* (-1.03258301)
#toten = co1 + co2 + co3 + co4 + co5 + co6+co7+co8
del array_co[0]
del array_co[0]
del array_co[0]
del array_co[0]
del array_co[0]
del array_co[0]
del array_co[0]
del array_co[0]
#print(toten)
toten = (co1 + co2 + co3 + co4 + co5 + co6+co7+co8)
return toten #finding the low energy structures
#return -toten #finding the symmetry forbidden sequences
varbound=np.array([[-1,1]]*layer)
algorithm_param = {'max_num_iteration': 100,\
'population_size':300,\
'mutation_probability':0.6,\
'elit_ratio': 0.2,\
'crossover_probability': 0.1,\
'parents_portion': 0.6,\
'crossover_type':'uniform',\
'max_iteration_without_improv':None}
model=geneticalgorithm(function=f,\
dimension=layer,\
variable_type='int',\
variable_boundaries=varbound,\
function_timeout=10,\
algorithm_parameters=algorithm_param)
start = timeit.default_timer()
model.run()
stop = timeit.default_timer()
print('Time: ', stop - start)
###Output
The best solution found:
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
Objective function:
-147.90901204800002
|
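###Markdown
As a quick sanity check (a sketch added for illustration, not part of the original run), the all-spins-up configuration reported above can be re-evaluated directly with the `f` and `layer` defined earlier; it should print approximately -147.909.
###Code
# Re-evaluate the reported best configuration with the energy model above
spin_up = np.ones(layer)
print(f(spin_up))
###Output
_____no_output_____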
02-fmri_data_manipulation_in_python/02-data_frames_manipulation_template.ipynb | ###Markdown
Data frames manipulation with pandas [pandas](https://pandas.pydata.org/) - fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language.----------------
###Code
## Load libraries
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
1. Load & plot confounds variables **Confounds** (or *nuisance regressors*) are variables representing fluctuations with a potential non-neuronal origin. Confounding variables calculated by [fMRIPrep](https://fmriprep.readthedocs.io/en/stable/outputs.htmlconfounds) are stored separately for each subject, session and run in TSV files - one column for each confound variable.
###Code
# Load data
confounds_path = "data/sub-01_ses-1_task-rest_bold_confounds.tsv"
# Print first 5 rows of data
# Print column names
# Plot mean timeseries from cerebrospinal fluid (CSF) and white matter
# Plot cerebrospinal fluid (CSF) and white matter timeseries on a scatterplot
# Plot cerebrospinal fluid (CSF) and white matter timeseries on a scatterplot (use seaborn joint plot)
import seaborn as sns
###Output
_____no_output_____
###Markdown
2. Load and plot COVID-19 dataDownload data:[COVID-19](http://shinyapps.org/apps/corona/) (check out their GH)
###Code
# Load some COVID-19 data
# covid_path =
# Print first 5 rows of data
# Group by country & sum cases
# Filter dataframe by cases in Poland
# Plot cases in Poland
# Plot cases in other country
# Plot cases of multiple countries on a one plot
###Output
_____no_output_____
###Markdown
Data frames manipulation with pandas __Packages__:- [Matplotlib](https://matplotlib.org/) - a comprehensive library for creating static, animated, and interactive visualizations in Python- [Pandas](https://pandas.pydata.org/) - fast, powerful, flexible and easy to use open source data analysis and manipulation tool, built on top of the Python programming language- [Seaborn](https://seaborn.pydata.org/) - data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.----------------
###Code
## Load libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
1. Load & plot confounds variables **Confounds** (or *nuisance regressors*) are variables representing fluctuations with a potential non-neuronal origin. Confounding variables calculated by [fMRIPrep](https://fmriprep.readthedocs.io/en/stable/outputs.htmlconfounds) are stored separately for each subject, session and run in TSV files - one column for each confound variable.
###Code
# Load data
confounds_path = "data/sub-01_ses-1_task-rest_bold_confounds.tsv"
# Print first 5 rows of data
# Print column names
# Plot mean timeseries from cerebrospinal fluid (CSF) and white matter
# Plot cerebrospinal fluid (CSF) and white matter timeseries on a scatterplot
# Plot cerebrospinal fluid (CSF) and white matter timeseries on a scatterplot (use seaborn joint plot)
###Output
_____no_output_____
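###Markdown
A possible way to fill in the template above, shown only as a sketch: the column names `csf` and `white_matter` are assumptions about the fMRIPrep output (older releases used `CSF` and `WhiteMatter`), so check `confounds.columns` before relying on them.
###Code
# Load the tab-separated confounds file and inspect it
confounds = pd.read_csv(confounds_path, sep='\t')
print(confounds.head())
print(confounds.columns.tolist())
# Mean CSF and white-matter timeseries (assumed column names)
confounds[['csf', 'white_matter']].plot()
plt.xlabel('Volume')
plt.ylabel('Mean signal')
# Scatterplot of the two timeseries (seaborn joint plot)
sns.jointplot(x='csf', y='white_matter', data=confounds);
###Output
_____no_output_____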
###Markdown
2. Load and plot COVID-19 dataDownload data from [European Centre for Disease Prevention and Control](https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-covid-19-cases-worldwide).
###Code
# Load some COVID-19 data
covid_path = "data/covid-ecdpc.csv"
# Print first 5 rows of data
# Group by country, month & day and sum cases
# covid_grouped =
# Filter dataframe by cases in Poland
# poland =
# Plot cases in Poland
# Plot cases in other country
# Plot cases of multiple countries on a one plot
###Output
_____no_output_____ |
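###Markdown
A sketch of the grouping and filtering steps above. The column names (`countriesAndTerritories`, `month`, `day`, `cases`) are assumptions based on the ECDC export and should be checked against `covid.columns` for the actual download.
###Code
covid = pd.read_csv(covid_path)
print(covid.head())
# Group by country, month & day and sum cases (assumed column names)
covid_grouped = (covid
                 .groupby(['countriesAndTerritories', 'month', 'day'])['cases']
                 .sum()
                 .reset_index())
# Filter the grouped dataframe to one country and plot it
poland = covid_grouped[covid_grouped['countriesAndTerritories'] == 'Poland']
poland['cases'].plot(title='COVID-19 cases: Poland');
###Output
_____no_output_____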
module1-log-linear-regression/log_linear_regression_feature_engineering.ipynb | ###Markdown
_Lambda School Data Science โ Regression 2_ This sprint, your project is Caterpillar Tube Pricing: Predict the prices suppliers will quote for industrial tube assemblies. Log-Linear Regression, Feature Engineering Objectives- log-transform regression target with right-skewed distribution- use regression metric: RMSLE- do feature engineering with relational data Process Francois Chollet, [Deep Learning with Python](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/README.md), Chapter 4: Fundamentals of machine learning, "A universal workflow of machine learning" > **1. Define the problem at hand and the data on which youโll train.** Collect this data, or annotate it with labels if need be.> **2. Choose how youโll measure success on your problem.** Which metrics will you monitor on your validation data?> **3. Determine your evaluation protocol:** hold-out validation? K-fold validation? Which portion of the data should you use for validation?> **4. Develop a first model that does better than a basic baseline:** a model with statistical power.> **5. Develop a model that overfits.** The universal tension in machine learning is between optimization and generalization; the ideal model is one that stands right at the border between underfitting and overfitting; between undercapacity and overcapacity. To figure out where this border lies, first you must cross it.> **6. Regularize your model and tune its hyperparameters, based on performance on the validation data.** Repeatedly modify your model, train it, evaluate on your validation data (not the test data, at this point), modify it again, and repeat, until the model is as good as it can get. > **Iterate on feature engineering: add new features, or remove features that donโt seem to be informative.** Once youโve developed a satisfactory model configuration, you can train your final production model on all the available data (training and validation) and evaluate it one last time on the test set. Define the problem ๐ [Description](https://www.kaggle.com/c/caterpillar-tube-pricing/overview/description)> Like snowflakes, it's difficult to find two tubes in Caterpillar's diverse catalogue of machinery that are exactly alike. Tubes can vary across a number of dimensions, including base materials, number of bends, bend radius, bolt patterns, and end types.> Currently, Caterpillar relies on a variety of suppliers to manufacture these tube assemblies, each having their own unique pricing model. This competition provides detailed tube, component, and annual volume datasets, and challenges you to predict the price a supplier will quote for a given tube assembly. Define the data on which you'll train [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)> The dataset is comprised of a large number of relational tables that describe the physical properties of tube assemblies. You are challenged to combine the characteristics of each tube assembly with supplier pricing dynamics in order to forecast a quote price for each tube. The quote price is labeled as cost in the data. Get data Option 1. Kaggle web UI Sign in to Kaggle and go to the [Caterpillar Tube Pricing](https://www.kaggle.com/c/caterpillar-tube-pricing) competition. Go to the Data page. After you have accepted the rules of the competition, use the download buttons to download the data. Option 2. Kaggle API1. 
[Follow these instructions](https://github.com/Kaggle/kaggle-apiapi-credentials) to create a Kaggle โAPI Tokenโ and download your `kaggle.json` file.2. Put `kaggle.json` in the correct location. - If you're using Anaconda, put the file in the directory specified in the [instructions](https://github.com/Kaggle/kaggle-apiapi-credentials). - If you're using Google Colab, upload the file to your Google Drive, and run this cell: ``` from google.colab import drive drive.mount('/content/drive') %env KAGGLE_CONFIG_DIR=/content/drive/My Drive/ ```3. Install the Kaggle API package.```pip install kaggle```4. After you have accepted the rules of the competiton, use the Kaggle API package to get the data.```kaggle competitions download -c caterpillar-tube-pricing``` Option 3. Google DriveDownload [zip file](https://drive.google.com/uc?export=download&id=1oGky3xR6133pub7S4zIEFbF4x1I87jvC) from Google Drive.
###Code
# from google.colab import files
# files.upload()
# !unzip caterpillar-tube-pricing.zip
# !unzip data.zip
###Output
_____no_output_____
###Markdown
Get filenames & shapes[Python Standard Library: glob](https://docs.python.org/3/library/glob.html)> The `glob` module finds all the pathnames matching a specified pattern
###Code
from glob import glob
import pandas as pd
for path in glob('competition_data/*.csv'):
df = pd.read_csv(path)
print(path, df.shape)
###Output
_____no_output_____
###Markdown
Choose how you'll measure success on your problem> Which metrics will you monitor on your validation data? [Evaluation](https://www.kaggle.com/c/caterpillar-tube-pricing/overview/evaluation)> Submissions are evaluated one the Root Mean Squared Logarithmic Error (RMSLE). The RMSLE is calculated as>> $\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left(\log \left(p_{i}+1\right)-\log \left(a_{i}+1\right)\right)^{2}}$>> Where:>> - $n$ is the number of price quotes in the test set> - $p_i$ is your predicted price> - $a_i$ is the actual price> - $log(x)$ is the natural logarithm [Scikit-Learn User Guide](https://scikit-learn.org/stable/modules/model_evaluation.htmlmean-squared-log-error)> The `mean_squared_log_error` function is best to use when targets have exponential growth, such as population counts, average sales of a commodity over a span of years etc. Note that this metric penalizes an under-predicted estimate greater than an over-predicted estimate. Determine your evaluation protocol> Which portion of the data should you use for validation? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> You will want to create your own training and validation sets (by splitting the Kaggle โtrainingโ data). You will just use your smaller training set (a subset of Kaggleโs training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggleโs training data) before you submit to Kaggle.> When is a random subset not good enough?> - Time series> - New people, new boats, newโฆ Does the test set have different dates? Does the test set have different tube assemblies? Make the validation set like the test set Begin with baselines for regression Develop a first model that does better than a basic baseline Fit Random Forest with 1 feature: `quantity` Log-transform regression target with right-skewed distribution Plot right-skewed distribution Terence Parr & Jeremy Howard, [The Mechanics of Machine Learning, Chapter 5.5](https://mlbook.explained.ai/prep.htmllogtarget)> Transforming the target variable (using the mathematical log function) into a tighter, more uniform space makes life easier for any model.> The only problem is that, while easy to execute, understanding why taking the log of the target variable works and how it affects the training/testing process is intellectually challenging. You can skip this section for now, if you like, but just remember that this technique exists and check back here if needed in the future.> Optimally, the distribution of prices would be a narrow โbell curveโ distribution without a tail. This would make predictions based upon average prices more accurate. We need a mathematical operation that transforms the widely-distributed target prices into a new space. The โprice in dollars spaceโ has a long right tail because of outliers and we want to squeeze that space into a new space that is normally distributed (โbell curvedโ). More specifically, we need to shrink large values a lot and smaller values a little. That magic operation is called the logarithm or log for short. > To make actual predictions, we have to take the exp of model predictions to get prices in dollars instead of log dollars. Wikipedia, [Logarithm](https://en.wikipedia.org/wiki/Logarithm)> Addition, multiplication, and exponentiation are three fundamental arithmetic operations. Addition can be undone by subtraction. Multiplication can be undone by division. 
The idea and purpose of **logarithms** is also to **undo** a fundamental arithmetic operation, namely raising a number to a certain power, an operation also known as **exponentiation.** > For example, raising 2 to the third power yields 8.> The logarithm (with respect to base 2) of 8 is 3, reflecting the fact that 2 was raised to the third power to get 8. Use Numpy for exponents and logarithms functions- https://docs.scipy.org/doc/numpy/reference/routines.math.htmlexponents-and-logarithms Refit model with log-transformed target Interlude: Moore's Law dataset Background- https://en.wikipedia.org/wiki/Moore%27s_law- https://en.wikipedia.org/wiki/Transistor_count Scrape HTML tables with Pandas!- https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_html.html- https://medium.com/@ageitgey/quick-tip-the-easiest-way-to-grab-data-out-of-a-web-page-in-python-7153cecfca58 More web scraping options- https://automatetheboringstuff.com/chapter11/
###Code
# Scrape data
tables = pd.read_html('https://en.wikipedia.org/wiki/Transistor_count', header=0)
moore = tables[0]
moore = moore[['Date of introduction', 'Transistor count']].dropna()
# Clean data
for column in moore:
moore[column] = (moore[column]
.str.split('[').str[0] # Remove citations
.str.replace(r'\D','') # Remove non-digit characters
.astype(int))
moore = moore.sort_values(by='Date of introduction')
# Plot distribution of transistor counts
sns.distplot(moore['Transistor count']);
# Plot relationship between date & transistors
moore.plot(x='Date of introduction', y='Transistor count', kind='scatter', alpha=0.5);
# Log-transform the target
moore['log(Transistor count)'] = np.log1p(moore['Transistor count'])
# Plot distribution of log-transformed target
sns.distplot(moore['log(Transistor count)']);
# Plot relationship between date & log-transformed target
moore.plot(x='Date of introduction', y='log(Transistor count)', kind='scatter', alpha=0.5);
# Fit Linear Regression with log-transformed target
from sklearn.linear_model import LinearRegression
model = LinearRegression()
X = moore[['Date of introduction']]
y_log = moore['log(Transistor count)']
model.fit(X, y_log)
y_pred_log = model.predict(X)
# Plot line of best fit, in units of log-transistors
ax = moore.plot(x='Date of introduction', y='log(Transistor count)', kind='scatter', alpha=0.5)
ax.plot(X, y_pred_log);
# Convert log-transistors to transistors
y_pred = np.expm1(y_pred_log)
# Plot line of best fit, in units of transistors
ax = moore.plot(x='Date of introduction', y='Transistor count', kind='scatter', alpha=0.5)
ax.plot(X, y_pred);
###Output
_____no_output_____
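###Markdown
For reference, a minimal sketch of the RMSLE metric quoted above, using scikit-learn's `mean_squared_log_error`; the price arrays here are made-up numbers purely for illustration.
###Code
import numpy as np
from sklearn.metrics import mean_squared_log_error

y_true = np.array([10.0, 20.0, 30.0])   # hypothetical actual prices
y_pred = np.array([12.0, 18.0, 33.0])   # hypothetical predicted prices
rmsle = np.sqrt(mean_squared_log_error(y_true, y_pred))
print(f'RMSLE: {rmsle:.4f}')
###Output
_____no_output_____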
###Markdown
Back to Caterpillar ๐ Select more features [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)> **train_set.csv and test_set.csv** > This file contains information on price quotes from our suppliers. Prices can be quoted in 2 ways: bracket and non-bracket pricing. Bracket pricing has multiple levels of purchase based on quantity (in other words, the cost is given assuming a purchase of quantity tubes). Non-bracket pricing has a minimum order amount (min_order) for which the price would apply. Each quote is issued with an annual_usage, an estimate of how many tube assemblies will be purchased in a given year.
###Code
# !pip install category_encoders
###Output
_____no_output_____
###Markdown
Do feature engineering with relational data [Data Description](https://www.kaggle.com/c/caterpillar-tube-pricing/data)> The dataset is comprised of a large number of relational tables that describe the physical properties of tube assemblies. You are challenged to combine the characteristics of each tube assembly with supplier pricing dynamics in order to forecast a quote price for each tube.> **tube.csv** > This file contains information on tube assemblies, which are the primary focus of the competition. Tube Assemblies are made of multiple parts. The main piece is the tube which has a specific diameter, wall thickness, length, number of bends and bend radius. Either end of the tube (End A or End X) typically has some form of end connection allowing the tube assembly to attach to other features. Special tooling is typically required for short end straight lengths (end_a_1x, end_a_2x refer to if the end length is less than 1 times or 2 times the tube diameter, respectively). Other components can be permanently attached to a tube such as bosses, brackets or other custom features.
###Code
for path in glob('competition_data/*.csv'):
df = pd.read_csv(path)
shared_columns = set(df.columns) & set(train.columns)
if shared_columns:
print(path, df.shape)
print(df.columns.tolist(), '\n')
###Output
_____no_output_____
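###Markdown
One way to start combining these relational tables, sketched under the assumption that `train_set.csv` and `tube.csv` share a `tube_assembly_id` key; check both files' columns before merging.
###Code
train = pd.read_csv('competition_data/train_set.csv')
tube = pd.read_csv('competition_data/tube.csv')
# Left-join the tube geometry onto the price quotes (assumed shared key)
merged = train.merge(tube, on='tube_assembly_id', how='left')
print(merged.shape)
print(merged.columns.tolist())
###Output
_____no_output_____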
docs/tutorial/02_Basic_objects.ipynb | ###Markdown
Basic objects A `striplog` depends on a hierarchy of objects. This notebook shows the objects and their basic functionality.- [Lexicon](Lexicon): A dictionary containing the words and word categories to use for rock descriptions.- [Component](Component): A set of attributes. - [Interval](Interval): One element from a Striplog - consists of a top, base, a description, one or more Components, and a source.Striplogs (a set of `Interval`s) are described in [a separate notebook](01_Basics.ipynb).Decors and Legends are also described in [another notebook](03_Display_objects.ipynb).
###Code
import striplog
striplog.__version__
# If you get a lot of warnings here, just run it again.
###Output
_____no_output_____
###Markdown
Lexicon
###Code
from striplog import Lexicon
print(Lexicon.__doc__)
help(Lexicon)
lexicon = Lexicon.default()
lexicon
lexicon.synonyms
###Output
_____no_output_____
###Markdown
Most of the lexicon works 'behind the scenes' when processing descriptions into `Rock` components.
###Code
lexicon.find_synonym('Halite')
s = "grysh gn ss w/ sp gy sh"
lexicon.expand_abbreviations(s)
###Output
_____no_output_____
###Markdown
Component A set of attributes. All are optional.
###Code
from striplog import Component
print(Component.__doc__)
###Output
Initialize with a dictionary of properties. You can use any
properties you want e.g.:
- lithology: a simple one-word rock type
- colour, e.g. 'grey'
- grainsize or range, e.g. 'vf-f'
- modifier, e.g. 'rippled'
- quantity, e.g. '35%', or 'stringers'
- description, e.g. from cuttings
###Markdown
We define a new rock with a Python `dict` object:
###Code
r = {'colour': 'grey',
'grainsize': 'vf-f',
'lithology': 'sand'}
rock = Component(r)
rock
###Output
_____no_output_____
###Markdown
The Rock has a colour:
###Code
rock['colour']
###Output
_____no_output_____
###Markdown
And it has a summary, which is generated from its attributes.
###Code
rock.summary()
###Output
_____no_output_____
###Markdown
We can format the summary if we wish:
###Code
rock.summary(fmt="My rock: {lithology} ({colour}, {grainsize!u})")
###Output
_____no_output_____
###Markdown
The formatting supports the usual `s`, `r`, and `a`: * `s`: `str`* `r`: `repr`* `a`: `ascii`Also some string functions:* `u`: `str.upper`* `l`: `str.lower`* `c`: `str.capitalize`* `t`: `str.title`And some numerical ones, for arrays of numbers:* `+` or `∑`: `np.sum`* `m` or `µ`: `np.mean`* `v`: `np.var`* `d`: `np.std`* `x`: `np.product`
###Code
x = {'colour': ['Grey', 'Brown'],
'bogosity': [0.45, 0.51, 0.66],
'porosity': [0.2003, 0.1998, 0.2112, 0.2013, 0.1990],
'grainsize': 'VF-F',
'lithology': 'Sand',
}
X = Component(x)
# This is not working at the moment.
#fmt = 'The {colour[0]!u} {lithology!u} has a total of {bogosity!∑:.2f} bogons'
#fmt += 'and a mean porosity of {porosity!µ:2.0%}.'
fmt = 'The {lithology!u} is {colour[0]!u}.'
X.summary(fmt)
X.json()
###Output
_____no_output_____
###Markdown
We can compare rocks with the usual `==` operator:
###Code
rock2 = Component({'grainsize': 'VF-F',
'colour': 'Grey',
'lithology': 'Sand'})
rock == rock2
rock
###Output
_____no_output_____
###Markdown
In order to create a Component object from text, we need a lexicon to compare the text against. The lexicon describes the language we want to extract, and what it means.
###Code
rock3 = Component.from_text('Grey fine sandstone.', lexicon)
rock3
###Output
_____no_output_____
###Markdown
Components support double-star-unpacking:
###Code
"My rock: {lithology} ({colour}, {grainsize})".format(**rock3)
###Output
_____no_output_____
###Markdown
PositionPositions define points in the earth, like a top, but with uncertainty. You can define:* `upper` - the highest possible location* `middle` - the most likely location* `lower` - the lowest possible location* `units` - the units of measurement* `x` and `y` - the _x_ and _y_ location (these don't have uncertainty, sorry)* `meta` - a Python dictionary containing anything you wantPositions don't have a 'way up'.
###Code
from striplog import Position
print(Position.__doc__)
params = {'upper': 95,
'middle': 100,
'lower': 110,
'meta': {'kind': 'erosive', 'source': 'DOE'}
}
p = Position(**params)
p
###Output
_____no_output_____
###Markdown
Even if you don't give a `middle`, you can always get `z`: the central, most likely position:
###Code
params = {'upper': 75, 'lower': 85}
p = Position(**params)
p
p.z
###Output
_____no_output_____
###Markdown
IntervalIntervals are where it gets interesting. An interval can have:* a top* a base* a description (in natural language)* a list of `Component`sIntervals don't have a 'way up', it's implied by the order of `top` and `base`.
###Code
from striplog import Interval
print(Interval.__doc__)
###Output
Used to represent a lithologic or stratigraphic interval, or single point,
such as a sample location.
Initialize with a top (and optional base) and a description and/or
an ordered list of components.
Args:
top (float): Required top depth. Required.
base (float): Base depth. Optional.
description (str): Textual description.
lexicon (dict): A lexicon. See documentation. Optional unless you only
provide descriptions, because it's needed to extract components.
max_component (int): The number of components to extract. Default 1.
abbreviations (bool): Whether to parse for abbreviations.
TODO:
Seems like I should be able to instantiate like this:
Interval({'top': 0, 'components':[Component({'age': 'Neogene'})
I can get around it for now like this:
Interval(**{'top': 0, 'components':[Component({'age': 'Neogene'})
Question: should Interval itself cope with only being handed 'top' and
either fill in down to the next or optionally create a point?
###Markdown
I might make an `Interval` explicitly from a Component...
###Code
Interval(10, 20, components=[rock])
###Output
_____no_output_____
###Markdown
... or I might pass a description and a `lexicon` and Striplog will parse the description and attempt to extract structured `Component` objects from it.
###Code
Interval(20, 40, "Grey sandstone with shale flakes.", lexicon=lexicon).__repr__()
###Output
_____no_output_____
###Markdown
Notice I only got one `Component`, even though the description contains a subordinate lithology. This is the default behaviour, we have to ask for more components:
###Code
interval = Interval(20, 40, "Grey sandstone with black shale flakes.", lexicon=lexicon, max_component=2)
print(interval)
###Output
{'components': [Component({'colour': 'grey', 'lithology': 'sandstone'}), Component({'amount': 'flakes', 'colour': 'black', 'lithology': 'shale'})], 'top': Position({'middle': 20.0, 'units': 'm'}), 'data': {}, 'description': 'Grey sandstone with black shale flakes.', 'base': Position({'middle': 40.0, 'units': 'm'})}
###Markdown
`Interval`s have a `primary` attribute, which holds the first component, no matter how many components there are.
###Code
interval.primary
###Output
_____no_output_____
###Markdown
Ask for the summary to see the thickness and a `Rock` summary of the primary component. Note that the format code only applies to the `Rock` part of the summary.
###Code
interval.summary(fmt="{colour} {lithology}")
###Output
_____no_output_____
###Markdown
We can change an interval's properties:
###Code
interval.top = 18
interval
interval.top
###Output
_____no_output_____
###Markdown
Comparing and combining intervals
###Code
# Depth ordered
i1 = Interval(top=61, base=62.5, components=[Component({'lithology': 'limestone'})])
i2 = Interval(top=62, base=63, components=[Component({'lithology': 'sandstone'})])
i3 = Interval(top=62.5, base=63.5, components=[Component({'lithology': 'siltstone'})])
i4 = Interval(top=63, base=64, components=[Component({'lithology': 'shale'})])
i5 = Interval(top=63.1, base=63.4, components=[Component({'lithology': 'dolomite'})])
# Elevation ordered
i8 = Interval(top=200, base=100, components=[Component({'lithology': 'sandstone'})])
i7 = Interval(top=150, base=50, components=[Component({'lithology': 'limestone'})])
i6 = Interval(top=100, base=0, components=[Component({'lithology': 'siltstone'})])
i2.order
###Output
_____no_output_____
###Markdown
**Technical aside:** The `Interval` class is a `functools.total_ordering`, so providing `__eq__` and one other comparison (such as `__lt__`) in the class definition means that instances of the class have implicit order. So you can use `sorted` on a Striplog, for example.It wasn't clear to me whether this should compare tops (say), so that '>' might mean 'above', or if it should be keyed on thickness. I chose the former, and implemented other comparisons instead.
###Code
print(i3 == i2) # False, they don't have the same top
print(i1 > i4) # True, i1 is above i4
print(min(i1, i2, i5).summary()) # 0.3 m of dolomite
i2 > i4 > i5 # True
###Output
_____no_output_____
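###Markdown
For readers unfamiliar with `functools.total_ordering`, here is a small self-contained illustration (not striplog code): defining `__eq__` plus a single ordering method is enough for Python to derive the remaining comparisons.
###Code
from functools import total_ordering

@total_ordering
class Thickness:
    def __init__(self, value):
        self.value = value
    def __eq__(self, other):
        return self.value == other.value
    def __lt__(self, other):
        return self.value < other.value

# >, >=, and <= are derived automatically from __eq__ and __lt__
print(Thickness(2) > Thickness(1), Thickness(1) <= Thickness(1))
###Output
_____no_output_____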
###Markdown
We can combine intervals with the `+` operator. (However, you cannot subtract intervals.)
###Code
i2 + i3
###Output
_____no_output_____
###Markdown
Adding a rock adds a (minor) component and adds to the description.
###Code
interval + rock3
i6.relationship(i7), i5.relationship(i4)
print(i1.partially_overlaps(i2)) # True
print(i2.partially_overlaps(i3)) # True
print(i2.partially_overlaps(i4)) # False
print()
print(i6.partially_overlaps(i7)) # True
print(i7.partially_overlaps(i6)) # True
print(i6.partially_overlaps(i8)) # False
print()
print(i5.is_contained_by(i3)) # True
print(i5.is_contained_by(i4)) # True
print(i5.is_contained_by(i2)) # False
x = i4.merge(i5)
x[-1].base = 65
x
i1.intersect(i2, blend=False)
i1.intersect(i2)
i1.union(i3)
i3.difference(i5)
###Output
_____no_output_____ |
D'wave tutorials/6.D-wave Quantum Solvers.ipynb | ###Markdown
Defining a sample QUBO \begin{equation}H_{1}^{QUBO}=-4.4x_{1}^2+0.6x_{2}^2-2x_{3}^2+2.8x_{1}x_{2}-0.8x_{2}x_{3}+2.4\end{equation}
###Code
linear = {0: -4.4, 1: 0.6, 2: -2}
quadratic = {(0,1): 2.8, (1,2):-0.8}
offset = 2.4
bqm_qubo = dimod.BinaryQuadraticModel(linear,quadratic,offset,dimod.Vartype.BINARY)
print(bqm_qubo)
print('\n',bqm_qubo.to_numpy_matrix().astype(float))
from dwave.system import EmbeddingComposite, DWaveSampler
sampler = EmbeddingComposite(DWaveSampler())
sampleset = sampler.sample(bqm_qubo, num_reads=100)
print(sampleset)
sampler.properties
###Output
_____no_output_____ |
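###Markdown
When QPU access is not available, the same BQM can be checked locally. A sketch using dimod's brute-force `ExactSolver`, which enumerates all assignments and is practical only for small problems like this three-variable QUBO:
###Code
# Enumerate all 2**3 assignments locally and show the lowest-energy samples
exact_sampleset = dimod.ExactSolver().sample(bqm_qubo)
print(exact_sampleset.first)      # lowest-energy sample and its energy
print(exact_sampleset.lowest())   # all samples tied at the lowest energy
###Output
_____no_output_____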
code/algorithms/course_udemy_1/Stacks, Queues and Deques/Interview/Questions - PRACTICE/.ipynb_checkpoints/Implement a Queue -Using Two Stacks -checkpoint.ipynb | ###Markdown
Implement a Queue - Using Two StacksGiven the Stack class below, implement a Queue class using **two** stacks! Note, this is a "classic" interview problem. Use a Python list data structure as your Stack.
###Code
# Uses lists instead of your own Stack class.
stack1 = []
stack2 = []
###Output
_____no_output_____
###Markdown
SolutionFill out your solution below:
###Code
class Queue2Stacks(object):

    def __init__(self):
        # Two stacks: one receives new items, the other serves them in FIFO order
        self.in_stack = []
        self.out_stack = []

    def enqueue(self, element):
        # Push onto the "in" stack; O(1)
        self.in_stack.append(element)

    def dequeue(self):
        # Refill the "out" stack only when it is empty, reversing the order once.
        # Each element is moved at most once, so dequeue is amortized O(1).
        if not self.out_stack:
            while self.in_stack:
                self.out_stack.append(self.in_stack.pop())
        return self.out_stack.pop()
###Output
_____no_output_____
###Markdown
Test Your SolutionYou should be able to tell with your current knowledge of Stacks and Queues if this is working as it should. For example, the following should print as such:
###Code
"""
RUN THIS CELL TO CHECK THAT YOUR SOLUTION OUTPUT MAKES SENSE AND BEHAVES AS A QUEUE
"""
q = Queue2Stacks()
for i in range(5):
q.enqueue(i)
for i in range(5):
print (q.dequeue())
###Output
0
1
2
3
4
|
week3/python_for_data_analysis4.ipynb | ###Markdown
Introduction to Data Mining - Python Basics - The Matplotlib library. The **Matplotlib** library contains modules for visualizing data as graphs. Below, basic graph plotting with the Matplotlib modules is explained. Line graphs: to use the Matplotlib library, first import the `matplotlib` module. Here we import the `matplotlib.pyplot` module, which is used to draw basic graphs. By convention, the module is imported under the alias `plt` and used with that name in the code. Since lists and arrays are often used for the data to be visualized, the `numpy` module is imported as well. Note that `%matplotlib inline` is required to display graphs inside the Jupyter notebook. In `matplotlib`, a graph is normally rendered by calling the `show()` function, but with `inline` display the `show()` call can be omitted. A semicolon is placed at the end of the last line of each cell to suppress the printed representation of the object evaluated on that line.
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Below, the **`plot`**`()` function of the `pyplot` module is used to draw a graph that takes the numeric elements of a list as the y-axis values. The x-axis values corresponding to the y values are the indices of the list elements.
###Code
# data to plot
d =[0, 1, 4, 9, 16]
# draw with the plot() function
plt.plot(d);
###Output
_____no_output_____
###Markdown
Both the x and the y values can be passed as arguments to the `plot()` function.
###Code
# data to plot
x =[0, 1, 2, 3, 4]
y =[0, 1, 2, 3, 4]
# draw with the plot() function
plt.plot(x,y);
###Output
_____no_output_____
###Markdown
Multiple graphs can be drawn together in one figure, as shown below. When several graphs are drawn, a different colour is assigned to each line automatically. In the `plot()` function, the line style, line colour, and the marker used for the data points can be specified and changed with the `linestyle`, `color`, and `marker` arguments. See the following for the values each argument accepts.- [linestyle](https://matplotlib.org/api/_as_gen/matplotlib.lines.Line2D.htmlmatplotlib.lines.Line2D.set_linestyle)- [color](https://matplotlib.org/api/colors_api.html?highlight=colormodule-matplotlib.colors)- [marker](https://matplotlib.org/api/markers_api.htmlmodule-matplotlib.markers)Passing a legend string for each line to the `label` argument of `plot()` and then calling the **`legend`**`()` function displays a legend inside the graph. The `loc` argument of `legend()` specifies where the legend is placed. See the following for the values the argument accepts.- [legend() function](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.legend.htmlmatplotlib.pyplot.legend)
###Code
# data to plot
data =[0, 1, 4, 9, 16]
x =[0, 1, 2, 3, 4]
y =[0, 1, 2, 3, 4]
# draw with plot(); specify the line style, colour, data point marker, and legend label
plt.plot(x,y, linestyle='--', color='blue', marker='o', label="linear")
plt.plot(data, linestyle=':', color='green', marker='*', label="quad")
plt.legend();
###Output
_____no_output_____
###Markdown
With the `pyplot` module, a title and axis labels can be added to the graph, as shown below. The title, the x-axis label, and the y-axis label are set by passing strings to the **`title`**`()`, **`xlabel`**`()`, and **`ylabel`**`()` functions, respectively. The **`grid`**`()` function additionally displays a grid; pass `True` to `grid()` to show it.
###Code
# data to plot
data =[0, 1, 4, 9, 16]
x =[0, 1, 2, 3, 4]
y =[0, 1, 2, 3, 4]
# draw with plot(); specify the line style, colour, data point marker, and legend label
plt.plot(x,y, linestyle='--', color='blue', marker='o', label="linear")
plt.plot(data, linestyle=':', color='green', marker='*', label="quad")
plt.legend()
plt.title("My First Graph") # graph title
plt.xlabel("x") # x-axis label
plt.ylabel("y") # y-axis label
plt.grid(True); # show the grid
###Output
_____no_output_____
###Markdown
By increasing the number of plotted points, a graph of an arbitrary curve can be drawn. Below, the `arange()` function of the `numpy` module is used to prepare an array of x-axis values from $- \pi$ to $\pi$ in steps of 0.1. For these x values, the `cos()` and `sin()` functions of the `numpy` module provide the corresponding y-axis values, and the `cos` and `sin` curves are drawn.
###Code
# array of x-axis values for the graph
x = np.arange(-np.pi, np.pi, 0.1)
# pass the array above to the cos and sin functions and plot the results as y values
plt.plot(x,np.cos(x))
plt.plot(x,np.sin(x))
plt.title("cos and sin Curves") # graph title
plt.xlabel("x") # x-axis label
plt.ylabel("y") # y-axis label
plt.grid(True); # show the grid
###Output
_____no_output_____
###Markdown
If the number of plotted points is small, you can see that the curve is actually drawn by connecting the points with straight line segments.
###Code
x = np.arange(-np.pi, np.pi, 0.5)
plt.plot(x,np.cos(x), marker='o')
plt.plot(x,np.sin(x), marker='o')
plt.title("cos ans sin Curves")
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True);
###Output
_____no_output_____
###Markdown
Scatter plots: a scatter plot can be drawn with the **`scatter`**`()` function of the `pyplot` module. Below, the pairs of values from two arrays x and y, each consisting of 20 randomly generated elements, are plotted as points. As with line graphs, the colour and shape of the plotted markers can be specified and changed with the `color` and `marker` arguments. In addition, the `s` and `alpha` arguments specify the marker size and transparency, respectively.
###Code
# array of x-axis values for the graph
x = np.random.rand(20)
# array of y-axis values for the graph
y = np.random.rand(20)
# draw a scatter plot with the scatter() function
plt.scatter(x, y, s=100, alpha=0.5);
###Output
_____no_output_____
###Markdown
As shown below, a similar scatter plot can also be produced with the `plot()` function. In this case, the marker shape for the plotted points is passed as an argument to `plot()`.
###Code
x = np.random.rand(20)
y = np.random.rand(20)
plt.plot(x, y, 'o', color='blue');
###Output
_____no_output_____
###Markdown
Bar chart. A bar chart can be drawn with the **`bar`**`()` function of the `pyplot` module. Below, the values of a randomly generated 10-element array `y` are displayed as a vertical bar chart. `x` gives the positions of the bars along the x-axis; here the `arange()` function of the `numpy` module is used to prepare an array of bar positions from 1 to 10 in steps of 1.
###Code
# Array of positions for the bars along the x-axis
x = np.arange(1, 11, 1)
# Array of y-axis values for the graph
y = np.random.rand(10)
# Draw a bar chart with the bar function
plt.bar(x,y);
###Output
_____no_output_____
###Markdown
Histogram. A histogram can be drawn with the **`hist`**`()` function of the `pyplot` module. Below, the `random.randn()` function of the `numpy` module is used to prepare an array of 1000 values drawn from a normal distribution, which is then displayed as a histogram. The `bins` argument of `hist()` specifies the number of bins.
###Code
# Array of 1000 values drawn from a normal distribution
d = np.random.randn(1000)
# Draw a histogram with the hist function
plt.hist(d, bins=20);
###Output
_____no_output_____
###Markdown
Heatmap. Using the `imshow()` function, a matrix can be visualized as a heatmap in which the color intensity varies with the value of each element, as shown below. The `colorbar()` function displays the correspondence between matrix values and color intensity.
###Code
# 10x10 matrix of random elements
a = np.random.rand(100).reshape(10,10)
# Draw a heatmap with the imshow function
im=plt.imshow(a)
plt.colorbar(im);
###Output
_____no_output_____
###Markdown
Saving a graph to an image file. Using the **`savefig`**`()` function, a graph you have created can be saved to an image file, as shown below.
###Code
x = np.arange(-np.pi, np.pi, 0.1)
plt.plot(x,np.cos(x), label='cos')
plt.plot(x,np.sin(x), label='sin')
plt.legend()
plt.title("cos ans sin Curves")
plt.xlabel("x")
plt.ylabel("y")
plt.grid(True)
# Save the graph to an image file with the savefig function
plt.savefig('cos_sin.png');
###Output
_____no_output_____ |
Problem 2.9.ipynb | ###Markdown
Solution {-}A common autocorrelation function encountered in physical problems is:\begin{equation*} R(\tau)=\sigma^2 e^{-\beta |\tau|} \cos \omega_0 \tau\end{equation*}1. Find the corresponding spectral density function\begin{equation*} S(j\omega)=\frac{\sigma^2 \beta}{(\omega+\omega_0)^2+\beta^2} + \frac{\sigma^2 \beta}{(\omega-\omega_0)^2+\beta^2}\end{equation*}2. Plot the autocorrelation and the spectral density
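A brief sketch of where this comes from, using the standard transform pair for the two-sided damped exponential:\begin{equation*} \int_{-\infty}^{\infty} \sigma^2 e^{-\beta|\tau|} e^{-j\omega\tau}\, d\tau = \frac{2\sigma^2\beta}{\omega^2+\beta^2}, \qquad \cos \omega_0 \tau = \tfrac{1}{2}\left(e^{j\omega_0\tau}+e^{-j\omega_0\tau}\right)\end{equation*}so multiplying by the cosine splits the spectrum into two copies centred at $\pm\omega_0$, each weighted by $\tfrac{1}{2}$:\begin{equation*} S(j\omega)=\frac{\sigma^2 \beta}{(\omega-\omega_0)^2+\beta^2} + \frac{\sigma^2 \beta}{(\omega+\omega_0)^2+\beta^2}\end{equation*}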
###Code
from numpy import linspace, exp, cos
import matplotlib.pyplot as plt
sigma = 1
beta = 2
omega0 = 10
tau = linspace(-2, 2, 100)
R = sigma**2*exp(-beta*abs(tau))*cos(omega0*tau)
omega = linspace(-20, 20, 100)
S = sigma**2*beta/((omega + omega0)**2 + beta**2) + sigma**2*beta/((omega - omega0)**2 + beta**2)
# Plot autocorrelation
plt.plot(tau, R)
plt.title("Autocorrelation")
plt.xlabel("Lag")
plt.ylabel("Correlation")
plt.grid()
plt.show()
# Plot spectral density
plt.plot(omega, S)
plt.title("Spectral density")
plt.xlabel("Frequency")
plt.ylabel("Power")
plt.grid()
plt.show()
###Output
_____no_output_____ |
notebooks/archived/Leg-to-leg network.ipynb | ###Markdown
Community detection
###Code
graph = nx.Graph()
graph.add_edges_from([(l[2], l[3], {'cnt': l[1]}) for l in leg_links.itertuples()])
# Also make a graph withouth the most common nodes
nodes_to_remove = ["Wet op de rechterlijke organisatie, Artikel 81",
"Wet op de rechterlijke organisatie, Artikel 80a",
"Wetboek van Strafvordering",
"Wetboek van Strafrecht",
"Burgerlijk Wetboek Boek 1",
"Burgerlijk Wetboek Boek 2",
"Burgerlijk Wetboek Boek 3",
"Burgerlijk Wetboek Boek 6",
"Burgerlijk Wetboek Boek 7",
"Algemene wet bestuursrecht",
"Opiumwet",
"Wetboek van Burgerlijke Rechtsvordering",
"Faillissementswet"]
graph2 = graph.copy()
for n in nodes_to_remove:
graph2.remove_node(n)
commmunities_weighted = community.best_partition(graph, weight='cnt')
commmunities_unweighted = community.best_partition(graph)
commmunities_weighted_small = community.best_partition(graph, weight='cnt', resolution=0.5)
commmunities_unweighted_small = community.best_partition(graph, resolution=0.5)
communities_weighted2 = community.best_partition(graph2, weight='cnt')
leg_nodes_df = pd.DataFrame()
leg_nodes_df['louvain_weighted'] = pd.Series(commmunities_weighted)
leg_nodes_df['louvain_unweighted'] = pd.Series(commmunities_unweighted)
leg_nodes_df['louvain_weighted_small'] = pd.Series(commmunities_weighted_small)
leg_nodes_df['louvain_unweighted_small'] = pd.Series(commmunities_unweighted_small)
leg_nodes_df['louvain_weighted_sub'] = pd.Series(communities_weighted2)
leg_nodes_df = leg_nodes_df.reset_index().rename(columns={"index": "name"})
leg_nodes_df[leg_nodes_df['name']=="Wetboek van Strafrecht"]
###Output
_____no_output_____
###Markdown
meta info legislation
###Code
leg_nodes_df = leg_nodes_df.set_index(['name'])
leg_nodes_df['nr_references'] = nr_references
leg_nodes_df = leg_nodes_df.reset_index()
leg_nodes_df.head()
leg_nodes_df.to_csv(os.path.join(inputpath, 'leg_to_leg_nodes_min10.tsv'), index=False, sep='\t')
leg_nodes_df.to_csv(os.path.join(inputpath, 'leg_to_leg_nodes_min10.csv'), index=False)
leg_nodes_df['book'] = leg_nodes_df['name'].str.split(',').map(lambda l: l[0])
leg_nodes_df['book'].value_counts()
###Output
_____no_output_____
###Markdown
look into clusters
###Code
case_to_leg_merged = case_leg.merge(leg_nodes_df, left_on='title', right_on='name')
case_to_leg_merged.head()
com_name = 'louvain_weighted_sub'
grouped_by_com = leg_nodes_df.groupby(com_name)
community_summary = pd.DataFrame()
community_summary['nodes'] = grouped_by_com['name'].apply(lambda l: "|".join(list(sorted(l))))
community_summary['nr_leg_nodes'] = grouped_by_com['name'].nunique()
community_summary['nr_cases'] = case_to_leg_merged.groupby(com_name)['source'].nunique()
community_summary = community_summary.sort_values('nr_cases', ascending=False)
community_summary.head(50)
community_summary.to_csv(os.path.join(inputpath, 'leg_to_leg_communities.csv'))
###Output
_____no_output_____ |
module_1/Deep Neural Network - Application.ipynb | ###Markdown
Deep Neural Network for Image Classification: ApplicationBy the time you complete this notebook, you will have finished the last programming assignment of Week 4, and also the last programming assignment of Course 1! Go you! To build your cat/not-a-cat classifier, you'll use the functions from the previous assignment to build a deep network. Hopefully, you'll see an improvement in accuracy over your previous logistic regression implementation. **After this assignment you will be able to:**- Build and train a deep L-layer neural network, and apply it to supervised learningLet's get started! Table of Contents- [1 - Packages](1)- [2 - Load and Process the Dataset](2)- [3 - Model Architecture](3) - [3.1 - 2-layer Neural Network](3-1) - [3.2 - L-layer Deep Neural Network](3-2) - [3.3 - General Methodology](3-3)- [4 - Two-layer Neural Network](4) - [Exercise 1 - two_layer_model](ex-1) - [4.1 - Train the model](4-1)- [5 - L-layer Neural Network](5) - [Exercise 2 - L_layer_model](ex-2) - [5.1 - Train the model](5-1)- [6 - Results Analysis](6)- [7 - Test with your own image (optional/ungraded exercise)](7) 1 - Packages Begin by importing all the packages you'll need during this assignment. - [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.- `dnn_app_utils` provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.- `np.random.seed(1)` is used to keep all the random function calls consistent. It helps grade your work - so please don't change it!
###Code
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v3 import *
from public_tests import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Load and Process the DatasetYou'll be using the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you built back then had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform even better!**Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of `m_train` images labelled as cat (1) or non-cat (0) - a test set of `m_test` images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).Let's get more familiar with the dataset. Load the data by running the cell below.
###Code
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
###Output
_____no_output_____
###Markdown
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to check out other images.
###Code
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
###Output
Number of training examples: 209
Number of testing examples: 50
Each image is of size: (64, 64, 3)
train_x_orig shape: (209, 64, 64, 3)
train_y shape: (1, 209)
test_x_orig shape: (50, 64, 64, 3)
test_y shape: (1, 50)
###Markdown
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below.Figure 1: Image to vector conversion.
###Code
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
###Output
train_x's shape: (12288, 209)
test_x's shape: (12288, 50)
###Markdown
**Note**:$12,288$ equals $64 \times 64 \times 3$, which is the size of one reshaped image vector. 3 - Model Architecture 3.1 - 2-layer Neural NetworkNow that you're familiar with the dataset, it's time to build a deep neural network to distinguish cat images from non-cat images!You're going to build two different models:- A 2-layer neural network- An L-layer deep neural networkThen, you'll compare the performance of these models, and try out some different values for $L$. Let's look at the two architectures:Figure 2: 2-layer neural network. The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT.Detailed Architecture of Figure 2:- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.- Then, add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.- Repeat the same process.- Multiply the resulting vector by $W^{[2]}$ and add the intercept (bias). - Finally, take the sigmoid of the result. If it's greater than 0.5, classify it as a cat. 3.2 - L-layer Deep Neural NetworkIt's pretty difficult to represent an L-layer deep neural network using the above representation. However, here is a simplified network representation:Figure 3: L-layer neural network. The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOIDDetailed Architecture of Figure 3:- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.- Next, take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.- Finally, take the sigmoid of the final linear unit. If it is greater than 0.5, classify it as a cat. 3.3 - General MethodologyAs usual, you'll follow the Deep Learning methodology to build the model:1. Initialize parameters / Define hyperparameters2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 3. Use trained parameters to predict labelsNow go ahead and implement those two models! 4 - Two-layer Neural Network Exercise 1 - two_layer_model Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions and their inputs are:```pythondef initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cachedef compute_cost(AL, Y): ... return costdef linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, dbdef update_parameters(parameters, grads, learning_rate): ... return parameters```
###Code
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
learning_rate = 0.0075
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
#(โ 1 line of code)
# parameters = ...
# YOUR CODE STARTS HERE
parameters =initialize_parameters(n_x, n_h, n_y)
# YOUR CODE ENDS HERE
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
#(โ 2 lines of code)
# A1, cache1 = ...
# A2, cache2 = ...
# YOUR CODE STARTS HERE
A1, cache1 =linear_activation_forward(X, W1, b1, "relu")
A2, cache2 =linear_activation_forward(A1, W2, b2, "sigmoid")
# YOUR CODE ENDS HERE
# Compute cost
#(โ 1 line of code)
# cost = ...
# YOUR CODE STARTS HERE
cost =compute_cost(A2, Y)
# YOUR CODE ENDS HERE
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
#(โ 2 lines of code)
# dA1, dW2, db2 = ...
# dA0, dW1, db1 = ...
# YOUR CODE STARTS HERE
dA1, dW2, db2 =linear_activation_backward(dA2, cache2, "sigmoid")
dA0, dW1, db1 =linear_activation_backward(dA1, cache1, "relu")
# YOUR CODE ENDS HERE
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
#(approx. 1 line of code)
# parameters = ...
# YOUR CODE STARTS HERE
parameters =update_parameters(parameters, grads, learning_rate)
# YOUR CODE ENDS HERE
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 iterations
if print_cost and i % 100 == 0 or i == num_iterations - 1:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if i % 100 == 0 or i == num_iterations:
costs.append(cost)
return parameters, costs
def plot_costs(costs, learning_rate=0.0075):
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2, print_cost=False)
print("Cost after first iteration: " + str(costs[0]))
two_layer_model_test(two_layer_model)
train_x.shape
###Output
_____no_output_____
###Markdown
**Expected output:**```cost after iteration 1 must be around 0.69``` 4.1 - Train the model If your code passed the previous cell, run the cell below to train your parameters. - The cost should decrease on every iteration. - It may take up to 5 minutes to run 2500 iterations.
###Code
parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
plot_costs(costs, learning_rate)
###Output
Cost after iteration 0: 0.693049735659989
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.5158304772764729
Cost after iteration 600: 0.4754901313943325
Cost after iteration 700: 0.43391631512257495
Cost after iteration 800: 0.4007977536203886
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366413
Cost after iteration 1100: 0.30527536361962654
Cost after iteration 1200: 0.2749137728213015
Cost after iteration 1300: 0.2468176821061484
Cost after iteration 1400: 0.19850735037466102
Cost after iteration 1500: 0.17448318112556638
Cost after iteration 1600: 0.1708076297809692
Cost after iteration 1700: 0.11306524562164715
Cost after iteration 1800: 0.09629426845937156
Cost after iteration 1900: 0.0834261795972687
Cost after iteration 2000: 0.07439078704319085
Cost after iteration 2100: 0.06630748132267933
Cost after iteration 2200: 0.05919329501038172
Cost after iteration 2300: 0.053361403485605606
Cost after iteration 2400: 0.04855478562877019
Cost after iteration 2499: 0.04421498215868956
###Markdown
**Expected Output**: Cost after iteration 0 0.6930497356599888 Cost after iteration 100 0.6464320953428849 ... ... Cost after iteration 2499 0.04421498215868956 **Nice!** You successfully trained the model. Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
###Code
predictions_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 0.9999999999999998
###Markdown
**Expected Output**: Accuracy 0.9999999999999998
###Code
predictions_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.72
###Markdown
**Expected Output**: Accuracy 0.72 Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.**Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and you'll hear more about it in the next course. Early stopping is a way to prevent overfitting. 5 - L-layer Neural Network Exercise 2 - L_layer_model Use the helper functions you implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions and their inputs are:```pythondef initialize_parameters_deep(layers_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, cachesdef compute_cost(AL, Y): ... return costdef L_model_backward(AL, Y, caches): ... return gradsdef update_parameters(parameters, grads, learning_rate): ... return parameters```
###Code
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization.
#(โ 1 line of code)
# parameters = ...
# YOUR CODE STARTS HERE
parameters =initialize_parameters_deep(layers_dims)
# YOUR CODE ENDS HERE
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
#(โ 1 line of code)
# AL, caches = ...
# YOUR CODE STARTS HERE
AL, caches =L_model_forward(X, parameters)
# YOUR CODE ENDS HERE
# Compute cost.
#(โ 1 line of code)
# cost = ...
# YOUR CODE STARTS HERE
cost =compute_cost(AL, Y)
# YOUR CODE ENDS HERE
# Backward propagation.
#(โ 1 line of code)
# grads = ...
# YOUR CODE STARTS HERE
grads =L_model_backward(AL, Y, caches)
# YOUR CODE ENDS HERE
# Update parameters.
#(โ 1 line of code)
# parameters = ...
# YOUR CODE STARTS HERE
parameters= update_parameters(parameters, grads, learning_rate)
# YOUR CODE ENDS HERE
# Print the cost every 100 iterations
if print_cost and i % 100 == 0 or i == num_iterations - 1:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if i % 100 == 0 or i == num_iterations:
costs.append(cost)
return parameters, costs
parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterations = 1, print_cost = False)
print("Cost after first iteration: " + str(costs[0]))
L_layer_model_test(L_layer_model)
###Output
Cost after iteration 0: 0.7717493284237686
Cost after first iteration: 0.7717493284237686
Cost after iteration 1: 0.7070709008912569
Cost after iteration 1: 0.7070709008912569
Cost after iteration 1: 0.7070709008912569
Cost after iteration 2: 0.7063462654190897
[92m All tests passed.
###Markdown
5.1 - Train the model If your code passed the previous cell, run the cell below to train your model as a 4-layer neural network. - The cost should decrease on every iteration. - It may take up to 5 minutes to run 2500 iterations.
###Code
parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
###Output
Cost after iteration 0: 0.7717493284237686
Cost after iteration 100: 0.6720534400822914
Cost after iteration 200: 0.6482632048575212
Cost after iteration 300: 0.6115068816101356
Cost after iteration 400: 0.5670473268366111
Cost after iteration 500: 0.5401376634547801
Cost after iteration 600: 0.5279299569455267
Cost after iteration 700: 0.4654773771766851
Cost after iteration 800: 0.369125852495928
Cost after iteration 900: 0.39174697434805344
Cost after iteration 1000: 0.31518698886006163
Cost after iteration 1100: 0.2726998441789385
Cost after iteration 1200: 0.23741853400268137
Cost after iteration 1300: 0.19960120532208644
Cost after iteration 1400: 0.18926300388463307
Cost after iteration 1500: 0.16118854665827753
Cost after iteration 1600: 0.14821389662363316
Cost after iteration 1700: 0.13777487812972944
Cost after iteration 1800: 0.1297401754919012
Cost after iteration 1900: 0.12122535068005211
Cost after iteration 2000: 0.11382060668633713
Cost after iteration 2100: 0.10783928526254133
Cost after iteration 2200: 0.10285466069352679
Cost after iteration 2300: 0.10089745445261786
Cost after iteration 2400: 0.09287821526472398
Cost after iteration 2499: 0.08843994344170202
###Markdown
**Expected Output**: Cost after iteration 0 0.771749 Cost after iteration 100 0.672053 ... ... Cost after iteration 2499 0.088439
###Code
pred_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 0.9856459330143539
###Markdown
**Expected Output**: Train Accuracy 0.985645933014
###Code
pred_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.8
###Markdown
**Expected Output**: Test Accuracy 0.8 Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is pretty good performance for this task. Nice job! In the next course on "Improving deep neural networks," you'll be able to obtain even higher accuracy by systematically searching for better hyperparameters: learning_rate, layers_dims, or num_iterations, for example. 6 - Results AnalysisFirst, take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
###Code
print_mislabeled_images(classes, test_x, test_y, pred_test)
###Output
_____no_output_____
###Markdown
**A few types of images the model tends to do poorly on include:** - Cat body in an unusual position- Cat appears against a background of a similar color- Unusual cat color and species- Camera Angle- Brightness of the picture- Scale variation (cat is very large or small in image) Congratulations on finishing this assignment! You just built and trained a deep L-layer neural network, and applied it in order to distinguish cats from non-cats, a very serious and important task in deep learning. ;) By now, you've also completed all the assignments for Course 1 in the Deep Learning Specialization. Amazing work! If you'd like to test out how closely you resemble a cat yourself, there's an optional ungraded exercise below, where you can test your own image. Great work and hope to see you in the next course! 7 - Test with your own image (optional/ungraded exercise) From this point, if you so choose, you can use your own image to test the output of your model. To do that follow these steps:1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.2. Add your image to this Jupyter Notebook's directory, in the "images" folder3. Change your image's name in the following code4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
## START CODE HERE ##
my_image = "c.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(Image.open(fname).resize((num_px, num_px)))
plt.imshow(image)
print(image.shape)
image = image / 255.
image = image.reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(image, my_label_y, parameters)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
(64, 64, 3)
Accuracy: 1.0
y = 1.0, your L-layer model predicts a "cat" picture.
|
gaussian_elimination.ipynb | ###Markdown
Gaussian elimination in python Reference:* The Python code is copied from [Gaussian elimination method with pivoting](https://www.kaggle.com/code/sanjeetkp46/gaussian-elimination-method-with-pivoting/notebook) with slight modifications.
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Gaussian elimination without pivoting
###Code
def Cal_LU(D,g):
A=np.array((D),dtype=float)
f=np.array((g),dtype=float)
n = f.size
for i in range(0,n-1): # Loop through the columns of the matrix
for j in range(i+1,n): # Loop through rows below diagonal for each column
if A[i,i] == 0:
print("Error: Zero on diagonal!")
print("Need algorithm with pivoting")
break
m = A[j,i]/A[i,i]
A[j,:] = A[j,:] - m*A[i,:]
f[j] = f[j] - m*f[i]
return A,f
def Back_Subs(A,f):
n = f.size
x = np.zeros(n) # Initialize the solution vector, x, to zero
x[n-1] = f[n-1]/A[n-1,n-1] # Solve for last entry first
for i in range(n-2,-1,-1): # Loop from the end to the beginning
sum_ = 0
for j in range(i+1,n): # For known x values, sum and move to rhs
sum_ = sum_ + A[i,j]*x[j]
x[i] = (f[i] - sum_)/A[i,i]
return x
###Output
_____no_output_____
###Markdown
Example
###Code
# To solve Ax=b
A = np.array([
[10**(-12),1],
[1,1]
])
b = np.array([1,2])
#
B,g = Cal_LU(A,b)
x= Back_Subs(B,g)
print('solution obtained by gaussian elimination without pivoting')
print('x= ', x)
###Output
solution obtained by gaussian elimination without pivoting
x= [0.99997788 1. ]
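As a quick cross-check (a sketch that assumes only `numpy` is available), `np.linalg.solve`, which applies partial pivoting internally, gives the reference solution; the loss of accuracy above therefore comes from dividing by the tiny pivot `1e-12`:

```python
import numpy as np

A = np.array([[1e-12, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0])

# Reference solution from LAPACK, which pivots internally
x_ref = np.linalg.solve(A, b)
print('reference x =', x_ref)  # approximately [1. 1.]

# Without pivoting, the multiplier m = A[1,0]/A[0,0] = 1e12 swamps the second
# row, and the round-off it introduces is amplified during back substitution of
# x[0], which is why the result above is only accurate to about five digits.
```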
###Markdown
Gaussian elimination with pivoting
###Code
def Cal_LU_pivot(D,g):
A=np.array((D),dtype=float)
f=np.array((g),dtype=float)
n = len(f)
for i in range(0,n-1): # Loop through the columns of the matrix
for k in range(i+1,n):
if np.abs(A[k,i])>np.abs(A[i,i]):
A[[i,k]]=A[[k,i]] # Swaps ith and kth rows to each other
f[[i,k]]=f[[k,i]]
break
for j in range(i+1,n): # Loop through rows below diagonal for each column
m = A[j,i]/A[i,i]
A[j,:] = A[j,:] - m*A[i,:]
f[j] = f[j] - m*f[i]
return A,f
###Output
_____no_output_____
###Markdown
Example
###Code
# To solve Ax=b
A = np.array([
[10**(-12),1],
[1,1]
])
b = np.array([1,2])
#
B,g = Cal_LU_pivot(A,b)
x= Back_Subs(B,g)
print('solution obtained by gaussian elimination with pivoting')
print('x= ', x)
###Output
solution obtained by gaussian elimination with pivoting
x= [1. 1.]
|
Yahoo Finance/Yahoo Finance Get_stock_data.ipynb | ###Markdown
Yahoo Finance - Get stock data 1. Install the yfinance package
###Code
#!pip install yfinance
###Output
_____no_output_____
###Markdown
2. Import the yfinance package
###Code
import yfinance as yf
###Output
_____no_output_____
###Markdown
3. Get the data for the stock TSLA
###Code
data = yf.download('TSLA','2016-01-01','2019-08-01')
###Output
[*********************100%***********************] 1 of 1 completed
###Markdown
4. Import the plotting library
###Code
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
5. Plot the adjusted close price of TSLA
###Code
data['Adj Close'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
Yahoo Finance - Get stock data 1. Install the yfinance package
###Code
#!pip install yfinance
###Output
_____no_output_____
###Markdown
2. Import the yfinance package
###Code
import yfinance as yf
###Output
_____no_output_____
###Markdown
3. Get the data for the stock TSLA
###Code
data = yf.download('TSLA','2016-01-01','2019-08-01')
###Output
_____no_output_____
###Markdown
4. Import the plotting library
###Code
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
5. Plot the adjusted close price of TSLA
###Code
data['Adj Close'].plot()
plt.show()
###Output
_____no_output_____ |
nb/maxent_ffjord.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/My\ Drive/xd_vs_flow
!pip install torchdiffeq
import os, sys, time
import pickle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch.utils.data import TensorDataset, DataLoader
from torch.optim import Adam
import lib.toy_data as toy_data
import lib.utils as utils
from lib.visualize_flow import visualize_transform
import lib.layers.odefunc as odefunc
from train_misc import standard_normal_logprob
from train_misc import set_cnf_options, count_nfe, count_parameters, count_total_time
from train_misc import add_spectral_norm, spectral_norm_power_iteration
from train_misc import create_regularization_fns, get_regularization, append_regularization_to_log
from train_misc import build_model_tabular
from sklearn.preprocessing import StandardScaler
ss = StandardScaler()
# read in 1 Gyr SSFR and 100 Myr SSFR
__ssfr_1gyr = np.load('_ssfr_1gyr.npy')
__ssfr_100myr = np.load('_ssfr_100myr.npy')
_ssfr_1gyr = np.log10(__ssfr_1gyr.copy())
_ssfr_100myr = np.log10(__ssfr_100myr.copy())
N = len(_ssfr_1gyr)
print('N=%i' % N)
npdata = np.array([_ssfr_1gyr, _ssfr_100myr]).T
ss.fit(npdata)
data = torch.from_numpy(npdata.astype(np.float32))
raw_np = np.load('quick_isochrone.npy')
raw_np = raw_np
raw = pd.DataFrame({'G': raw_np[:, 0], 'bp_rp': raw_np[:, 1]})
#Add some noise:
raw['g_std'] = np.random.rand(len(raw))*0.3 + 1e-3
raw['bp_rp_p'] = raw['bp_rp'] + raw['g_std']*np.random.randn(len(raw))
raw['G_p'] = raw['G'] + raw['g_std']*np.random.randn(len(raw))
N = len(raw)
print('N=%i' % N)
use_cols = ['G_p', 'bp_rp_p']#, 'g_std']#, 'BP', 'RP']
cond_cols = []
npdata = np.array(raw[use_cols + cond_cols])
ss.fit(npdata)
data = torch.from_numpy(npdata.astype(np.float32))
print(ss.scale_)
print(ss.mean_)
traindataset = TensorDataset(data[:-(N//5)])
testdataset = TensorDataset(data[-(N//5):])
batch = 1024
train = DataLoader(traindataset, batch_size=batch, shuffle=True, drop_last=True)
test = DataLoader(testdataset, batch_size=batch, shuffle=False)
len(traindataset)/batch
args = pickle.load(open('args.pkl', 'rb'))
###Output
_____no_output_____
###Markdown
###Code
regularization_fns, regularization_coeffs = create_regularization_fns(args)
model = build_model_tabular(args, 2, regularization_fns).cuda()
#if args.spectral_norm: add_spectral_norm(model)
set_cnf_options(args, model)
optimizer = Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)
transform_scale = torch.from_numpy(ss.scale_).float().cuda()[np.newaxis, :]
transform_mean = torch.from_numpy(ss.mean_).float().cuda()[np.newaxis, :]
def compute_loss(args, model, X, batch_size=None):
if batch_size is None: batch_size = args.batch_size
X = X.cuda()
x = (X - transform_mean)/transform_scale
zero = torch.zeros(x.shape[0], 1).to(x)
# transform to z
z, delta_logp = model(x, zero)
#z = model(x)
# compute log q(z)
logpz = standard_normal_logprob(z).sum(1, keepdim=True)
#return -torch.mean(logpz)
logpx = logpz - delta_logp
loss = -torch.mean(logpx)
return loss
time_meter = utils.RunningAverageMeter(0.93)
loss_meter = utils.RunningAverageMeter(0.93)
nfef_meter = utils.RunningAverageMeter(0.93)
nfeb_meter = utils.RunningAverageMeter(0.93)
tt_meter = utils.RunningAverageMeter(0.93)
itr = 0
model = model.cuda()
model.train()
end = time.time()
while itr < 10000:
for X in train:
optimizer.zero_grad()
#if args.spectral_norm: spectral_norm_power_iteration(model, 1)
loss = compute_loss(args, model, X[0])
loss_meter.update(loss.item())
total_time = count_total_time(model)
nfe_forward = count_nfe(model)
loss.backward()
optimizer.step()
itr += 1
if itr % 50 == 0:
nfe_total = count_nfe(model)
nfe_backward = nfe_total - nfe_forward
nfef_meter.update(nfe_forward)
nfeb_meter.update(nfe_backward)
time_meter.update(time.time() - end)
tt_meter.update(total_time)
log_message = (
'Iter {:04d} | Time {:.4f}({:.4f}) | Loss {:.6f}({:.6f}) | NFE Forward {:.0f}({:.1f})'
' | NFE Backward {:.0f}({:.1f}) | CNF Time {:.4f}({:.4f})'.format(
itr, time_meter.val, time_meter.avg, loss_meter.val, loss_meter.avg, nfef_meter.val, nfef_meter.avg,
nfeb_meter.val, nfeb_meter.avg, tt_meter.val, tt_meter.avg
)
)
print(log_message)
from torch.functional import F
def ezmodel(model, x):
zero = torch.zeros(x.shape[0], 1)
# transform to z
z, delta_logp = model(x, zero)
# compute log q(z)
logpz = standard_normal_logprob(z).sum(1, keepdim=True)
logpx = logpz - delta_logp
return logpx
def soft_lo_clamp(x, lo):
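# smooth lower clamp: approaches lo for x << lo and x for x >> lo (used below to floor the log-probability)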
return F.softplus(x-lo) + lo
min_1gyr = 0
max_1gyr = 3e-9
min_100myr = 0
max_100myr = 3e-9
num = 100
p_1gyr, p_100myr = np.meshgrid(np.linspace(min_1gyr, max_1gyr, num=num), np.linspace(min_100myr, max_100myr, num=num))
p_1gyr = p_1gyr.reshape(-1)
p_100myr = p_100myr.reshape(-1)
pdatap = np.zeros((len(p_1gyr), 2), dtype=np.float32)
pdatap[:, 0] = p_1gyr
pdatap[:, 1] = p_100myr
pdata = ss.transform(pdatap)
pdata = torch.from_numpy(pdata)#.cuda()
pdata_set = TensorDataset(pdata)
pdata_loader = DataLoader(pdata_set, batch_size=1000, shuffle=False)
model.eval()
logprob = soft_lo_clamp(torch.cat([ezmodel(model, q[:, :2]).cpu().detach() for (q,) in pdata_loader], dim=0), -100).numpy()
_exp = lambda _x: np.exp(_x/4)
prob = _exp(logprob.reshape(num, num))
fig = plt.figure(figsize=(12,6))
sub = fig.add_subplot(121)
h, _, _, _ = sub.hist2d(_ssfr_1gyr, _ssfr_100myr, bins=100, range=[(min_1gyr, max_1gyr), (min_100myr, max_100myr)], normed=True)
sub = fig.add_subplot(122)
sub.imshow(prob.T, origin='lower', extent=[min_1gyr, max_1gyr, min_100myr, max_100myr], aspect='auto')
###Output
_____no_output_____ |
courses/machine_learning/tensorflow/b_estimator.ipynb | ###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
import datalab.bigquery as bq
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,
DAYOFWEEK(pickup_datetime)*1.0 AS dayofweek,
HOUR(pickup_datetime)*1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 == {1}".format(base_query, phase)
else:
query = "{0} AND ABS(HASH(pickup_datetime)) % {1} == {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bq.Query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print tf.__version__
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print 'RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss']))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print [pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))]
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
import datalab.bigquery as bq
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,
DAYOFWEEK(pickup_datetime)*1.0 AS dayofweek,
HOUR(pickup_datetime)*1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 == {1}".format(base_query, phase)
else:
query = "{0} AND ABS(HASH(pickup_datetime)) % {1} == {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bq.Query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
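###Markdown
For the record, once a final model has been selected, measuring its error on the held-out test set is a one-liner with the helper above. Here is a minimal sketch; the `./taxi-test.csv` file name is an assumption mirroring the train/valid files and is not created in this notebook.
###Code
# Hedged sketch: evaluate the chosen model on held-out test data.
# Assumes a ./taxi-test.csv file with the same column layout as taxi-train.csv.
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
print_rmse(model, 'test', df_test)
###Output
_____no_output_____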
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
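###Markdown
To make the model-selection step described above concrete, here is a minimal sketch of training both candidates and keeping the one with the lower validation RMSE. This is only a sketch of the selection logic; the model directories are arbitrary names introduced here, and the number of epochs is kept small for brevity.
###Code
# Hedged sketch: compare candidate estimators by validation RMSE and keep the best.
def validation_rmse(estimator):
    metrics = estimator.evaluate(input_fn = make_input_fn(df_valid, 1))
    return np.sqrt(metrics['average_loss'])
candidates = {
    'linear': tf.estimator.LinearRegressor(
        feature_columns = make_feature_cols(), model_dir = 'taxi_linear'),
    'dnn': tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
        feature_columns = make_feature_cols(), model_dir = 'taxi_dnn')
}
for name, candidate in candidates.items():
    candidate.train(input_fn = make_input_fn(df_train, num_epochs = 10))
best = min(candidates, key = lambda name: validation_rmse(candidates[name]))
print('Best candidate by validation RMSE: {}'.format(best))
###Output
_____no_output_____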
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.5
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
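###Markdown
A note on the sampling logic in `create_query`: hashing `pickup_datetime` with `FARM_FINGERPRINT` gives every row a stable bucket, so the train/validation split is reproducible across runs. The sketch below mimics that idea in plain Python with `hashlib`; it is an analogy added for illustration, not the same hash function BigQuery uses.
###Code
# Hedged sketch: deterministic, hash-based splitting of rows into buckets.
import hashlib
def bucket(value, n_buckets = 4):
    digest = hashlib.md5(str(value).encode('utf-8')).hexdigest()
    return int(digest, 16) % n_buckets
for ts in ['2015-01-01 00:12:00', '2015-01-01 00:13:00', '2015-06-30 23:59:00']:
    split = 'train' if bucket(ts) < 2 else 'valid/test'
    print('{} -> bucket {} -> {}'.format(ts, bucket(ts), split))
###Output
_____no_output_____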
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
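###Markdown
Note that `tf.compat.v1.estimator.inputs.pandas_input_fn` comes from the TF1 compatibility layer. For comparison, here is a minimal sketch of an equivalent input function built on `tf.data`; it is an alternative shown for illustration, and the rest of the notebook keeps using the helper above.
###Code
# Hedged sketch: a tf.data-based replacement for pandas_input_fn.
def make_dataset_input_fn(df, num_epochs):
    def input_fn():
        features = {k: df[k].values for k in FEATURES}
        labels = df[LABEL].values
        dataset = tf.data.Dataset.from_tensor_slices((features, labels))
        return dataset.shuffle(1000).repeat(num_epochs).batch(128)
    return input_fn
###Output
_____no_output_____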
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
"""
base_query = """
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
"""
if EVERY_N is None:
if phase < 2:
# training
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2""".format(base_query)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}""".format(base_query, phase)
else:
query = """{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}""".format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____
###Markdown
Machine Learning using tf.estimator In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
###Code
import datalab.bigquery as bq
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
###Output
_____no_output_____
###Markdown
Read data created in the previous chapter.
###Code
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
###Output
_____no_output_____
###Markdown
Input function to read from Pandas Dataframe into tf.constant
###Code
def make_input_fn(df, num_epochs):
return tf.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
###Output
_____no_output_____
###Markdown
Create feature columns for estimator
###Code
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
###Output
_____no_output_____
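###Markdown
A quick usage sketch of the helper above, added for illustration: each entry it returns is a numeric feature column keyed to one of the input features.
###Code
# Hedged sketch: inspect the feature columns the helper builds.
for col in make_feature_cols():
    print(col.name, type(col).__name__)
###Output
_____no_output_____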
###Markdown
Linear Regression with tf.Estimator framework
###Code
tf.logging.set_verbosity(tf.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
###Output
_____no_output_____
###Markdown
Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
###Code
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
###Code
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
###Output
_____no_output_____
###Markdown
This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well. Deep Neural Network regression
###Code
tf.logging.set_verbosity(tf.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
###Output
_____no_output_____
###Markdown
We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model. Benchmark dataset Let's do this on the benchmark dataset.
###Code
import datalab.bigquery as bq
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
"""
phase: 1 = train 2 = valid
"""
base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
CONCAT(STRING(pickup_datetime), STRING(pickup_longitude), STRING(pickup_latitude), STRING(dropoff_latitude), STRING(dropoff_longitude)) AS key,
DAYOFWEEK(pickup_datetime)*1.0 AS dayofweek,
HOUR(pickup_datetime)*1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count*1.0 AS passengers,
FROM
[nyc-tlc:yellow.trips]
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
"""
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(HASH(pickup_datetime)) % 4 == {1}".format(base_query, phase)
else:
query = "{0} AND ABS(HASH(pickup_datetime)) % {1} == {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bq.Query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
###Output
_____no_output_____ |
basic projects/maxpooling_visualization.ipynb | ###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
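A small numeric sketch of that pooling operation (an addition for illustration, standing in for the original image) follows:
###Code
# Hedged sketch: 2x2 max pooling with stride 2 on a tiny 4x4 patch.
import torch
import torch.nn.functional as F
patch = torch.tensor([[ 1.,  2.,  3.,  4.],
                      [ 5.,  6.,  7.,  8.],
                      [ 9., 10., 11., 12.],
                      [13., 14., 15., 16.]]).reshape(1, 1, 4, 4)
print(F.max_pool2d(patch, kernel_size=2, stride=2).squeeze())
# tensor([[ 6.,  8.],
#         [14., 16.]]) -- only the maximum of each 2x2 block survives
###Output
_____no_output_____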
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black): for an input pixel value `x`, the output is `max(0, x)`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____ |
Kopie_von_S4_3_THOR.ipynb | ###Markdown
Photo Credits: Sea Foam by Ivan Bandura licensed under the Unsplash License >*A frequently asked question related to this work is “Which mixing processes matter most for climate?” As with many alluringly comprehensive sounding questions, the answer is “it depends.”* > $\qquad$ MacKinnon, Jennifer A., et al. $\qquad$"Climate process team on internal wave–driven ocean mixing." $\qquad$ Bulletin of the American Meteorological Society 98.11 (2017): 2429-2454. In week 4's final notebook, we will perform clustering to identify regimes in data taken from the realistic numerical ocean model [Estimating the Circulation and Climate of the Ocean](https://www.ecco-group.org/products-ECCO-V4r4.htm). Sonnewald et al. point out that finding robust regimes is intractable with a naïve approach, so we will be using reduced dimensionality data. It is worth pointing out, however, that the reduction was done with an equation instead of one of the algorithms we discussed this week. If you're interested in the full details, you can check out [Sonnewald et al. (2019)](https://doi.org/10.1029/2018EA000519) Setup First, let's import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (although Python 2.x may work, it is deprecated so we strongly recommend you use Python 3 instead), as well as Scikit-Learn ≥0.20.
###Code
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
# Common imports
import numpy as np
import os
import xarray as xr
import pooch
# to make this notebook's output stable across runs
rnd_seed = 42
rnd_gen = np.random.default_rng(rnd_seed)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "dim_reduction"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format=fig_extension, dpi=resolution)
###Output
_____no_output_____
###Markdown
Here we're going to import the [StandardScaler](https://duckduckgo.com/sklearn.preprocessing.standardscaler) class from scikit's preprocessing tools, import the [scikit clustering library](https://duckduckgo.com/sklearn.clustering), and set up the colormap that we will use when plotting.
###Code
from sklearn.preprocessing import StandardScaler
import sklearn.cluster as cluster
from matplotlib.colors import LinearSegmentedColormap, ListedColormap
colors = ['royalblue', 'cyan','yellow', 'orange', 'magenta', 'red']
mycmap = ListedColormap(colors)
###Output
_____no_output_____
###Markdown
Data Preprocessing The first thing we need to do is retrieve the list of files we'll be working on. We'll rely on pooch to access the files hosted on the cloud.
###Code
# Retrieve the files from the cloud using Pooch.
data_url = 'https://unils-my.sharepoint.com/:u:/g/personal/tom_beucler_unil_ch/EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q?download=1'
hash = '3f41661c7a087fa7d7af1d2a8baf95c065468f8a415b8514baedda2f5bc18bb5'
files = pooch.retrieve(data_url, known_hash=hash, processor=pooch.Unzip())
[print(filename) for filename in files];
###Output
Downloading data from 'https://unils-my.sharepoint.com/:u:/g/personal/tom_beucler_unil_ch/EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q?download=1' to file '/root/.cache/pooch/8a10ee1ae6941d8b9bb543c954c793fa-EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q'.
Unzipping contents of '/root/.cache/pooch/8a10ee1ae6941d8b9bb543c954c793fa-EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q' to '/root/.cache/pooch/8a10ee1ae6941d8b9bb543c954c793fa-EUYqUzpIjoJBui02QEo6q1wBSN1Zsi1ofE6I3G4B9LJn_Q.unzip'
###Markdown
And now that we have a set of files to load, let's set up a dictionary with the variable names as keys and the data in numpy array format as the values.
###Code
# Let's read in the variable names from the filepaths
var_names = []
[var_names.append(path.split('/')[-1][:-4]) for path in files]
# And build a dictionary of the data variables keyed to the filenames
data_dict = {}
for idx, val in enumerate(var_names):
data_dict[val] = np.load(files[idx]).T
#We'll print the name of the variable loaded and the associated shape
[print(f'Varname: {item[0]:<15} Shape: {item[1].shape}') for item in data_dict.items()];
###Output
Varname: curlB Shape: (360, 720)
Varname: BPT Shape: (360, 720)
Varname: curlCori Shape: (360, 720)
Varname: noiseMask Shape: (360, 720)
Varname: curlA Shape: (360, 720)
Varname: curlTau Shape: (360, 720)
###Markdown
We now have a dictionary that uses the filename as the key! Feel free to explore the data (e.g., loading the keys, checking the shape of the arrays, plotting)
###Code
#Feel free to explore the data dictionary
###Output
_____no_output_____
###Markdown
We're eventually going to have an array of cluster classes that we're going to use to label dynamic regimes in the ocean. Let's make an array full of NaN (not-a-number) values that has the same shape as our other variables and store it in the data dictionary.
###Code
data_dict['clusters'] = np.full_like(data_dict['BPT'],np.nan)
###Output
_____no_output_____
###Markdown
Reformatting as Xarray In the original paper, this data was loaded as numpy arrays. However, we'll take this opportunity to demonstrate the same procedure while relying on xarray. First, let's instantiate a blank dataset.**Q1) Make a blank xarray dataset.***Hint: Look at the xarray [documentation](https://duckduckgo.com/?q=xarray+dataset)*
###Code
ds=xr.Dataset()
###Output
_____no_output_____
###Markdown
Image taken from the xarray Data Structure documentation In order to build the dataset, we're going to need a set of coordinate vectors that help us map out our data! For our data, we have two axes corresponding to longitude ($\lambda$) and latitude ($\phi$). We don't know much about how many lat/lon points we have, so let's explore one of the variables and make sense of the shape of the underlying numpy array.**Q2) Visualize the data using a plot and print the shape of the data to the console output.**
###Code
#Complete the code
# Let's print out an image of the Bottom Pressure Torques (BPT)
plt.imshow(data_dict['BPT'], origin='lower')
# It will also be useful to store and print out the shape of the data
data_shape = data_dict['BPT'].shape
print(data_shape)
###Output
(360, 720)
###Markdown
Now that we know the resolution of our data, we can prepare a set of axis arrays. We will use these to organize the data we will feed into the dataset.**Q3) Prepare the latitude and longitude arrays to be used as axes for our dataset***Hint 1: You can build ordered numpy arrays using, e.g., [numpy.linspace](https://numpy.org/doc/stable/reference/generated/numpy.linspace.html) and [numpy.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html)**Hint 2: You can rely on the data_shape variable we loaded previously to know how many points you need along each axis*
###Code
#Complete the code
# Let's prepare the lat and lon axes for our data.
lat = np.linspace(0, 360, 360)
lon = np.linspace(0, 720, 720)
###Output
_____no_output_____
###Markdown
Now that we have the axes we need, we can build xarray [*data arrays*](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.html) for each data variable. Since we'll be doing it several times, let's go ahead and define a function that does this for us!**Q4) Define a function that takes in: 1) an array name, 2) a numpy array, 3) a lat vector, and 4) a lon vector. The function should return a dataArray with lat-lon as the coordinate dimensions**
###Code
#Complete the code
def np_to_xr(array_name, array, lat, lon):
#building the xarrray
da = xr.DataArray(data = array, # Data to be stored
#set the name of dimensions for the dataArray
dims = ['lat', 'lon'],
#Set the dictionary pointing the name dimensions to np arrays
coords = {'lat':lat,
'lon':lon},
name=array_name)
return da
###Output
_____no_output_____
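###Markdown
A quick usage sketch of the helper above, added for illustration (it reuses the `lat`, `lon`, and `data_dict` objects defined earlier):
###Code
# Hedged sketch: wrap one field with the helper and check the result.
bpt_da = np_to_xr('BPT', data_dict['BPT'], lat, lon)
print(bpt_da.dims, bpt_da.shape)
###Output
_____no_output_____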
###Markdown
We're now ready to build our dataset! Let's iterate through the items and merge our blank dataset with the data arrays we create.**Q5) Build the dataset from the data dictionary***Hint: We'll be using the xarray merge command to put everything together.*
###Code
# The code in the notebook assumes you named your dataset ds. Change it to
# whatever you used!
# Complete the code
for key, item in data_dict.items():
# Let's make use of our np_to_xr function to get the data as a dataArray
da = np_to_xr(key, item, lat, lon)
# Merge the dataSet with the dataArray here!
  ds = xr.merge([ds, da])
###Output
_____no_output_____
###Markdown
Congratulations! You should now have a nicely set up xarray dataset. This lets you access a ton of nice features, e.g.:> Data plotting by calling, e.g., `ds.BPT.plot.imshow(cmap='ocean')`> > Find statistical measures of all variables at once! (e.g.: `ds.std()`, `ds.mean()`)
###Code
# Play around with the dataset here if you'd like :)
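# A minimal exploration sketch (an addition; assumes the merged dataset `ds` from above):
ds.BPT.plot.imshow(cmap='ocean')   # quick-look map of the BPT field
print(ds.mean())                   # mean of every data variable at once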
###Output
_____no_output_____
###Markdown
Now we want to find clusters of data considering each grid point as a datapoint with 5 dimensional data. However, we went through a lot of work to get the data nicely associated with a lat and lon - do we really want to undo that?Luckily, xarray developers foresaw the need to group dimensions together. Let's create a 'flat' version of our dataset using the [`stack`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.stack.html) method.**Q6) Store a flattened version of our dataset***Hint 1: You'll need to pass a dictionary with the 'new' stacked dimension name as the key and the 'flattened' dimensions as the values.**Hint 2: xarrays have a ['.values' attribute](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) that returns their data as a numpy array.*
###Code
# Complete the code
# Let's store the stacked version of our dataset
stacked = ds.stack({'dim':['lat','lon']})
# And verify the shape of our data
print(stacked.to_array().values.shape)
###Output
(7, 259200)
###Markdown
So far we've ignored an important point - we're supposed to have 5 variables, not 6! As you may have guessed, `noiseMask` helps us throw away data we don't want (e.g., from land mass or bad pixels). We're now going to clean up the stacked dataset using the noise mask. Relax and read through the code, since there won't be a question in this part :)
###Code
# Let's redefine stacked as all the points where noiseMask = 1, since noisemask
# is binary data.
print(f'Dataset shape before processing: {stacked.to_array().values.shape}')
print("Let's do some data cleaning!")
print(f'Points before cleaning: {len(stacked.BPT)}')
stacked = stacked.where(stacked.noiseMask==1, drop=True)
print(f'Points after cleaning: {len(stacked.BPT)}')
# We also no longer need the noiseMask variable, so we can just drop it.
print('And drop the noisemask variable...')
print(f'Before dropping: {stacked.to_array().values.shape}')
stacked = stacked.drop('noiseMask')
print(f'Dataset shape after processing: {stacked.to_array().values.shape}')
###Output
And drop the noisemask variable...
Before dropping: (7, 149714)
Dataset shape after processing: (6, 149714)
###Markdown
We now have roughly 150,000 points which we want to divide into clusters using the kmeans clustering algorithm (you can check out the documentation for scikit's implementation of kmeans [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html)).You'll note that the algorithm expects the input data `X` to be fed as `(n_samples, n_features)`. This is the opposite of what we have! Let's go ahead and make a copy as a numpy array that has the axes in the right order.You'll need xarray's [`.to_array()`](https://xarray.pydata.org/en/stable/generated/xarray.Dataset.to_array.html) method and [`.values`](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.values.html) parameter, as well as numpy's [`.moveaxis`](https://numpy.org/doc/stable/reference/generated/numpy.moveaxis.html) method.**Q7) Load the datapoints into a numpy array following the convention where the 0th axis corresponds to the samples and the 1st axis corresponds to the features.**
###Code
# Complete the code
input_data = np.moveaxis(stacked.to_array().values, # data to reshape
0, # source axis as integer,
1) # destination axis as integer
# Does the input data look the way it's supposed to? Print the shape.
print(input_data.shape)
###Output
(149714, 6)
###Markdown
In previous classes we discussed the importance of scaling the data before implementing our algorithms. Now that our data is all but ready to be fed into an algorithm, let's make sure that it's been scaled.**Q8) Scale the input data***Hint 1: Import the [`StandardScaler`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html) class from scikit and instantiate it**Hint 2: Update the input array to the one returned by the [`.fit_transform(X)`](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler.fit_transform) method*
###Code
from sklearn.preprocessing import StandardScaler
SCL = StandardScaler()
X = SCL.fit_transform(input_data)
X.shape
###Output
/usr/local/lib/python3.7/dist-packages/sklearn/utils/extmath.py:985: RuntimeWarning: invalid value encountered in true_divide
updated_mean = (last_sum + new_sum) / updated_sample_count
/usr/local/lib/python3.7/dist-packages/sklearn/utils/extmath.py:990: RuntimeWarning: invalid value encountered in true_divide
T = new_sum / new_sample_count
/usr/local/lib/python3.7/dist-packages/sklearn/utils/extmath.py:1020: RuntimeWarning: invalid value encountered in true_divide
new_unnormalized_variance -= correction ** 2 / new_sample_count
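###Markdown
The RuntimeWarnings above are triggered by the all-NaN `clusters` placeholder column that is still part of the stacked dataset at this point, for which the scaler has no valid samples. Below is a minimal optional sketch, added for illustration, that scales only the five physical variables by dropping the placeholder first.
###Code
# Hedged sketch: drop the NaN 'clusters' placeholder before scaling so the
# scaler only sees the five physical variables (avoids the warnings above).
X_physical = SCL.fit_transform(np.moveaxis(stacked.drop('clusters').to_array().values, 0, 1))
print(X_physical.shape)
###Output
_____no_output_____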
###Markdown
Now we're finally ready to train our algorithm! Let's load up the kmeans model and find clusters in our data.**Q9) Instantiate the kmeans clustering algorithm, and then fit it using 50 clusters, trying out 10 different initial centroids.***Hint 1: `sklearn.cluster` was imported as `cluser` during the notebook setup! [Here is the scikit `KMeans` documentation](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html).**Hint 2: Use the `fit_predict` method to organize the data into clusters**Warning! : Fitting the data may take some time (under a minute during the testing of the notebook)
###Code
# Complete the code
kmeans = cluster.KMeans(n_clusters=50, # Number of clusters
random_state =42, # setting a random state
n_init =10, # Number of initial centroid states to try
verbose = 1) # Verbosity so we know things are working
cluster_labels = kmeans.fit_predict(X[:,0:-1]) # Feed in out scaled input data!
###Output
Initialization complete
Iteration 0, inertia 178083.89703048766
Iteration 1, inertia 159390.21041351298
Iteration 2, inertia 154958.53378937
Iteration 3, inertia 153364.6906988097
Iteration 4, inertia 152324.72153485252
Iteration 5, inertia 151710.9167395463
Iteration 6, inertia 151215.18572026718
Iteration 7, inertia 150798.56641029703
Iteration 8, inertia 150503.84122090333
Iteration 9, inertia 150281.32586012976
Iteration 10, inertia 150105.95237949066
Iteration 11, inertia 149943.23463400244
Iteration 12, inertia 149800.1544897156
Iteration 13, inertia 149688.51613387442
Iteration 14, inertia 149583.5026197455
Iteration 15, inertia 149483.2763791494
Iteration 16, inertia 149393.45781591441
Iteration 17, inertia 149312.77247441385
Iteration 18, inertia 149250.21704388165
Iteration 19, inertia 149190.28168967392
Iteration 20, inertia 149124.99589589806
Iteration 21, inertia 149022.5203657826
Iteration 22, inertia 148931.91982966047
Iteration 23, inertia 148881.302942448
Iteration 24, inertia 148816.3560276227
Iteration 25, inertia 148776.7611786304
Iteration 26, inertia 148741.60861700718
Iteration 27, inertia 148719.2617312301
Iteration 28, inertia 148703.93326699806
Iteration 29, inertia 148692.22729303813
Iteration 30, inertia 148682.64309787488
Iteration 31, inertia 148672.61712235268
Iteration 32, inertia 148663.90789397885
Iteration 33, inertia 148641.6834777884
Iteration 34, inertia 148634.2919559529
Iteration 35, inertia 148624.43161181136
Iteration 36, inertia 148613.84541832458
Iteration 37, inertia 148603.78331589495
Iteration 38, inertia 148595.4238670601
Iteration 39, inertia 148587.17845988573
Iteration 40, inertia 148578.11542856574
Iteration 41, inertia 148567.90076078434
Iteration 42, inertia 148559.02322317427
Iteration 43, inertia 148550.21131428555
Iteration 44, inertia 148542.2734119076
Iteration 45, inertia 148533.63291469176
Iteration 46, inertia 148525.56020105616
Iteration 47, inertia 148517.9865819187
Iteration 48, inertia 148510.0633317366
Iteration 49, inertia 148502.99309017576
Iteration 50, inertia 148495.897611984
Iteration 51, inertia 148489.15066630984
Iteration 52, inertia 148482.56978448623
Iteration 53, inertia 148476.10945277987
Iteration 54, inertia 148469.46459661316
Iteration 55, inertia 148462.9645218944
Iteration 56, inertia 148454.6000770288
Iteration 57, inertia 148444.54231196654
Iteration 58, inertia 148431.81636241858
Iteration 59, inertia 148417.13183848833
Iteration 60, inertia 148401.9373189272
Iteration 61, inertia 148387.98604647382
Iteration 62, inertia 148375.4922147618
Iteration 63, inertia 148362.86434359473
Iteration 64, inertia 148351.5387655549
Iteration 65, inertia 148340.15065029144
Iteration 66, inertia 148329.72044806438
Iteration 67, inertia 148319.5655088859
Iteration 68, inertia 148309.68430809246
Iteration 69, inertia 148299.6153246931
Iteration 70, inertia 148289.6810514759
Iteration 71, inertia 148280.18493545937
Iteration 72, inertia 148271.06335637852
Iteration 73, inertia 148262.59289688684
Iteration 74, inertia 148253.89500313986
Iteration 75, inertia 148243.92435065997
Iteration 76, inertia 148231.93149678136
Iteration 77, inertia 148216.69788334065
Iteration 78, inertia 148197.65119578777
Iteration 79, inertia 148176.75768602142
Iteration 80, inertia 148154.56486051448
Iteration 81, inertia 148126.6339786329
Iteration 82, inertia 148097.08300035645
Iteration 83, inertia 148061.59341856794
Iteration 84, inertia 148024.87110692833
Iteration 85, inertia 147985.60165029758
Iteration 86, inertia 147937.907051183
Iteration 87, inertia 147890.2161698339
Iteration 88, inertia 147845.41694965406
Iteration 89, inertia 147806.36924628535
Iteration 90, inertia 147772.94074131484
Iteration 91, inertia 147747.35096398732
Iteration 92, inertia 147722.62360386126
Iteration 93, inertia 147704.2831620844
Iteration 94, inertia 147688.70510074834
Iteration 95, inertia 147675.62169597694
Iteration 96, inertia 147666.89343500722
Iteration 97, inertia 147659.68127761278
Iteration 98, inertia 147653.05726236757
Iteration 99, inertia 147647.98655506698
Iteration 100, inertia 147643.5871739356
Iteration 101, inertia 147639.98521748587
Iteration 102, inertia 147637.1888916474
Iteration 103, inertia 147634.95331611775
Iteration 104, inertia 147632.68080944207
Iteration 105, inertia 147630.48465640712
Iteration 106, inertia 147628.88832703052
Iteration 107, inertia 147627.82103774085
Iteration 108, inertia 147626.03212096027
Iteration 109, inertia 147624.5969392516
Iteration 110, inertia 147622.19374827726
Iteration 111, inertia 147618.90279747295
Iteration 112, inertia 147616.69746609853
Iteration 113, inertia 147615.28791851387
Iteration 114, inertia 147614.04936196195
Iteration 115, inertia 147610.2063622774
Iteration 116, inertia 147607.31556259797
Iteration 117, inertia 147603.82023048148
Iteration 118, inertia 147600.9957105015
Iteration 119, inertia 147599.53322249354
Iteration 120, inertia 147599.16002300274
Iteration 121, inertia 147599.0735864094
Converged at iteration 121: center shift 9.484554547794578e-05 within tolerance 0.00010000000000000047.
Initialization complete
Iteration 0, inertia 182194.85935363895
Iteration 1, inertia 163072.09672366185
Iteration 2, inertia 157922.92460970973
Iteration 3, inertia 155087.57073658568
Iteration 4, inertia 153449.09073747596
Iteration 5, inertia 152511.71792891365
Iteration 6, inertia 151956.7557053373
Iteration 7, inertia 151550.03999003986
Iteration 8, inertia 151226.67911664105
Iteration 9, inertia 150989.59081743716
Iteration 10, inertia 150800.30254658952
Iteration 11, inertia 150683.06487617453
Iteration 12, inertia 150580.19960186174
Iteration 13, inertia 150493.94095350633
Iteration 14, inertia 150430.27623244657
Iteration 15, inertia 150359.73417643263
Iteration 16, inertia 150287.4021281145
Iteration 17, inertia 150216.7420708682
Iteration 18, inertia 150149.51181872998
Iteration 19, inertia 150070.9868726355
Iteration 20, inertia 149975.20597457548
Iteration 21, inertia 149879.29559478816
Iteration 22, inertia 149823.40550624102
Iteration 23, inertia 149771.09231153037
Iteration 24, inertia 149684.79154302136
Iteration 25, inertia 149570.0657386692
Iteration 26, inertia 149460.56204667824
Iteration 27, inertia 149335.40534527414
Iteration 28, inertia 149163.76808644296
Iteration 29, inertia 148977.72605615773
Iteration 30, inertia 148827.11857174544
Iteration 31, inertia 148626.36103426604
Iteration 32, inertia 148458.34004532034
Iteration 33, inertia 148365.20836735753
Iteration 34, inertia 148281.14294266154
Iteration 35, inertia 148236.95578851135
Iteration 36, inertia 148190.32601486612
Iteration 37, inertia 148133.45806787512
Iteration 38, inertia 148109.522177055
Iteration 39, inertia 148084.6117238783
Iteration 40, inertia 148066.62586208025
Iteration 41, inertia 148050.73046091796
Iteration 42, inertia 148040.782380672
Iteration 43, inertia 148030.56055888082
Iteration 44, inertia 148022.2295031826
Iteration 45, inertia 148014.60644565892
Iteration 46, inertia 148006.60793378303
Iteration 47, inertia 147998.18618259128
Iteration 48, inertia 147989.43262930412
Iteration 49, inertia 147980.8455582714
Iteration 50, inertia 147969.04627274355
Iteration 51, inertia 147958.5147098747
Iteration 52, inertia 147945.90234608183
Iteration 53, inertia 147935.84566804435
Iteration 54, inertia 147926.36348463967
Iteration 55, inertia 147915.8595485374
Iteration 56, inertia 147907.84816952804
Iteration 57, inertia 147900.33848430598
Iteration 58, inertia 147892.1876670499
Iteration 59, inertia 147883.5717384335
Iteration 60, inertia 147875.36024308522
Iteration 61, inertia 147867.1771815683
Iteration 62, inertia 147859.82542418502
Iteration 63, inertia 147854.38249107424
Iteration 64, inertia 147849.96644224727
Iteration 65, inertia 147843.99828708963
Iteration 66, inertia 147837.58249324615
Iteration 67, inertia 147827.9585780491
Iteration 68, inertia 147813.78510491145
Iteration 69, inertia 147797.64788743103
Iteration 70, inertia 147783.69500360402
Iteration 71, inertia 147777.8713374529
Iteration 72, inertia 147773.8726589075
Iteration 73, inertia 147770.75807367556
Iteration 74, inertia 147768.0452332358
Iteration 75, inertia 147765.312231377
Iteration 76, inertia 147762.95425450837
Iteration 77, inertia 147761.10107982362
Iteration 78, inertia 147760.10599815467
Iteration 79, inertia 147759.65238589828
Iteration 80, inertia 147759.43209691427
Iteration 81, inertia 147759.14699653472
Iteration 82, inertia 147758.9018791755
Converged at iteration 82: center shift 4.334167224259372e-05 within tolerance 0.00010000000000000047.
Initialization complete
Iteration 0, inertia 181231.81431616278
Iteration 1, inertia 159316.65449611598
Iteration 2, inertia 154453.77138570847
Iteration 3, inertia 152376.35994652528
Iteration 4, inertia 151169.40704567585
Iteration 5, inertia 150418.97833418852
Iteration 6, inertia 149946.15250986803
Iteration 7, inertia 149677.581455114
Iteration 8, inertia 149471.65225443197
Iteration 9, inertia 149270.9343093569
Iteration 10, inertia 149076.73436677846
Iteration 11, inertia 148904.312097972
Iteration 12, inertia 148720.88099858077
Iteration 13, inertia 148551.16272716864
Iteration 14, inertia 148396.8080555
Iteration 15, inertia 148245.04759308376
Iteration 16, inertia 148097.45604670743
Iteration 17, inertia 147960.9852011893
Iteration 18, inertia 147833.8908557032
Iteration 19, inertia 147690.37469515036
Iteration 20, inertia 147556.29402672785
Iteration 21, inertia 147460.0974799695
Iteration 22, inertia 147365.40164283404
Iteration 23, inertia 147279.90944257082
Iteration 24, inertia 147213.9805861643
Iteration 25, inertia 147166.90991390872
Iteration 26, inertia 147124.77600084274
Iteration 27, inertia 147094.97613269178
Iteration 28, inertia 147070.1003058213
Iteration 29, inertia 147040.01028519234
Converged at iteration 114: center shift 8.593944593245745e-06 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 42: center shift 8.811238492159326e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 50: center shift 8.126448060966502e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 38: center shift 9.679640726371067e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 96: center shift 2.4373817891247376e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 67: center shift 1.5870451582887602e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 55: center shift 6.391134281399028e-05 within tolerance 0.00010000000000000047.
Initialization complete
Converged at iteration 66: center shift 6.339320426859331e-05 within tolerance 0.00010000000000000047.
###Markdown
We now have a set of cluster labels that group the data into 50 similar groups. Let's store it in our stacked dataset!
###Code
# Let's run this line
stacked['clusters'].values = cluster_labels
###Output
_____no_output_____
###Markdown
We now have a set of labels, but they're stored in a flattened array. Since we'd like to see the data as a map, we still have some work to do. Let's go back to a 2D representation of our values.**Q10) Turn the flattened xarray back into a set of 2D fields***Hint*: xarrays have an [`.unstack` method](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.unstack.html) that you will find to be very useful for this.
###Code
# Complete the code
processed_ds = stacked.unstack()
###Output
_____no_output_____
###Markdown
Now we have an unstacked dataset, and can now easily plot out the clusters we found!**Q11) Plot the 'cluster' variable using the buil-in xarray function***Hint: `.plot()` [link text](https://xarray.pydata.org/en/stable/generated/xarray.DataArray.plot.html) let's you access the xarray implementations of [`pcolormesh`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.pcolormesh.html) and [`imshow`](https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.pyplot.imshow.html).*
###Code
xr.plot.pcolormesh(processed_ds['clusters'], figsize = (15,7), cmap = 'jet')
###Output
_____no_output_____
###Markdown
Compare your results to those from the paper. We now want to find the 5 most common regimes and group the rest. This isn't straightforward, so we've gone ahead and prepared the code for you. Run through it and try to understand what the code is doing!
###Code
# Make field filled with -1 vals so unprocessed points are easily retrieved.
# Noise masked applied automatically by using previously found labels as base.
processed_ds['final_clusters'] = (processed_ds.clusters * 0) - 1
# Find the 5 most common cluster labels
top_clusters = processed_ds.groupby('clusters').count().sortby('BPT').tail(5).clusters.values
#Build the set of indices for the cluster data, used for rewriting cluster labels
for idx, label in enumerate(top_clusters):
#Find the indices where the label is found
indices = (processed_ds.clusters == label)
processed_ds['final_clusters'].values[indices] = 4-idx
# Set the remaining unlabeled regions to category 5 "non-linear"
processed_ds['final_clusters'].values[processed_ds.final_clusters==-1] = 5
# Plot the figure
processed_ds.final_clusters.plot.imshow(cmap=mycmap, figsize=(18,8));
# Feel free to use this space
###Output
_____no_output_____ |
PHYS201/Lab5.ipynb | ###Markdown
Experiment 5: Air Track III---Conservation of Energy Objectives- To learn to take good measurements- To learn how to estimate and propagate errors- To learn how to take measurements to verify a theory- To learn how to measure potential and kinetic energies Equipment- One Airtrack, Blower, and Cart- One accessory kit containing: one pulley, one mass hanger, and masses for cart and hanger.- One string (150 cm long)- One photogate- One 30 cm ruler- One scale- One vernier caliper- One two-meter stick- One 3" x 5/32" rod- One 6" x 5/32" rod- One 6 mm x 44 mm spring- One flat washer- One 2x4- One 1x4- One set of slotted weights (eight 100 g and four 50 g) Safety- **Be careful placing the carts on the track. Do not damage the track.**- Please place paper underneath the carts when they are resting on the track without the air turned on. **Do not slide the carts on the track without the air turned on.**- **Do not launch the cart unless a rubber-band bumper is secured to the opposite end of the track.** IntroductionThere are two kinds of energy in mechanical systems: Potential energy and Kinetic energy. *Potential* energy is stored energy. When this energy is released, it can be converted into *kinetic* energy---energy of motion. In this lab you will conduct two experiments to determine if all of the potential energy stored in a system can be converted into kinetic energy.In the first experiment, you will convert gravitational potential energy to kinetic energy (using the hanging mass and pulley). In the second, you will convert spring potential energy to kinetic energy. Theory KinematicsFirst and foremost, you will need to compute the velocity of the airtrack glider. The information you will have, however, is velocity computed by a photogate. The photogate measures the velocity of the glider as it passes through by timing how long the infrared LED is blocked by the object (sometimes called a "flag") passing through and uses the definition of velocity: \begin{equation}v \equiv \frac{\Delta x}{\Delta t}\tag{1}\end{equation}where $\Delta x$ is the length of the flag and $\Delta t$ is the time the LED was blocked. Gravitational Potential EnergyRecall that the amount of energy stored in an object in a gravitational field is \begin{equation}U_{g} = m g \Delta y\tag{2}\end{equation} where *m* is the mass of the object, *g* is the acceleration due to gravity, and $ \Delta y$ is the change in height of that object. Spring Potential EnergyThe amount of energy stored in a compressed spring is \begin{equation}U_{s} = \frac{1}{2} k (\Delta x)^2\tag{3}\end{equation} where *k* is the spring constant. You will need to measure this spring constant for your spring. You can find this constant by recalling Hooke's Law: \begin{equation}F = -k \Delta x\tag{4}\end{equation} The force you apply to the spring is directly related to the compression, $ \Delta x$, of that spring (the negative sign is a reminder that the force opposes the spring compression). By plotting the force vs. compression for different weights, you should obtain a straight line. The slope of that straight line will be the spring constant (if you have plotted the correct variable on the correct axis---recall the definition of the slope of a straight line and you'll figure it out). Kinetic EnergyKinetic energy is the energy of motion. For objects moving much slower than the speed of light, we can use the formula: \begin{equation}K = \frac{1}{2} m v^2\tag{5}\end{equation} where *m* is the mass of all the objects moving at speed *v*.
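If all of the stored potential energy ends up as kinetic energy, Equations (2), (3), and (5) combine into ideal-case predictions for the two experiments below (writing $m_{c}$ for the cart mass, $m_{h}$ for the hanging mass, and $x_{s}$ for the spring compression; these are offered as a rough guide, not a substitute for your own derivation): \begin{equation}m_{h} g \Delta y = \frac{1}{2}(m_{c}+m_{h})v_{f}^{2} \quad\Rightarrow\quad v_{f} = \sqrt{\frac{2 m_{h} g \Delta y}{m_{c}+m_{h}}}\end{equation} \begin{equation}\frac{1}{2} k x_{s}^{2} = \frac{1}{2} m_{c} v_{f}^{2} \quad\Rightarrow\quad v_{f} = x_{s}\sqrt{\frac{k}{m_{c}}}\end{equation}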
Uncertainty AnalysisYour Lab Manual and the previous labs can guide you in estimating the uncertainties in your measurements and propagating those uncertainties into your computed energies. The uncertainty in the masses of the glider and hanging mass will be determined by the accuracy of the scale. In some cases, it might be easier to use the standard deviation of a large number of measurements as your velocity measurement uncertainty. The uncertainty in your spring constant and kinetic and potential energies, however, will have to be computed using the three equations in your Lab Manual.In order to minimize random errors, it is extremely important that each of your measurements be performed several times. Experimental Procedure Setting up the Air TrackEnsure that the pulley is securely inserted into the top hole in the bracket at the far end of the track, that it spins freely, and that the hanging mass does not strike the table as it falls. You will need to level the airtrack for two of the experiments. Look back to the second lab for instructions if you've forgotten how to do this. Setting up a Photogate1. Turn on the PASCO 850 Interface and start the PASCO Capstone software.2. Plug a photogate into a Digital input.3. Click the "Hardware Setup" tab in the left "Tools" palette, left-click the jack on the diagram where you inserted the plug, and select "Photogate" from the drop-down menu.4. You should see a tab labeled "Timer Setup." Open that tab and set up a pre-configured timer: 1. Select the photogate you just installed. 2. You will be using this photogate with a single flag. 3. The computer need only keep track of the speed through the gate. 4. A text box requesting the length of the flag in meters will appear. Measure the flag as best you can and enter that information in the box. 5. Give this sensor a name such as "Photogate 1" or similar.
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from P201_Functions import *
# Flag length measurement (m) and its uncertainty
flag_len = 0.1
delta_x = 0.005
###Output
_____no_output_____
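###Markdown
As a quick sketch of the uncertainty propagation described above, the flag length and its uncertainty from the previous cell can be pushed through Equation (1); the gate time and timing uncertainty used here are illustrative placeholders, not measured values.
###Code
# Hedged sketch: propagate uncertainty through v = flag_len / dt (Equation 1).
# dt and delta_t are illustrative placeholders, NOT measured values.
dt = 0.125        # example gate time (s) -- placeholder
delta_t = 0.0005  # example timing uncertainty (s) -- placeholder
v = flag_len / dt
# Relative uncertainties of a quotient add in quadrature
delta_v = v * np.sqrt((delta_x / flag_len)**2 + (delta_t / dt)**2)
print("v = %0.3f m/s, δv = %0.3f m/s" % (v, delta_v))
###Output
_____no_output_____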
###Markdown
When you adjust the height of the photogate, make sure that it is triggered only by the flag. The string should not trigger the photogate. You can tell when the photogate is triggered by looking for the red LED to light when the infrared LED is blocked. Ensure that the red LED only lights when the proper portion of the cart passes through.When choosing the positions for your photogate, think carefully about whether you want the cart to be coasting through the gate or experiencing a force as it travels through the gate. Also keep in mind that the airtrack only *reduces* friction; it does not eliminate it. Measuring the Spring ConstantIt will be easiest to measure the compression of the spring while it is on a long rod. This will keep the spring from bending while weights are applied to the spring. Place one end of the rod on the table, then place the spring on the rod, and then place a washer on top of the spring. You should then be able to measure the compression of the spring as a function of the masses you apply. A series of ten 100 g weights should give you a good graph. (It might be instructive to start with ten 10 g weights before applying the remaining nine 100 g weights to see if the spring constant is truly linear.) **Don't forget to include the mass of the washer and do not exceed 1.1 kg of mass on the spring!**(Ask your instructor how to remove and use one of the end-brackets of the air-track to support the bottom portion of the rod if that would help make your compression measurements easier.)You must use PASCO Capstone to create the graph of "Weight vs. Compression": 1. Start the Capstone program and click "Table & Graph" in the main window.2. In the first table column, click `` and then select "Create New" "User-Entered Data" in the cascading menus.3. Rename the column either "Weight" or "Measured Compression" when the "User Data 1/2" title is highlighted in blue and put the appropriate units ("N" or "m").4. In the graph, you can then click on the `` button on each axis and select the data you would like on that axis.**For reasons you will soon discover, plot the *compression* on the x-axis.** You will need to print the graphs you produce for each team member. **Be sure to properly label your graphs!**
###Code
# Weight vs. Compression Graph
# Reads the name of the csv file and gets the data
df = pd.read_csv("./ExampleFiles/Spring Constant.csv")
# Prints information about the file
#df.info()
print(df)
print()
# Defines the x and y values
weight = df.filter(['Weight (N)'], axis=1).dropna()
compr = df.filter(['Compression (m)'], axis=1).dropna()
# Create a figure of reasonable size and resolution, white background, black edge color
fig=plt.figure(figsize=(7,5), dpi= 100, facecolor='w', edgecolor='k')
# Gets the data values for x and y
x_data = compr.values.reshape(-1, 1)
y_data = weight.values.reshape(-1, 1)
xi = df['Compression (m)'].to_numpy()
yi = df['Weight (N)'].to_numpy()
# Creates the base plot with titles
plt.plot(x_data,y_data,'b.',label='Raw Data')
plt.ylabel('Weight (N)')
plt.xlabel('Measured Compression (m)')
plt.title('Weight vs. Compression')
# Takes the x and y values to make a trendline
#intercept, slope = linear_fit_plot(x_data,y_data)
intercept, slope, dintercept, dslope = linear_fit_plot_errors(xi,yi,0.006,0.024)
# Adds the legend to the plot
plt.legend()
# Displays the plot
plt.show()
print()
###Output
Compression (m) Weight (N)
0 0.006 1.984
1 0.012 3.946
2 0.019 5.908
3 0.024 7.870
Linear Fit: Coefficients (from curve_fit)
[4.03854511e-02 3.20433740e+02]
Linear Fit: Covariance Matrix (from curve_fit)
[[ 5.39396725e-02 -2.94567762e+00]
[-2.94567762e+00 1.93159241e+02]]
Linear Fit: Final Result: y = (320.43374 +/- 13.89817) x + (0.04039 +/- 0.23225)
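###Markdown
With weight on the vertical axis and compression on the horizontal axis, Hooke's Law (Equation 4) says the fitted slope *is* the spring constant. A minimal sketch of reading it off the fit above, assuming `slope` and `dslope` from the previous cell are still in scope:
###Code
# Hedged sketch: the fitted slope of weight vs. compression is k (Hooke's Law).
k = slope         # spring constant (N/m) from the linear fit above
delta_k = dslope  # uncertainty in the spring constant (N/m)
print("k = %0.1f N/m, δk = %0.1f N/m" % (k, delta_k))
###Output
_____no_output_____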
###Markdown
Experiment 1: Gravitational Potential Energy vs. Kinetic EnergyFor this experiment, repeat the setup from the last lab. You will be comparing the initial potential energy of the mass hanger to **the sum of** the final kinetic energies of both the cart and the mass hanger (the instant before the hanger strikes the ground). Do four trials (for statistics) for four different combinations of masses (the hanger mass must change each time).Where should the photogate be placed for this experiment?
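One way to estimate a predicted final speed for the cells below is the ideal, frictionless energy balance from the Theory section; the numbers in this sketch simply reuse the Cart 1 values with a nominal value of $g$, so treat it as an illustration rather than the official calculation.
###Code
# Hedged sketch (illustration only): ideal-case prediction from
# m_hang * g * h = 0.5 * (m_cart + m_hang) * v_f**2, neglecting friction and the pulley.
g = 9.81                                              # m/s^2, assumed nominal value
m_cart_ex, m_hang_ex, h_ex = 0.18995, 0.01175, 0.952  # example values from the Cart 1 cell below
v_f_pred = np.sqrt(2 * m_hang_ex * g * h_ex / (m_cart_ex + m_hang_ex))
print("ideal-case predicted v_f = %0.3f m/s" % v_f_pred)
###Output
_____no_output_____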
###Code
# Experiment 1 Raw Data
# Cart 1
# Create an empty numpy array to hold the raw data
raw_data_1 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_1[0][0]=1
raw_data_1[1][0]=2
raw_data_1[2][0]=3
raw_data_1[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df1 = pd.DataFrame(raw_data_1, columns=["Trial",
"Measured v_f (m/s)"])
df1['Trial'] = df1['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Mass of hanging mass (kg) and its uncertainty
m_hang = 0.01175
delta_m_hang = 5e-06
# Drop height of the hanging mass (m) and its uncertainty
h = 0.952
delta_h = 0.0005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 1.028
delta_v_p = 0.0007081
# Enter the measured values of v_f (m/s)
df1['Measured v_f (m/s)'] = [0.98,0.98,0.97,0.99]
###########################################
# Calculates the total mass (cart plus hanger) and its uncertainty
m_total = m_cart + m_hang
delta_m_total = np.sqrt(2) * delta_m_cart
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df1['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("m_hang = %0.5f kg ๐ฟm_hang = %g" % (m_hang,delta_m_hang))
print("h = %0.3f m ๐ฟh = %0.4f" % (h,delta_h))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 1")
display(df1)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
m_hang = 0.01175 kg δm_hang = 5e-06
h = 0.952 m δh = 0.0005
Predicted v_f = 1.02800 m/s δv_p_f = 0.0007
Cart 1
###Markdown
***
###Code
# Experiment 1 Raw Data
# Cart 2
# Create an empty numpy array to hold the raw data
raw_data_2 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_2[0][0]=1
raw_data_2[1][0]=2
raw_data_2[2][0]=3
raw_data_2[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df2 = pd.DataFrame(raw_data_2, columns=["Trial",
"Measured v_f (m/s)"])
df2['Trial'] = df2['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Mass of hanging mass (kg) and its uncertainty
m_hang = 0.01665
delta_m_hang = 5e-06
# Drop height of the hanging mass (m) and its uncertainty
h = 0.952
delta_h = 0.0005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 1.209
delta_v_p = 0.0007081
# Enter the measured values of v_f (m/s)
df2['Measured v_f (m/s)'] = [1.17,1.18,1.17,1.16]
###########################################
# Calculates the total mass (cart plus hanger) and its uncertainty
m_total = m_cart + m_hang
delta_m_total = np.sqrt(2) * delta_m_cart
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df2['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("m_hang = %0.5f kg ๐ฟm_hang = %g" % (m_hang,delta_m_hang))
print("h = %0.3f m ๐ฟh = %0.4f" % (h,delta_h))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 2")
display(df2)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
m_hang = 0.01665 kg δm_hang = 5e-06
h = 0.952 m δh = 0.0005
Predicted v_f = 1.20900 m/s δv_p_f = 0.0007
Cart 2
###Markdown
***
###Code
# Experiment 1 Raw Data
# Cart 3
# Create an empty numpy array to hold the raw data
raw_data_3 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_3[0][0]=1
raw_data_3[1][0]=2
raw_data_3[2][0]=3
raw_data_3[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df3 = pd.DataFrame(raw_data_3, columns=["Trial",
"Measured v_f (m/s)"])
df3['Trial'] = df3['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Mass of hanging mass (kg) and its uncertainty
m_hang = 0.00965
delta_m_hang = 5e-06
# Drop height of the hanging mass (m) and its uncertainty
h = 0.820
delta_h = 0.0005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 0.9367
delta_v_p = 0.0007081
# Enter the measured values of v_f (m/s)
df3['Measured v_f (m/s)'] = [0.83,0.85,0.82,0.83]
###########################################
# Calculates the total mass (cart plus hanger) and its uncertainty
m_total = m_cart + m_hang
delta_m_total = np.sqrt(2) * delta_m_cart
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df3['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("m_hang = %0.5f kg ๐ฟm_hang = %g" % (m_hang,delta_m_hang))
print("h = %0.3f m ๐ฟh = %0.4f" % (h,delta_h))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 3")
display(df3)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
m_hang = 0.00965 kg δm_hang = 5e-06
h = 0.820 m δh = 0.0005
Predicted v_f = 0.93670 m/s δv_p_f = 0.0007
Cart 3
###Markdown
***
###Code
# Experiment 1 Raw Data
# Cart 4
# Create an empty numpy array to hold the raw data
raw_data_4 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_4[0][0]=1
raw_data_4[1][0]=2
raw_data_4[2][0]=3
raw_data_4[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df4 = pd.DataFrame(raw_data_4, columns=["Trial",
"Measured v_f (m/s)"])
df4['Trial'] = df4['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Mass of hanging mass (kg) and its uncertainty
m_hang = 0.01000
delta_m_hang = 5e-06
# Drop height of the hanging mass (m) and its uncertainty
h = 0.95
delta_h = 0.0005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 0.9666
delta_v_p = 0.0007081
# Enter the measured values of v_f (m/s)
df4['Measured v_f (m/s)'] = [0.93,0.93,0.92,0.94]
###########################################
# Calculates the total mass (cart plus hanger) and its uncertainty
m_total = m_cart + m_hang
delta_m_total = np.sqrt(2) * delta_m_cart
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df4['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("m_hang = %0.5f kg ๐ฟm_hang = %g" % (m_hang,delta_m_hang))
print("h = %0.3f m ๐ฟh = %0.4f" % (h,delta_h))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 4")
display(df4)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
m_hang = 0.01000 kg δm_hang = 5e-06
h = 0.950 m δh = 0.0005
Predicted v_f = 0.96660 m/s δv_p_f = 0.0007
Cart 4
###Markdown
***
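The next cell reads a small CSV of initial potential energies and final kinetic energies for the four mass combinations. As a hedged sketch of how one such pair follows from Equations (2) and (5), here are the Cart 1 numbers from above pushed through those formulas (purely an illustration; your own CSV values come from your measurements):
###Code
# Hedged sketch: one (Ug, Kf) pair from Equations (2) and (5), using Cart 1 as an example.
g = 9.81                                    # m/s^2, assumed nominal value
m_c, m_h, h_drop = 0.18995, 0.01175, 0.952  # example Cart 1 values from above
v_mean = np.mean([0.98, 0.98, 0.97, 0.99])  # mean measured final speed (m/s)
Ug = m_h * g * h_drop                       # initial gravitational PE of the hanger (J)
Kf = 0.5 * (m_c + m_h) * v_mean**2          # final KE of cart + hanger (J)
print("Ug = %0.4f J, Kf = %0.4f J" % (Ug, Kf))
###Output
_____no_output_____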
###Code
# Initial Potential Energy vs. Final Kinetic Energy Plot
# Reads the name of the csv file and gets the data
df = pd.read_csv("./ExampleFiles/Lab 5 Graphs - PEi vs KEf.csv")
# Prints information about the file
#df.info()
print(df)
print()
# Defines the x and y values
pe = df.filter(['Ug (J)'], axis=1).dropna()
ke = df.filter(['Kf (J)'], axis=1).dropna()
# Create a figure of reasonable size and resolution, white background, black edge color
fig=plt.figure(figsize=(7,5), dpi= 100, facecolor='w', edgecolor='k')
# Gets the data values for x and y
x_data = pe.values.reshape(-1, 1)
y_data = ke.values.reshape(-1, 1)
xi = df['Ug (J)'].to_numpy()
yi = df['Kf (J)'].to_numpy()
# Creates the base plot with titles
plt.plot(x_data,y_data,'b.',label='Raw Data')
#plt.errorbar(x_data,y_data,Delta_x,Delta_t,'b.',label='Raw Data')
plt.ylabel('Kinetic Energy (J)')
plt.xlabel('Gravitational Potential Energy(J)')
plt.title('Initial Potential Energy vs. Final Kinetic Energy')
# Takes the x and y values to make a trendline
intercept, slope, dintercept, dslope = linear_fit_plot_errors(xi,yi,0.07763,0.15120)
# Adds the legend to the plot
plt.legend()
# Displays the plot
plt.show()
print("")
###Output
Ug (J) Kf (J)
0 0.10660 0.09686
1 0.15120 0.14140
2 0.07763 0.06962
3 0.09320 0.08647
Linear Fit: Coefficients (from curve_fit)
[-0.00408172 0.96004316]
Linear Fit: Covariance Matrix (from curve_fit)
[[ 2.49619856e-05 -2.04180950e-04]
[-2.04180950e-04 1.74513640e-03]]
Linear Fit: Final Result: y = (0.96004 +/- 0.04177) x + (-0.00408 +/- 0.00500)
###Markdown
****** Experiment 2: Spring Potential Energy vs. Kinetic EnergyFor this experiment, you will store potential energy in a compressed spring and launch the cart down the airtrack. You will need to use the 3" rod to support the spring to launch the cart (see the diagram below). The best way to launch the cart is to use a fingernail to hold the cart back on the compressed spring and then let the cart slip out from under your fingernail. You might devise a better method, but it is important to release the cart as quickly as possible. **Do not fully compress the spring!** The last few millimeters are non-linear. But don't be too gentle: compress the spring at least 1 cm.You will need to perform four trial launches for each value of compression, $x_{s}$, that you determine is necessary (minimum of two). Consider carefully what your uncertainty, $ \delta x_{s}$, in compression is over each set of trials. This will be important in calculating your uncertainty in the stored potential energy.Where should the photogate be placed for *this* experiment?
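If all of the spring's stored energy (Equation 3) ends up as the cart's kinetic energy (Equation 5), the ideal-case prediction is $v_{f} = x_{s}\sqrt{k/m}$. A minimal sketch, using the fitted spring constant from the weight-vs.-compression graph and a 1 cm compression purely as an example:
###Code
# Hedged sketch: 0.5*k*x_s**2 = 0.5*m_cart*v_f**2  =>  v_f = x_s*sqrt(k/m_cart).
k_fit = 320.4        # N/m, slope of the weight vs. compression fit above
m_cart_ex = 0.18995  # kg, cart mass used in the cells below
x_s_ex = 0.01        # m, example compression
v_f_pred = x_s_ex * np.sqrt(k_fit / m_cart_ex)
print("ideal-case predicted v_f = %0.3f m/s" % v_f_pred)
###Output
_____no_output_____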
###Code
# Experiment 2 Raw Data
# Cart 1
# Create an empty numpy array to hold the raw data
raw_data_5 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_5[0][0]=1
raw_data_5[1][0]=2
raw_data_5[2][0]=3
raw_data_5[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df5 = pd.DataFrame(raw_data_5, columns=["Trial",
"Measured v_f (m/s)"])
df5['Trial'] = df5['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Value of compression (m) and its uncertainty
x_s = 0.01
delta_xs = 0.005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 0.4104
delta_v_p = 0.2052
# Enter the measured values of v_f (m/s)
df5['Measured v_f (m/s)'] = [0.54,0.44,0.50,0.50]
###########################################
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df5['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("x_s = %0.2f m ๐ฟx_s = %0.4f" % (x_s,delta_xs))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 1")
display(df5)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
x_s = 0.01 m δx_s = 0.0050
Predicted v_f = 0.41040 m/s δv_p_f = 0.2052
Cart 1
###Markdown
***
###Code
# Experiment 2 Raw Data
# Cart 2
# Create an empty numpy array to hold the raw data
raw_data_6 = np.empty((4,2))
# Set the trial number column identifiers for each Trial
raw_data_6[0][0]=1
raw_data_6[1][0]=2
raw_data_6[2][0]=3
raw_data_6[3][0]=4
# Create a Pandas dataframe, and convert the Trial number column to integer format
df6 = pd.DataFrame(raw_data_6, columns=["Trial",
"Measured v_f (m/s)"])
df6['Trial'] = df6['Trial'].astype(int)
#### Enter Raw Data Here!!!!!!!!!!!!!! ####
# Mass of the cart (kg) and its uncertainty
m_cart = 0.18995
delta_m_cart = 5e-06
# Value of compression (m) and its uncertainty
x_s = 0.02
delta_xs = 0.005
# Predicted Final Velocity (m/s) and its uncertainty
v_p = 0.6739
delta_v_p = 0.16
# Enter the measured values of v_f (m/s)
df6['Measured v_f (m/s)'] = [0.82,0.81,0.83,0.82]
###########################################
# Calculates the uncertainty of the velocity using standard deviation
uncertainty_v = np.std(df6['Measured v_f (m/s)'])
# prints out the cart data
print("Data for: ")
print("m_cart = %0.5f kg ๐ฟm_cart = %g" % (m_cart, delta_m_cart))
print("x_s = %0.2f m ๐ฟx_s = %0.4f" % (x_s,delta_xs))
print("Predicted v_f = %0.5f m/s ๐ฟv_p_f = %0.4f" % (v_p,delta_v_p))
print("")
# Display the dataframe
from IPython.display import display
print ("Cart 2")
display(df6)
# Print statements for uncertainty of final velocity
print("๐ฟv = %0.5f m/s" % (uncertainty_v))
###Output
Data for:
m_cart = 0.18995 kg δm_cart = 5e-06
x_s = 0.02 m δx_s = 0.0050
Predicted v_f = 0.67390 m/s δv_p_f = 0.1600
Cart 2
|
courses/dl2/translate.ipynb | ###Markdown
Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
# Keep only question-like pairs: the English line must start with 'Wh...' and run up to a '?',
# the French line just needs to run up to a '?'; keep pairs where both regexes matched.
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
         for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = en_vecs.get_words(include_freq=True)
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
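###Markdown
The attention here is additive (Bahdanau-style): the encoder outputs are projected once through `W1`, the current top decoder state through `l2`, their sum is squashed with `tanh` and reduced to one score per source position by `V`, and a softmax over the source length turns the scores into the weights used to pool `enc_out` into the context vector `Xa`. A standalone shape walk-through with random tensors (toy sizes, not the trained model):
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F

sl, bs, nh, em_sz_dec = 5, 3, 256, 300
enc_out = torch.randn(sl, bs, nh)               # one vector per source position
h_top   = torch.randn(bs, em_sz_dec)            # current top-layer decoder state (after out_enc)
W1 = torch.randn(nh, em_sz_dec) / nh**0.5
l2 = nn.Linear(em_sz_dec, em_sz_dec)
V  = torch.randn(em_sz_dec) / em_sz_dec**0.5

w1e = enc_out @ W1                              # (sl, bs, em_sz_dec)
w2h = l2(h_top)                                 # (bs, em_sz_dec), broadcast over sl
u   = torch.tanh(w1e + w2h)
a   = F.softmax(u @ V, dim=0)                   # (sl, bs): one weight per source position
Xa  = (a.unsqueeze(2) * enc_out).sum(0)         # (bs, nh): attention-weighted context
print(a.size(), Xa.size(), a.sum(0))            # the weights sum to 1 for each batch element
###Output
_____no_output_____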
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
from pathlib import Path
torch.cuda.set_device(0)
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile(r'^(Wh[^?.!]+\?)')
re_fq = re.compile(r'^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
%%time
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
en_tok[0], fr_tok[0]
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
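###Markdown
`toks2ids` builds the vocabulary from token frequencies, reserves ids 0-3 for `_bos_`, `_pad_`, `_eos_` and `_unk`, appends `_eos_` (id 2) to every sentence, and maps anything outside the 40,000 most frequent tokens to id 3 via the `defaultdict`. A tiny worked example of the same mapping (made-up tokens, not the real corpus):
###Code
import collections
from collections import Counter

toy_tok = [['what', 'is', 'this', '?'], ['what', 'now', '?']]
freq = Counter(p for o in toy_tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
for i,tok in enumerate(['_bos_', '_pad_', '_eos_', '_unk']): itos.insert(i, tok)
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
print(itos)
print([[stoi[o] for o in p] + [2] for p in toy_tok])   # every sentence ends with 2 (_eos_)
print(stoi['never-seen-token'])                        # 3 (_unk) for out-of-vocabulary tokens
###Output
_____no_output_____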
###Markdown
Word vectors
###Code
with (PATH/'glove.6B.100d.txt').open('r', encoding='utf-8') as f: lines = [line.split() for line in f]
en_vecd = {w:np.array(v, dtype=np.float32) for w,*v in lines}
pickle.dump(en_vecd, open(PATH/'glove.6B.100d.dict.pkl','wb'))
def is_number(s):
try:
float(s)
return True
except ValueError: return False
def get_vecs(lang):
with (PATH/f'wiki.{lang}.vec').open('r', encoding='utf-8') as f:
lines = [line.split() for line in f]
lines.pop(0)
vecd = {w:np.array(v, dtype=np.float32)
for w,*v in lines if is_number(v[0]) and len(v)==300}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en')
fr_vecd = get_vecs('fr')
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))  # note: clips at the 99th percentile despite the _90 name
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))  # and at the 97th for French
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
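###Markdown
`transpose=True` / `transpose_y=True` make the `DataLoader` deliver batches with the sequence dimension first, i.e. shaped `(sl, bs)`, which is what `forward` unpacks with `sl,bs = inp.size()` and what `nn.GRU` expects by default (`batch_first=False`). A minimal check of the layout with a dummy batch (toy sizes only):
###Code
import torch
import torch.nn as nn

sl, bs, em_sz, nh, nl = 7, 4, 300, 256, 2
emb = torch.randn(sl, bs, em_sz)          # sequence-first, as the DataLoader hands it over
gru = nn.GRU(em_sz, nh, num_layers=nl)    # batch_first=False is the default
out, h = gru(emb, torch.zeros(nl, bs, nh))
print(out.size(), h.size())               # (sl, bs, nh) and (nl, bs, nh)
###Output
_____no_output_____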
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
return emb
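# Why `vecs[w]*3` above: the pretrained vectors have a much smaller standard deviation
# (roughly 0.3 for these wiki vectors) than a freshly initialised nn.Embedding (std ~1),
# so scaling them by ~3 puts the pretrained rows and the randomly initialised rows on a
# comparable scale. Quick check (illustration only; `en_vecs` was stacked a few cells above):
print(nn.Embedding(100, 300).weight.data.std(), en_vecs.std()*3)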
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
            if (dec_inp==1).all(): break  # 1 = '_pad_': stop once every sequence in the batch has finished
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
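###Markdown
`seq2seq_loss` has to reconcile prediction and target lengths: the decoder loop may stop early or run all the way to `out_sl`, so the predictions are zero-padded along the sequence dimension up to the target length and then truncated to it before a flat cross-entropy. Note the `F.pad` argument order: padding is specified from the last dimension backwards, so `(0,0,0,0,0,sl-sl_in)` pads only the first (sequence) dimension. A standalone sketch with dummy tensors (toy sizes):
###Code
import torch
import torch.nn.functional as F

sl_in, bs, nc = 4, 3, 10                 # decoder stopped after 4 steps
sl = 6                                   # target length
inp  = torch.randn(sl_in, bs, nc)
targ = torch.randint(0, nc, (sl, bs)).long()

if sl > sl_in: inp = F.pad(inp, (0,0,0,0,0,sl-sl_in))   # zero logits for the missing steps
inp = inp[:sl]
print(inp.size(), F.cross_entropy(inp.view(-1,nc), targ.view(-1)).item())
###Output
_____no_output_____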
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
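###Markdown
All of these decoders tie the output layer to the decoder embedding with `self.out.weight.data = self.emb_dec.weight.data`: both matrices are `(len(itos_dec), em_sz_dec)`, so a single set of word representations is used both to look up the previous token and to score the next one, which saves parameters and tends to help models of this size. A minimal sketch of what the tying does (toy sizes):
###Code
import torch
import torch.nn as nn

vocab, em_sz = 1000, 300
emb_dec = nn.Embedding(vocab, em_sz, padding_idx=1)
out = nn.Linear(em_sz, vocab)
out.weight.data = emb_dec.weight.data      # reuse the same (vocab, em_sz) tensor for the softmax
emb_dec.weight.data[5, 0] = 123.0          # a change to one is visible in the other
print(out.weight.data[5, 0])
###Output
_____no_output_____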
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(to_gpu(rand_t(*sz)))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
        self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile(r'^(Wh[^?.!]+\?)')
re_fq = re.compile(r'^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = en_vecs.get_words(include_freq=True)  # en_vecs is still the loaded fastText model at this point
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
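###Markdown
One reason to take vectors from the fastText binary models rather than a plain `.vec` text file is that fastText composes word vectors from character n-grams, so `get_word_vector` returns a usable vector even for misspelled or otherwise out-of-vocabulary tokens, which this crawled corpus has plenty of. A small sketch (illustration only; it reloads the binary model because `en_vecs` was just overwritten with the stacked array above):
###Code
ft_model = ft.load_model(str(PATH/'wiki.en.bin'))
v_known   = ft_model.get_word_vector('location')
v_unknown = ft_model.get_word_vector('locaton')   # made-up misspelling, never seen in training
print(len(v_known), len(v_unknown))               # both come back 300-dimensional
###Output
_____no_output_____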
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))  # note: clips at the 99th percentile despite the _90 name
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))  # and at the 97th for French
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
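###Markdown
`SortishSampler` (training) and `SortSampler` (validation) order examples by target length so that each mini-batch contains sentences of similar length and little compute is wasted on padding; the training sampler keeps some randomness so the batches differ between epochs. A rough standalone sketch of the idea (not fastai's actual implementation):
###Code
import numpy as np

def sortish_batches(lengths, bs, chunk_mult=50):
    # shuffle, then sort within large chunks: batches are similar-length but not deterministic
    idx = np.random.permutation(len(lengths))
    chunk = bs * chunk_mult
    idx = np.concatenate([sorted(idx[i:i+chunk], key=lambda o: lengths[o], reverse=True)
                          for i in range(0, len(idx), chunk)])
    return [idx[i:i+bs] for i in range(0, len(idx), bs)]

lens = np.random.randint(3, 30, size=1000)                         # made-up sentence lengths
batches = sortish_batches(lens, bs=125)
print([int(lens[b].max() - lens[b].min()) for b in batches[:5]])   # small length spread per batch
###Output
_____no_output_____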
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = ft_vecs.get_words(include_freq=True)
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
quโ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui rรฉglemente les pylรดnes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
oรน sont - ils situรฉs ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compรฉtences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcรจlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautรฉs autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
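# Optional check (added; not in the original notebook): total number of
# trainable parameters in the combined bidirectional + attention model.
print(sum(p.numel() for p in rnn.parameters() if p.requires_grad))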
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = en_vecs.get_words(include_freq=True)
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
from pathlib import Path
torch.cuda.set_device(1)
?re.compile
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . Download with `wget http://www.statmt.org/wmt10/training-giga-fren.tar`
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
%%time
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
en_tok[0], fr_tok[0]
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors
###Code
with (PATH/'glove.6B.100d.txt').open('r', encoding='utf-8') as f: lines = [line.split() for line in f]
en_vecd = {w:np.array(v, dtype=np.float32) for w,*v in lines}
pickle.dump(en_vecd, open(PATH/'glove.6B.100d.dict.pkl','wb'))
###Output
_____no_output_____
###Markdown
fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
def is_number(s):
try:
float(s)
return True
except ValueError: return False
def get_vecs(lang):
with (PATH/f'wiki.{lang}.vec').open('r', encoding='utf-8') as f:
lines = [line.split() for line in f]
lines.pop(0)
vecd = {w:np.array(v, dtype=np.float32)
for w,*v in lines if is_number(v[0]) and len(v)==300}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en')
fr_vecd = get_vecs('fr')
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
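# Shape sketch (illustrative, toy sizes; not from the original notebook): the
# bidirectional GRU returns its final hidden state as (nl*2, bs, nh), laid out
# layer-major per PyTorch's convention. The view/permute/view in forward()
# stitches the forward and backward states of each layer together along the
# feature dimension, which is why out_enc takes nh*2 inputs.
import torch
nl_, bs_, nh_ = 2, 3, 256
h_ = torch.randn(nl_ * 2, bs_, nh_)
h2_ = h_.view(2, 2, bs_, -1).permute(0, 2, 1, 3).contiguous().view(2, bs_, -1)
print(h2_.shape)  # torch.Size([2, 3, 512])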
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Neural Machine Translation
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
1. Data Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
**Download dataset** French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
%cd data
%mkdir translate
###Output
/home/ubuntu/data
###Markdown
~20 minutes to download at 1.5 MB/s
###Code
!aria2c --file-allocation=none -c -x 5 -s 5 http://www.statmt.org/wmt10/training-giga-fren.tar
!tar -xf training-giga-fren.tar
%mv giga-fren.release2.fixed.en.gz giga-fren.release2.fixed.fr.gz training-giga-fren.tar translate/
%cd translate/
# This fails: the .gz files are plain gzip-compressed text, not tar archives
!tar -xzf giga-fren.release2.fixed.en.gz
# Decompress them with gunzip instead
!gunzip giga-fren.release2.fixed.en.gz
!gunzip giga-fren.release2.fixed.fr.gz
%cd ../..
###Output
/home/ubuntu
###Markdown
**Setup the directories and files**
###Code
PATH = Path('data/translate')
TMP_PATH = PATH / 'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname = 'giga-fren.release2.fixed'
en_fname = PATH / f'{fname}.en'
fr_fname = PATH / f'{fname}.fr'
###Output
_____no_output_____
###Markdown
Tokenizing and Pre-processing Training a neural translation model takes a long time.- Google's model has 8 layers.- We are going to build a simpler one.- Instead of a general model, we will translate French questions.
###Code
# Question regex search filters
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
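# Tiny demo (added for illustration): the English filter keeps only lines that
# start with "Wh..." and runs up to the first question mark; lines without a
# question are dropped, and the French filter keeps any leading question.
_m = re_eq.search('What factors influence their location? Some extra text.')
print(_m.group() if _m else None)         # 'What factors influence their location?'
print(re_eq.search('The answer is 42.'))  # None -> this sentence pair is filtered out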
# grabbing lines from the English and French source texts
lines = ( (re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
# isolate the questions
qs = [(e.group(), f.group()) for e, f in lines if e and f]
# save the questions for later
pickle.dump(qs, (PATH / 'fr-en-qs.pkl').open('wb'))
# load in pickled questions
qs = pickle.load((PATH / 'fr-en-qs.pkl').open('rb'))
# ======================================== START DEBUG ========================================
print(len(qs))
print(qs[:5])
# ======================================== END DEBUG ========================================
# ======================================== START DEBUG ========================================
# Python zip method: https://www.programiz.com/python-programming/methods/built-in/zip
# What is iterable, iterator: https://stackoverflow.com/questions/9884132/what-exactly-are-iterator-iterable-and-iteration
coord = ['x', 'y', 'z']
value = [3, 4, 5, 0, 9]
result = zip(coord, value)
result_list = list(result)
print(result_list)
# unzip result_list
c, v = zip(*result_list)
print(c)
print(v)
# ======================================== END DEBUG ========================================
###Output
[('x', 3), ('y', 4), ('z', 5)]
('x', 'y', 'z')
(3, 4, 5)
###Markdown
Tokenize all the questions.
###Code
en_qs, fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
###Output
_____no_output_____
###Markdown
_Note: tokenizing French is quite different from tokenizing English_
###Code
# Download the spaCy 'fr' model. Otherwise, you'll encounter the error "OSError: [E050] Can't find model 'fr'..."
!python -m spacy download fr
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[:3], fr_tok[:3]
###Output
_____no_output_____
###Markdown
Check stats for the sentence lengths
###Code
# 90th percentile of English and French sentence lengths.
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
###Output
_____no_output_____
###Markdown
We keep only sentences with fewer than 30 tokens. The filter is computed on the English sentences, and the corresponding French sentences are kept as well.
###Code
keep = np.array([len(o) < 30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
# save our work
pickle.dump(en_tok, (PATH / 'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH / 'fr_tok.pkl').open('wb'))
def toks2ids(tok, pre):
"""
Numericalize words to integers.
Arguments:
tok: token
pre: prefix
"""
freq = Counter(p for o in tok for p in o)
itos = [o for o, c in freq.most_common(40000)] # 40k most common words
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, { v: k for k, v in enumerate(itos) }) #reverse
ids = np.array([ ([stoi[o] for o in p] + [2]) for p in tok ])
np.save(TMP_PATH / f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH / f'{pre}_itos.pkl', 'wb'))
return ids, itos, stoi
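# Quick demo (added; standalone illustration): the defaultdict maps any
# out-of-vocabulary token to id 3, the '_unk' slot, instead of raising KeyError.
_itos_demo = ['_bos_', '_pad_', '_eos_', '_unk', 'what', '?']
_stoi_demo = collections.defaultdict(lambda: 3, { v: k for k, v in enumerate(_itos_demo) })
print(_stoi_demo['what'], _stoi_demo['zygomorphic'])  # 4 3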
en_ids, en_itos, en_stoi = toks2ids(en_tok, 'en')
fr_ids, fr_itos, fr_stoi = toks2ids(fr_tok, 'fr')
def load_ids(pre):
ids = np.load(TMP_PATH / f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH / f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, { v: k for k, v in enumerate(itos) })
return ids, itos, stoi
en_ids, en_itos, en_stoi = load_ids('en')
fr_ids, fr_itos, fr_stoi = load_ids('fr')
# Sanity check
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors Facebook's fastText word vectors are available from https://fasttext.cc/docs/en/english-vectors.html Download word vectors: We are using the pre-trained word vectors trained on Wikipedia using fastText (we need both the English and the French ones). These vectors have dimension 300 and were obtained using the skip-gram model: https://fasttext.cc/docs/en/pretrained-vectors.html
###Code
!aria2c --file-allocation=none -c -x 5 -s 5 -d data/translate https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.en.zip
!aria2c --file-allocation=none -c -x 5 -s 5 -d data/translate https://s3-us-west-1.amazonaws.com/fasttext-vectors/wiki.fr.zip
!unzip data/translate/wiki.en.zip -d data/translate/
!unzip data/translate/wiki.fr.zip -d data/translate/
###Output
Archive: data/translate/wiki.fr.zip
inflating: data/translate/wiki.fr.vec
inflating: data/translate/wiki.fr.bin
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
!pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
en_vecs = ft.load_model(str((PATH / 'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH / 'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
"""
Convert fastText word vectors into a standard Python dictionary to make it a bit easier to work with.
This is just going through each word with a dictionary comprehension and save it as a pickle dictionary.
get_word_vector:
[method] get the vector representation of word.
get_words:
[method] get the entire list of words of the dictionary optionally
including the frequency of the individual words. This
does not include any subwords.
"""
vecd = { w: ft_vecs.get_word_vector(w) for w in ft_vecs.get_words() }
pickle.dump(vecd, open(PATH / f'wiki.{lang}.pkl', 'wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH / 'wiki.en.pkl', 'rb'))
fr_vecd = pickle.load(open(PATH / 'wiki.fr.pkl', 'rb'))
# DEBUG
ft_vecs = en_vecs
# DEBUG
ft_words = ft_vecs.get_words(include_freq=True)
ft_word_dict = { k: v for k, v in zip(*ft_words) }
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec, dim_fr_vec
###Output
_____no_output_____
###Markdown
Find out what the mean and standard deviation of our vectors are. So the mean is about zero and standard deviation is about 0.3.
###Code
# en_vecd type is dict
en_vecs = np.stack(list(en_vecd.values())) # convert dict_values to list and then stack it
en_vecs.mean(), en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data **Exclude the extreme cases** Often corpora have a pretty long-tailed distribution of sequence length, and it's the longest sequences that tend to dominate how long things take, how much memory is used, etc. So in this case, we are going to take the 99th percentile of the English and French sentence lengths and truncate them to that amount. Originally Jeremy was using the 90th percentile (hence the variable names):
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 99))
enlen_90, frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
###Output
_____no_output_____
###Markdown
**Create our Dataset, DataLoaders**
###Code
class Seq2SeqDataset(Dataset):
def __init__(self, x, y):
self.x, self.y = x, y
def __getitem__(self, idx):
return A(self.x[idx], self.y[idx]) # A for Arrays
def __len__(self):
return len(self.x)
###Output
_____no_output_____
###Markdown
**Split into training and validation sets** Here is an easy way to get training and validation sets. Grab a bunch of random numbers, one for each row of your data, and see if they are bigger than 0.1 or not. That gets you a list of booleans. Index into your array with that list of booleans to grab a training set, and index into that array with the opposite of that list of booleans to get your validation set.
###Code
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr)) > 0.1
en_trn, fr_trn = en_ids_tr[trn_keep], fr_ids_tr[trn_keep] # training set
en_val, fr_val = en_ids_tr[~trn_keep], fr_ids_tr[~trn_keep] # validation set
len(en_trn), len(en_val)
###Output
_____no_output_____
###Markdown
**Create training and validation sets**
###Code
trn_ds = Seq2SeqDataset(fr_trn, en_trn)
val_ds = Seq2SeqDataset(fr_val, en_val)
# Set batch size
bs = 125
###Output
_____no_output_____
###Markdown
- Most of our preprocessing is complete, so making `num_workers = 1` will save you some time.- Padding will pad the shorter phrases to be the same length.- Classifier → padding at the beginning.- Decoder → padding at the end.- Sampler - so we keep the similar sentences together (sorted by length).
###Code
# arranges sentences so that similar lengths are close to each other
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
###Output
_____no_output_____
###Markdown
**Create DataLoaders**
###Code
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs * 1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
# Test - inspect
it = iter(trn_dl) # trn_dl is iterable. turns iterable into iterator.
# Return the next item from the iterator.
its = [next(it) for i in range(5)]
[(len(x), len(y)) for x, y in its]
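# Note (added): with transpose=True the batches come out shaped (seq_len, batch),
# so the lengths printed above are the padded sequence lengths of each batch,
# not the batch size.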
###Output
_____no_output_____
###Markdown
2. Architecture Initial model - The architecture is going to take our sequence of tokens.- It is going to feed them into an encoder (a.k.a. backbone).- That is going to spit out the final hidden state, which for each sentence is just a single vector.- Then, that state will be passed to a decoder that will walk through the words one by one.
###Code
def create_emb(vecs, itos, em_sz):
"""
Creates embedding:
1. rows = number of vocab
2. cols = embedding size dimension
Will randomly initialize the embedding
"""
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
# goes through the embedding and replace
# the initialized weights with existing word vectors
# multiply x3 to compensate for the stdev 0.3
for i, w in enumerate(itos):
try:
wgts[i] = torch.from_numpy(vecs[w] * 3)
except:
miss.append(w)
print(len(miss), miss[5:10])
return emb
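# Quick check (added, with illustrative stand-in data): fastText vectors have a
# std of roughly 0.3, so multiplying by 3 brings them close to the ~unit std of
# a freshly initialized nn.Embedding, letting pretrained and random rows coexist.
_demo = np.random.normal(0, 0.3, (1000, 300)).astype(np.float32)
print(_demo.std(), (_demo * 3).std())  # ~0.3 -> ~0.9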
nh, nl = 256, 2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
# encoder (enc)
self.nl, self.nh, self.out_sl = nl, nh, out_sl
# for each word, pull up the 300M vector and create an embedding
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
# GRU - similiar to LSTM
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
# decoder (dec)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl, bs = inp.size()
# ==================================================
# Encoder version
# ==================================================
# initialize the hidden layer
h = self.initHidden(bs)
# run the input through our embeddings + apply dropout
emb = self.emb_enc_drop(self.emb_enc(inp))
# run it through the RNN layer
enc_out, h = self.gru_enc(emb, h)
# run the hidden state through our linear layer
# note: we are only using the last hidden state to 'decode' into another phrase
h = self.out_enc(h)
# ==================================================
# Decoder version
# ==================================================
# starting with a 0 (or beginning of string _BOS_)
dec_inp = V(torch.zeros(bs).long())
res = []
# will loop as long as the longest english sentence
for i in range(self.out_sl):
# embedding - we are only looking at a section at time
# which is why the .unsqueeze is required
emb = self.emb_dec(dec_inp).unsqueeze(0)
# rnn - typically works with whole phrases, but we passing
# only 1 unit at a time in a loop
outp, h = self.gru_dec(emb, h)
# dropout
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
# highest probability word
dec_inp = V(outp.data.max(1)[1])
# if its padding ,we are at the end of the sentence
if (dec_inp == 1).all():
break
# stack the output into a single tensor
return torch.stack(res)
def initHidden(self, bs):
return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
"""
Loss function - modified version of cross entropy
"""
sl, bs = target.size()
sl_in, bs_in, nc = input.size()
# sequence length could be shorter than the original
# need to add padding to even out the size
if sl > sl_in:
input = F.pad(input, (0, 0, 0, 0, 0, sl - sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1, nc), target.view(-1))#, ignore_index=1)
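# Shape sketch (added, illustrative): if the decoder stopped early, the
# prediction is padded along the sequence dimension so it lines up with the
# full-length target before computing cross entropy.
_pred = torch.zeros(3, 2, 5)                    # (seq_len=3, batch=2, n_classes=5)
_padded = F.pad(_pred, (0, 0, 0, 0, 0, 4 - 3))  # pad seq_len from 3 up to 4
print(_padded.shape)                            # torch.Size([4, 2, 5])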
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
# Find the learning rate
learn.lr_find()
learn.sched.plot()
###Output
_____no_output_____
###Markdown
**Fit the model (15-20 mins to train)**
###Code
lr = 3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20, 10))
learn.sched.plot_loss()
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x, y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180, 190):
print(' '.join([ fr_itos[o] for o in x[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in y[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in preds[:, i] if o != 1 ]))
print()
###Output
quelles composantes des diffรฉrents aspects de la performance devraient รชtre mesurรฉes , quelles donnรฉes pertinentes recueillir et comment ? _eos_
which components within various performance areas should be measured , whatkinds of data are appropriate to collect , and how should this be done ? _eos_
what aspects of the and and be be be be be be be be be ? ? _eos_
le premier ministre doit - il nommer un ministre d' état à la santé mentale , à la maladie mentale et à la toxicomanie ? _eos_
what role can the federal government play to ensure that individuals with mental illness and addiction have access to the drug therapy they need ? _eos_
what minister the minister minister minister minister minister , , , , health health and health ? ? ? ? _eos_
quelles sont les conséquences de la hausse des formes d' emploi non conformes aux normes chez les travailleurs hautement qualifiés et chez ceux qui occupent des emplois plus marginaux ? _eos_
what is the impact of growing forms of non - standard employment for highly skilled workers and for those employed in more marginal occupations ? _eos_
what are the consequences of workers workers workers workers workers workers and and workers and workers workers workers workers workers workers ? ? ? _eos_ _eos_
que se produit - il si le gestionnaire n' est pas en mesure de donner à l' employé nommé pour une période déterminée un préavis de cessation d' emploi d' un mois ou s' il néglige de le
what happens if the manager is unable to or neglects to give a term employee the one - month notice of non - renewal ? _eos_
what happens the the employee employee employee employee employee the the the the the or or the the the ? ? _eos_
quelles personnes , communautés ou entités sont considérées comme potentiels i ) bénéficiaires de la protection et ii ) titulaires de droits ? _eos_
which persons , communities or entities are identified as potential ( i ) beneficiaries of protection and / or ( ii ) rights holders ? _eos_
who , , , , , or or or or or or or or protection ? ? ? ? _eos_
quelles conditions particulières doivent être remplies pendant l' examen préliminaire international en ce qui concerne les listages des séquences de nucléotides ou d' acides aminés ou les tableaux y relatifs ? _eos_
what special requirements apply during the international preliminary examination to nucleotide and / or amino acid sequence listings and / or tables related thereto ? _eos_
what specific must be be be be sequence sequence or or or or sequence or or sequence or sequence or sequence in in ? ? ? ? _eos_ _eos_
pourquoi cette soudaine réticence à promouvoir l' égalité des genres et à protéger les femmes de ce que , dans la plupart des cas , on peut qualifier de violations grossières des droits humains ? _eos_
why this sudden reluctance to effectively promote gender equality and protect women from what are – in many cases – egregious human rights violations ? _eos_
why is the so for such of of of of of and rights and rights rights of rights rights ? ? ? ? _eos_ _eos_
pouvez - vous dire comment votre bagage culturel vous a aidée à aborder votre nouvelle vie au canada ( à vous adapter au mode de vie canadien ) ? _eos_
what are some things from your cultural background that have helped you navigate canadian life ( helped you adjust to life in canada ) ? _eos_
what are you new to to to to to to to to life life life life ? ? ? ? _eos_ _eos_
selon vous , quels seront , dans les dix prochaines années , les cinq enjeux les plus urgents en matière d' environnement et d' avenir viable pour vous et votre région ? _eos_
which do you think will be the five most pressing environmental and sustainability issues for you and your region in the next ten years ? _eos_
what do you see the next priorities priorities next the next the and and in in in in in in ? ? ? ? ? _eos_ _eos_
dans quelle mesure l' expert est-il motivé et capable de partager ses connaissances , et dans quelle mesure son successeur est-il motivé et capable de recevoir ce savoir ? _eos_
what is the expert 's level of motivation and capability for sharing knowledge , and the successor 's motivation and capability of acquiring it ? _eos_
what is the nature and and and and and and and and and and and and and to to to ? ? ? ? _eos_ _eos_ _eos_
###Markdown
Bi-direction Take all your sequences, reverse them, and run a second "backwards" encoder over them; here PyTorch does this for us via bidirectional=True, and the forward and backward hidden states are concatenated (which is why the encoder sizes below are doubled). Note that with deeper models, not all layers may be bi-directional.
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl, self.nh, self.out_sl = nl, nh, out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True) # for bidir, bidirectional=True
self.out_enc = nn.Linear(nh * 2, em_sz_dec, bias=False) # for bidir, nh * 2
self.drop_enc = nn.Dropout(0.05) # additional for bidir
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl, bs = inp.size()
# ==================================================
# Encoder version
# ==================================================
# initialize the hidden layer
h = self.initHidden(bs)
# run the input through our embeddings + apply dropout
emb = self.emb_enc_drop(self.emb_enc(inp))
# run it through the RNN layer
enc_out, h = self.gru_enc(emb, h)
# Additional for bidir
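        # the bidirectional GRU returns h with shape (nl * 2, bs, nh); the view/permute/view below
        # regroups it as (layer, direction, bs, nh) and then concatenates the forward and backward
        # states of each layer, giving (nl, bs, nh * 2) to feed into the linear layer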
h = h.view(2, 2, bs, -1).permute(0, 2, 1, 3).contiguous().view(2, bs, -1)
# run the hidden state through our linear layer
h = self.out_enc(self.drop_enc(h)) # new for bidir; dropout hidden state.
# ==================================================
# Decoder version
# ==================================================
# starting with a 0 (or beginning of string _BOS_)
dec_inp = V(torch.zeros(bs).long())
res = []
        # loop for up to out_sl steps (roughly the length of the longest English sentence we kept)
for i in range(self.out_sl):
            # embedding - we are only looking at one token at a time,
            # which is why the .unsqueeze(0) is required to add a sequence dimension of length 1
emb = self.emb_dec(dec_inp).unsqueeze(0)
            # rnn - a GRU normally processes a whole sequence at once, but here we are passing
            # only one token at a time in a loop
outp, h = self.gru_dec(emb, h)
# dropout
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
# highest probability word
dec_inp = V(outp.data.max(1)[1])
            # if every predicted token is padding, we have reached the end of all sentences in the batch
if (dec_inp == 1).all():
break
# stack the output into a single tensor
return torch.stack(res)
def initHidden(self, bs):
        return V(torch.zeros(self.nl * 2, bs, self.nh)) # for bidir, self.nl * 2
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20, 10))
learn.sched.plot_loss()
learn.save('bidir')
learn.load('bidir')
###Output
_____no_output_____
###Markdown
**Test**
###Code
x, y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180, 190):
print(' '.join([ fr_itos[o] for o in x[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in y[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in preds[:, i] if o != 1 ]))
print()
###Output
quelles composantes des différents aspects de la performance devraient être mesurées , quelles données pertinentes recueillir et comment ? _eos_
which components within various performance areas should be measured , whatkinds of data are appropriate to collect , and how should this be done ? _eos_
which aspects of should should should be be and and how how how be be be ? ? _eos_ _eos_
le premier ministre doit - il nommer un ministre d' état à la santé mentale , à la maladie mentale et à la toxicomanie ? _eos_
what role can the federal government play to ensure that individuals with mental illness and addiction have access to the drug therapy they need ? _eos_
who is the minister minister minister to minister mental mental mental mental mental health health ? ? ? _eos_
quelles sont les conséquences de la hausse des formes d' emploi non conformes aux normes chez les travailleurs hautement qualifiés et chez ceux qui occupent des emplois plus marginaux ? _eos_
what is the impact of growing forms of non - standard employment for highly skilled workers and for those employed in more marginal occupations ? _eos_
what are the implications of of of of of workers workers workers workers workers workers workers in less workers ? ? ? _eos_ _eos_
que se produit - il si le gestionnaire n' est pas en mesure de donner à l' employé nommé pour une période déterminée un préavis de cessation d' emploi d' un mois ou s' il néglige de le
what happens if the manager is unable to or neglects to give a term employee the one - month notice of non - renewal ? _eos_
what happens if the employee of the the the the of of of or or or or of of of
quelles personnes , communautés ou entités sont considérées comme potentiels i ) bénéficiaires de la protection et ii ) titulaires de droits ? _eos_
which persons , communities or entities are identified as potential ( i ) beneficiaries of protection and / or ( ii ) rights holders ? _eos_
which communities are are or as or or or or or , , of ? ? ? ? ?
quelles conditions particulières doivent être remplies pendant l' examen préliminaire international en ce qui concerne les listages des séquences de nucléotides ou d' acides aminés ou les tableaux y relatifs ? _eos_
what special requirements apply during the international preliminary examination to nucleotide and / or amino acid sequence listings and / or tables related thereto ? _eos_
what special requirements requirements be be for for for / sequence / or or sequence or or sequence sequence or or sequence sequence ? ? _eos_ _eos_
pourquoi cette soudaine réticence à promouvoir l' égalité des genres et à protéger les femmes de ce que , dans la plupart des cas , on peut qualifier de violations grossières des droits humains ? _eos_
why this sudden reluctance to effectively promote gender equality and protect women from what are – in many cases – egregious human rights violations ? _eos_
why is such such such women women of women , , , , rights rights ? ? ? ? ? _eos_ _eos_
pouvez - vous dire comment votre bagage culturel vous a aidée à aborder votre nouvelle vie au canada ( à vous adapter au mode de vie canadien ) ? _eos_
what are some things from your cultural background that have helped you navigate canadian life ( helped you adjust to life in canada ) ? _eos_
what is your you to you you to to to to to life life life life in life canada ? ? ? ? _eos_
selon vous , quels seront , dans les dix prochaines années , les cinq enjeux les plus urgents en matière d' environnement et d' avenir viable pour vous et votre région ? _eos_
which do you think will be the five most pressing environmental and sustainability issues for you and your region in the next ten years ? _eos_
what do you see the the the the the the , , future and and and and and future future future future future ? ? ? ? _eos_
dans quelle mesure l' expert est-il motivé et capable de partager ses connaissances , et dans quelle mesure son successeur est-il motivé et capable de recevoir ce savoir ? _eos_
what is the expert 's level of motivation and capability for sharing knowledge , and the successor 's motivation and capability of acquiring it ? _eos_
what is is expertise of the of and and and and and and and and and and and and and and and and ? ? ?
###Markdown
Teacher forcing When the model starts learning, it knows nothing about the two languages. It will eventually get better, but in the beginning it doesn't have much to work with. **idea:** what if we force-feed the correct answer during the early part of training?
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10 - epoch) * 0.1 if epoch < 10 else 0
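        # teacher-forcing probability decays linearly: 1.0 at epoch 0, 0.5 at epoch 5, and 0 from epoch 10 onwards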
xtra = []
output = self.m(*xs, y)
if isinstance(output, tuple):
output, *xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn:
loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl, self.nh, self.out_sl = nl, nh, out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1. # new for teacher forcing
def forward(self, inp, y=None): # argument y is new for teacher forcing
sl, bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp == 1).all():
break
if (y is not None) and (random.random() < self.pr_force): # new for teacher forcing
if i >= len(y):
break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs):
return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20, 10), stepper=Seq2SeqStepper)
learn.sched.plot_loss()
learn.save('forcing')
learn.load('forcing')
###Output
_____no_output_____
###Markdown
**Test**
###Code
x, y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180, 190):
print(' '.join([ fr_itos[o] for o in x[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in y[:, i] if o != 1 ]))
print(' '.join([ en_itos[o] for o in preds[:, i] if o != 1 ]))
print()
###Output
quelles composantes des différents aspects de la performance devraient être mesurées , quelles données pertinentes recueillir et comment ? _eos_
which components within various performance areas should be measured , whatkinds of data are appropriate to collect , and how should this be done ? _eos_
what elements of the should be be be be and and and and ? ? ? ?
le premier ministre doit - il nommer un ministre d' état à la santé mentale , à la maladie mentale et à la toxicomanie ? _eos_
what role can the federal government play to ensure that individuals with mental illness and addiction have access to the drug therapy they need ? _eos_
what is the minister of the the of the and and and and and and mental health ? ? ? _eos_
quelles sont les conséquences de la hausse des formes d' emploi non conformes aux normes chez les travailleurs hautement qualifiés et chez ceux qui occupent des emplois plus marginaux ? _eos_
what is the impact of growing forms of non - standard employment for highly skilled workers and for those employed in more marginal occupations ? _eos_
what are the implications of of of of of of workers in in in and and and workers and workers and workers ? ? _eos_ _eos_
que se produit - il si le gestionnaire n' est pas en mesure de donner à l' employé nommé pour une période déterminée un préavis de cessation d' emploi d' un mois ou s' il néglige de le
what happens if the manager is unable to or neglects to give a term employee the one - month notice of non - renewal ? _eos_
what if if not is not a a or or or or or or or ? ? ? ? ? ? ?
quelles personnes , communautés ou entités sont considérées comme potentiels i ) bénéficiaires de la protection et ii ) titulaires de droits ? _eos_
which persons , communities or entities are identified as potential ( i ) beneficiaries of protection and / or ( ii ) rights holders ? _eos_
who communities or persons , as as as as ( ( ( , protection and ? ? ? _eos_
quelles conditions particulières doivent être remplies pendant l' examen préliminaire international en ce qui concerne les listages des séquences de nucléotides ou d' acides aminés ou les tableaux y relatifs ? _eos_
what special requirements apply during the international preliminary examination to nucleotide and / or amino acid sequence listings and / or tables related thereto ? _eos_
what special conditions to to to to the the / / / / sequence sequence sequence of of / / / / ? ? ? ? ? _eos_
pourquoi cette soudaine réticence à promouvoir l' égalité des genres et à protéger les femmes de ce que , dans la plupart des cas , on peut qualifier de violations grossières des droits humains ? _eos_
why this sudden reluctance to effectively promote gender equality and protect women from what are – in many cases – egregious human rights violations ? _eos_
why encourage such such such such such such as as human human human ? ? ? _eos_ _eos_
pouvez - vous dire comment votre bagage culturel vous a aidée à aborder votre nouvelle vie au canada ( à vous adapter au mode de vie canadien ) ? _eos_
what are some things from your cultural background that have helped you navigate canadian life ( helped you adjust to life in canada ) ? _eos_
what are the you you you to to to to to to to to your your in in in in canada ? ? ? _eos_
selon vous , quels seront , dans les dix prochaines années , les cinq enjeux les plus urgents en matière d' environnement et d' avenir viable pour vous et votre région ? _eos_
which do you think will be the five most pressing environmental and sustainability issues for you and your region in the next ten years ? _eos_
what do you see as the most most most important future and and and and future future future ? ? ? ? _eos_
dans quelle mesure l' expert est-il motivé et capable de partager ses connaissances , et dans quelle mesure son successeur est-il motivé et capable de recevoir ce savoir ? _eos_
what is the expert 's level of motivation and capability for sharing knowledge , and the successor 's motivation and capability of acquiring it ? _eos_
what is the expert of and and and and and and and and and and and and and ? ? ? ? ?
###Markdown
Attentional model Our RNN encoder outputs a hidden state at every time step, as well as the hidden state at the last time step. So far we have only used the LAST hidden state to 'decode' into another phrase. Can we use the rest of those hidden states? **goal:** use a weighted combination of all the hidden states, with an extra trainable parameter that learns the weights. **idea:** expecting the entire sentence to be summarized into a single vector is asking a lot. Instead of only keeping the hidden state at the end of the phrase, we have a hidden state after every single word, so how do we use the hidden information after every word?
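Concretely, in the code below each decoder step scores every encoder position t with u_t = tanh(enc_out_t @ W1 + l2(h)), turns those scores into weights a = softmax(u @ V), and builds the context vector Xa = sum_t a_t * enc_out_t. That context vector is concatenated with the embedding of the previous output token, passed through the linear layer l3, and fed into the decoder GRU.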
###Code
def rand_t(*sz):
return torch.randn(sz) / math.sqrt(sz[0])
def rand_p(*sz):
return nn.Parameter(rand_t(*sz))
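# rand_t returns random values scaled by 1 / sqrt of the first dimension; rand_p wraps them as a
# trainable nn.Parameter. These are used below to create the attention weights W1 and V.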
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
# these 4 lines are addition for 'attention'
self.W1 = rand_p(nh, em_sz_dec) # random matrix wrapped up in PyTorch Parameter
self.l2 = nn.Linear(em_sz_dec, em_sz_dec) # this is the mini NN that will calculate the weights
self.l3 = nn.Linear(em_sz_dec + nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl, bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res, attns = [], [] # attns is addition for 'attention'
w1e = enc_out @ self.W1 # this line is addition for 'attention'. matrix product.
for i in range(self.out_sl):
# these 5 lines are addition for 'attention'.
# create a little neural network.
# use softmax to generate the probabilities.
w2h = self.l2(h[-1]) # take last layers hidden state put into linear layer
u = F.tanh(w1e + w2h) # nonlinear activation
a = F.softmax(u @ self.V, 0) # matrix product
attns.append(a)
# take a weighted average. Use the weights from mini NN.
# note we are using all the encoder states
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
# adding the hidden states to the encoder weights
wgt_enc = self.l3(torch.cat([emb, Xa], 1)) # this line is addition for 'attention'
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h) # this line has changed for 'attention'
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all():
break
            # pr_force is set on the model each epoch by Seq2SeqStepper (this model is fitted with
            # stepper=Seq2SeqStepper), so this is the teacher forcing step; at inference y is None and it is skipped
            if (y is not None) and (random.random() < self.pr_force):
if i >= len(y):
break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn:
                # when ret_attn=True, return the attention weights alongside the predictions as a tuple
                res = res, torch.stack(attns)
return res
def initHidden(self, bs):
return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr = 2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20, 10), stepper=Seq2SeqStepper)
learn.sched.plot_loss()
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
**Test**
###Code
x, y = next(iter(val_dl))
probs, attns = learn.model(V(x), ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180, 190):
print(' '.join([fr_itos[o] for o in x[:, i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:, i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:, i] if o != 1]))
print()
###Output
quelles composantes des différents aspects de la performance devraient être mesurées , quelles données pertinentes recueillir et comment ? _eos_
which components within various performance areas should be measured , whatkinds of data are appropriate to collect , and how should this be done ? _eos_
what components of the performance should be be be data be and and how ? ? _eos_ ?
le premier ministre doit - il nommer un ministre d' état à la santé mentale , à la maladie mentale et à la toxicomanie ? _eos_
what role can the federal government play to ensure that individuals with mental illness and addiction have access to the drug therapy they need ? _eos_
what is the minister minister 's minister minister to to minister to health health ? and mental mental health _eos_ _eos_ mental _eos_
quelles sont les conséquences de la hausse des formes d' emploi non conformes aux normes chez les travailleurs hautement qualifiés et chez ceux qui occupent des emplois plus marginaux ? _eos_
what is the impact of growing forms of non - standard employment for highly skilled workers and for those employed in more marginal occupations ? _eos_
what are the implications of of - statistics - workers - workers workers and and skilled workers workers workers older workers _eos_ ? workers ? _eos_ _eos_
que se produit - il si le gestionnaire n' est pas en mesure de donner à l' employé nommé pour une période déterminée un préavis de cessation d' emploi d' un mois ou s' il néglige de le
what happens if the manager is unable to or neglects to give a term employee the one - month notice of non - renewal ? _eos_
what if the manager is not to to employee employee employee a employee the employee for retirement time hours employee after a employee of ? after _eos_
quelles personnes , communautés ou entités sont considérées comme potentiels i ) bénéficiaires de la protection et ii ) titulaires de droits ? _eos_
which persons , communities or entities are identified as potential ( i ) beneficiaries of protection and / or ( ii ) rights holders ? _eos_
who , or or or or considered as as recipients of of of protection protection protection _eos_ ? _eos_ _eos_
quelles conditions particulières doivent être remplies pendant l' examen préliminaire international en ce qui concerne les listages des séquences de nucléotides ou d' acides aminés ou les tableaux y relatifs ? _eos_
what special requirements apply during the international preliminary examination to nucleotide and / or amino acid sequence listings and / or tables related thereto ? _eos_
what specific conditions conditions be be during the international examination examination in the for nucleotide or amino amino / or or ? _eos_ ? ? _eos_ tables _eos_ ?
pourquoi cette soudaine réticence à promouvoir l' égalité des genres et à protéger les femmes de ce que , dans la plupart des cas , on peut qualifier de violations grossières des droits humains ? _eos_
why this sudden reluctance to effectively promote gender equality and protect women from what are – in many cases – egregious human rights violations ? _eos_
why this this to to to to to to women to and and and women to , of _eos_ of many people ? ? of _eos_ ? human human
pouvez - vous dire comment votre bagage culturel vous a aidée à aborder votre nouvelle vie au canada ( à vous adapter au mode de vie canadien ) ? _eos_
what are some things from your cultural background that have helped you navigate canadian life ( helped you adjust to life in canada ) ? _eos_
what is your your of your you to you to to in canada canada canada life canada canada canada _eos_ _eos_ _eos_ _eos_ _eos_
selon vous , quels seront , dans les dix prochaines années , les cinq enjeux les plus urgents en matière d' environnement et d' avenir viable pour vous et votre région ? _eos_
which do you think will be the five most pressing environmental and sustainability issues for you and your region in the next ten years ? _eos_
what do you think in the next five five next , , next and and and and and and you and in ? _eos_ ? ? _eos_ ?
dans quelle mesure l' expert est-il motivé et capable de partager ses connaissances , et dans quelle mesure son successeur est-il motivé et capable de recevoir ce savoir ? _eos_
what is the expert 's level of motivation and capability for sharing knowledge , and the successor 's motivation and capability of acquiring it ? _eos_
what is the the of the the and and and and and and and to and to and and ? ? ? _eos_ _eos_
###Markdown
Visualization
###Code
attn = to_np(attns[..., 180])
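# attns was stacked as (decoder steps, source positions, batch); [..., 180] selects validation
# example 180, so attn[i] is the attention distribution over source tokens at decoder step i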
# DEBUG
print(attn.shape)
# graph 1
print(attn[0].shape)
print(attn[0][:10])
# END DEBUG
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i, ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All (seq2seq + bi-directional + attention)
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl, self.nh, self.out_sl = nl, nh, out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh * 2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh * 2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec + nh * 2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl, bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2, 2, bs, -1).permute(0, 2, 1, 3).contiguous().view(2, bs, -1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res, attns = [], []
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp == 1).all():
break
if (y is not None) and (random.random() < self.pr_force):
if i >= len(y):
break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs):
return V(torch.zeros(self.nl * 2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20, 10), stepper=Seq2SeqStepper)
learn.sched.plot_loss()
learn.save('all')
learn.load('all')
###Output
_____no_output_____
###Markdown
**Test**
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180, 190):
print(' '.join([fr_itos[o] for o in x[:, i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:, i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:, i] if o != 1]))
print()
###Output
quelles composantes des différents aspects de la performance devraient être mesurées , quelles données pertinentes recueillir et comment ? _eos_
which components within various performance areas should be measured , whatkinds of data are appropriate to collect , and how should this be done ? _eos_
what components of the different aspects of should be be measured , and and how how ? _eos_
le premier ministre doit - il nommer un ministre d' état à la santé mentale , à la maladie mentale et à la toxicomanie ? _eos_
what role can the federal government play to ensure that individuals with mental illness and addiction have access to the drug therapy they need ? _eos_
who is the minister minister to minister minister to mental health mental and mental ? ? ? _eos_
quelles sont les conséquences de la hausse des formes d' emploi non conformes aux normes chez les travailleurs hautement qualifiés et chez ceux qui occupent des emplois plus marginaux ? _eos_
what is the impact of growing forms of non - standard employment for highly skilled workers and for those employed in more marginal occupations ? _eos_
what are the implications of increasing employment forms of workers workers workers workers workers workers workers workers workers workers workers workers more more ? _eos_ _eos_ _eos_
que se produit - il si le gestionnaire n' est pas en mesure de donner à l' employé nommé pour une période déterminée un préavis de cessation d' emploi d' un mois ou s' il néglige de le
what happens if the manager is unable to or neglects to give a term employee the one - month notice of non - renewal ? _eos_
what happens the manager does not to to the employee employee employee employee a a employee a employee or employee or or or or or or or or ?
quelles personnes , communautés ou entités sont considérées comme potentiels i ) bénéficiaires de la protection et ii ) titulaires de droits ? _eos_
which persons , communities or entities are identified as potential ( i ) beneficiaries of protection and / or ( ii ) rights holders ? _eos_
who , communities communities or entities as as potential as beneficiaries of ( protection and protection protection protection ? ? _eos_ _eos_ _eos_
quelles conditions particulières doivent être remplies pendant l' examen préliminaire international en ce qui concerne les listages des séquences de nucléotides ou d' acides aminés ou les tableaux y relatifs ? _eos_
what special requirements apply during the international preliminary examination to nucleotide and / or amino acid sequence listings and / or tables related thereto ? _eos_
what special conditions must be required during the international preliminary preliminary in for nucleotide or sequence amino or sequence or or or or tables ? ? _eos_ _eos_ _eos_
pourquoi cette soudaine réticence à promouvoir l' égalité des genres et à protéger les femmes de ce que , dans la plupart des cas , on peut qualifier de violations grossières des droits humains ? _eos_
why this sudden reluctance to effectively promote gender equality and protect women from what are – in many cases – egregious human rights violations ? _eos_
why this sudden effect of to to women women women and and of of of , , of of of human human human human ? _eos_ _eos_ _eos_ _eos_
pouvez - vous dire comment votre bagage culturel vous a aidée à aborder votre nouvelle vie au canada ( à vous adapter au mode de vie canadien ) ? _eos_
what are some things from your cultural background that have helped you navigate canadian life ( helped you adjust to life in canada ) ? _eos_
what can you you your your cultural your your you to to to canada canada canada life life life life canada ? _eos_
selon vous , quels seront , dans les dix prochaines années , les cinq enjeux les plus urgents en matière d' environnement et d' avenir viable pour vous et votre région ? _eos_
which do you think will be the five most pressing environmental and sustainability issues for you and your region in the next ten years ? _eos_
what do you see be be the the next five five five , and and and and and and and your your ? ? ? _eos_ _eos_ _eos_
dans quelle mesure l' expert est-il motivé et capable de partager ses connaissances , et dans quelle mesure son successeur est-il motivé et capable de recevoir ce savoir ? _eos_
what is the expert 's level of motivation and capability for sharing knowledge , and the successor 's motivation and capability of acquiring it ? _eos_
what is the expert 's and and knowledge knowledge knowledge knowledge and and and and and and and and and and and ? ? ? _eos_ _eos_
###Markdown
**Important: This notebook will only work with fastai-0.7.x. Do not try to run any fastai-1.x code from this path in the repository because it will load fastai-0.7.x**
###Code
%matplotlib inline
%reload_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Please note that this notebook is most likely going to cause a stuck process. So if you are going to run it, please make sure to restart your jupyter notebook as soon as you have completed running it. The bug happens inside the `fastText` library, which we have no control over. You can check the status of this issue [here](https://github.com/fastai/fastai/issues/754) and [here](https://github.com/facebookresearch/fastText/issues/618#issuecomment-419554225). For the future, note that there are 3 separate implementations of fasttext, and perhaps one of them works: https://github.com/facebookresearch/fastText/tree/master/python , https://pypi.org/project/fasttext/ , and https://radimrehurek.com/gensim/models/fasttext.html#module-gensim.models.fasttext Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
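# re_eq keeps only English lines that start with 'Wh' and run up to the first '?', and re_fq keeps the
# matching French line up to its first '?'; zipping the two files below pairs them up line by line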
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
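# toks2ids (next cell) builds a vocabulary of the 40,000 most frequent tokens, reserves ids 0-3 for
# _bos_/_pad_/_eos_/_unk, maps any out-of-vocabulary token to 3, appends the _eos_ id (2) to every
# sentence, and saves the resulting ids and the itos list to disk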
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fasttext word vectors available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = en_vecs.get_words(include_freq=True)
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
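# note: despite the _90 names, these take the 99th percentile of the English lengths and the 97th of
# the French lengths; the id sequences are truncated to those lengths just below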
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
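# the samplers order examples by target length (roughly, with some randomness for the training set),
# so each batch contains sentences of similar length and needs as little padding as possible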
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
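    # padding_idx=1 keeps the _pad_ row at zero; the other rows are overwritten below with the
    # pre-trained vectors, and words without a vector are collected in miss and left at random init.
    # The *3 scaling is presumably there to bring the fairly small-scale word vectors closer to unit
    # standard deviation - an assumption, not something stated in this notebook.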
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu' est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Translation files
###Code
from fastai.text import *
from pathlib import Path
torch.cuda.set_device(1)
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
%%time
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
en_tok[0], fr_tok[0]
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors
###Code
with (PATH/'glove.6B.100d.txt').open('r', encoding='utf-8') as f: lines = [line.split() for line in f]
en_vecd = {w:np.array(v, dtype=np.float32) for w,*v in lines}
pickle.dump(en_vecd, open(PATH/'glove.6B.100d.dict.pkl','wb'))
def is_number(s):
try:
float(s)
return True
except ValueError: return False
def get_vecs(lang):
with (PATH/f'wiki.{lang}.vec').open('r', encoding='utf-8') as f:
lines = [line.split() for line in f]
lines.pop(0)
vecd = {w:np.array(v, dtype=np.float32)
for w,*v in lines if is_number(v[0]) and len(v)==300}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en')
fr_vecd = get_vecs('fr')
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1, pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec*2, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
###Markdown
Please note that this notebook is most likely going to cause a stuck process, so if you are going to run it, please make sure to restart your Jupyter notebook as soon as you have completed running it. The bug happens inside the `fastText` library, which we have no control over. You can check the status of this issue [here](https://github.com/fastai/fastai/issues/754) and [here](https://github.com/facebookresearch/fastText/issues/618#issuecomment-419554225). For the future, note that there are three separate implementations of fastText, and perhaps one of them works: https://github.com/facebookresearch/fastText/tree/master/python , https://pypi.org/project/fasttext/ , and https://radimrehurek.com/gensim/models/fasttext.html#module-gensim.models.fasttext Translation files
###Code
from fastai.text import *
###Output
_____no_output_____
###Markdown
French/English parallel texts from http://www.statmt.org/wmt15/translation-task.html . It was created by Chris Callison-Burch, who crawled millions of web pages and then used *a set of simple heuristics to transform French URLs onto English URLs (i.e. replacing "fr" with "en" and about 40 other hand-written rules), and assume that these documents are translations of each other*.
###Code
PATH = Path('data/translate')
TMP_PATH = PATH/'tmp'
TMP_PATH.mkdir(exist_ok=True)
fname='giga-fren.release2.fixed'
en_fname = PATH/f'{fname}.en'
fr_fname = PATH/f'{fname}.fr'
re_eq = re.compile('^(Wh[^?.!]+\?)')
re_fq = re.compile('^([^?.!]+\?)')
lines = ((re_eq.search(eq), re_fq.search(fq))
for eq, fq in zip(open(en_fname, encoding='utf-8'), open(fr_fname, encoding='utf-8')))
qs = [(e.group(), f.group()) for e,f in lines if e and f]
pickle.dump(qs, (PATH/'fr-en-qs.pkl').open('wb'))
qs = pickle.load((PATH/'fr-en-qs.pkl').open('rb'))
qs[:5], len(qs)
en_qs,fr_qs = zip(*qs)
en_tok = Tokenizer.proc_all_mp(partition_by_cores(en_qs))
fr_tok = Tokenizer.proc_all_mp(partition_by_cores(fr_qs), 'fr')
en_tok[0], fr_tok[0]
np.percentile([len(o) for o in en_tok], 90), np.percentile([len(o) for o in fr_tok], 90)
keep = np.array([len(o)<30 for o in en_tok])
en_tok = np.array(en_tok)[keep]
fr_tok = np.array(fr_tok)[keep]
pickle.dump(en_tok, (PATH/'en_tok.pkl').open('wb'))
pickle.dump(fr_tok, (PATH/'fr_tok.pkl').open('wb'))
en_tok = pickle.load((PATH/'en_tok.pkl').open('rb'))
fr_tok = pickle.load((PATH/'fr_tok.pkl').open('rb'))
def toks2ids(tok,pre):
freq = Counter(p for o in tok for p in o)
itos = [o for o,c in freq.most_common(40000)]
itos.insert(0, '_bos_')
itos.insert(1, '_pad_')
itos.insert(2, '_eos_')
itos.insert(3, '_unk')
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
ids = np.array([([stoi[o] for o in p] + [2]) for p in tok])
np.save(TMP_PATH/f'{pre}_ids.npy', ids)
pickle.dump(itos, open(TMP_PATH/f'{pre}_itos.pkl', 'wb'))
return ids,itos,stoi
en_ids,en_itos,en_stoi = toks2ids(en_tok,'en')
fr_ids,fr_itos,fr_stoi = toks2ids(fr_tok,'fr')
def load_ids(pre):
ids = np.load(TMP_PATH/f'{pre}_ids.npy')
itos = pickle.load(open(TMP_PATH/f'{pre}_itos.pkl', 'rb'))
stoi = collections.defaultdict(lambda: 3, {v:k for k,v in enumerate(itos)})
return ids,itos,stoi
en_ids,en_itos,en_stoi = load_ids('en')
fr_ids,fr_itos,fr_stoi = load_ids('fr')
[fr_itos[o] for o in fr_ids[0]], len(en_itos), len(fr_itos)
###Output
_____no_output_____
###Markdown
Word vectors fastText word vectors are available from https://fasttext.cc/docs/en/english-vectors.html
###Code
# ! pip install git+https://github.com/facebookresearch/fastText.git
import fastText as ft
###Output
_____no_output_____
###Markdown
To use the fastText library, you'll need to download [fasttext word vectors](https://github.com/facebookresearch/fastText/blob/master/pretrained-vectors.md) for your language (download the 'bin plus text' ones).
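If the official fastText Python bindings hang (the stuck-process issue mentioned in the note at the top of this section), the plain-text `.vec` files can usually be loaded with gensim instead. The snippet below is only a sketch of that alternative, assuming gensim 4.x is installed, that `PATH` is the directory defined earlier, and that `wiki.en.vec` / `wiki.fr.vec` have been downloaded; the hypothetical `get_vecs_gensim` helper mirrors the `get_vecs` function used in the next cells.

```python
# Sketch: load fastText .vec files via gensim instead of the fastText bindings.
# Assumes gensim >= 4.0 and that PATH points at the data/translate directory above.
from gensim.models import KeyedVectors
import numpy as np
import pickle

def get_vecs_gensim(lang):
    # the .vec file is plain word2vec text format, so KeyedVectors can read it directly
    kv = KeyedVectors.load_word2vec_format(str(PATH/f'wiki.{lang}.vec'))
    # build the same {word: float32 vector} dict that get_vecs produces
    vecd = {w: kv[w].astype(np.float32) for w in kv.index_to_key}
    pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl', 'wb'))
    return vecd

# en_vecd = get_vecs_gensim('en')
# fr_vecd = get_vecs_gensim('fr')
```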
###Code
en_vecs = ft.load_model(str((PATH/'wiki.en.bin')))
fr_vecs = ft.load_model(str((PATH/'wiki.fr.bin')))
def get_vecs(lang, ft_vecs):
vecd = {w:ft_vecs.get_word_vector(w) for w in ft_vecs.get_words()}
pickle.dump(vecd, open(PATH/f'wiki.{lang}.pkl','wb'))
return vecd
en_vecd = get_vecs('en', en_vecs)
fr_vecd = get_vecs('fr', fr_vecs)
en_vecd = pickle.load(open(PATH/'wiki.en.pkl','rb'))
fr_vecd = pickle.load(open(PATH/'wiki.fr.pkl','rb'))
ft_words = en_vecs.get_words(include_freq=True)
ft_word_dict = {k:v for k,v in zip(*ft_words)}
ft_words = sorted(ft_word_dict.keys(), key=lambda x: ft_word_dict[x])
len(ft_words)
dim_en_vec = len(en_vecd[','])
dim_fr_vec = len(fr_vecd[','])
dim_en_vec,dim_fr_vec
en_vecs = np.stack(list(en_vecd.values()))
en_vecs.mean(),en_vecs.std()
###Output
_____no_output_____
###Markdown
Model data
###Code
enlen_90 = int(np.percentile([len(o) for o in en_ids], 99))
frlen_90 = int(np.percentile([len(o) for o in fr_ids], 97))
enlen_90,frlen_90
en_ids_tr = np.array([o[:enlen_90] for o in en_ids])
fr_ids_tr = np.array([o[:frlen_90] for o in fr_ids])
class Seq2SeqDataset(Dataset):
def __init__(self, x, y): self.x,self.y = x,y
def __getitem__(self, idx): return A(self.x[idx], self.y[idx])
def __len__(self): return len(self.x)
np.random.seed(42)
trn_keep = np.random.rand(len(en_ids_tr))>0.1
en_trn,fr_trn = en_ids_tr[trn_keep],fr_ids_tr[trn_keep]
en_val,fr_val = en_ids_tr[~trn_keep],fr_ids_tr[~trn_keep]
len(en_trn),len(en_val)
trn_ds = Seq2SeqDataset(fr_trn,en_trn)
val_ds = Seq2SeqDataset(fr_val,en_val)
bs=125
trn_samp = SortishSampler(en_trn, key=lambda x: len(en_trn[x]), bs=bs)
val_samp = SortSampler(en_val, key=lambda x: len(en_val[x]))
trn_dl = DataLoader(trn_ds, bs, transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=trn_samp)
val_dl = DataLoader(val_ds, int(bs*1.6), transpose=True, transpose_y=True, num_workers=1,
pad_idx=1, pre_pad=False, sampler=val_samp)
md = ModelData(PATH, trn_dl, val_dl)
it = iter(trn_dl)
its = [next(it) for i in range(5)]
[(len(x),len(y)) for x,y in its]
###Output
_____no_output_____
###Markdown
Initial model
###Code
def create_emb(vecs, itos, em_sz):
emb = nn.Embedding(len(itos), em_sz, padding_idx=1)
wgts = emb.weight.data
miss = []
for i,w in enumerate(itos):
try: wgts[i] = torch.from_numpy(vecs[w]*3)
except: miss.append(w)
print(len(miss),miss[5:10])
return emb
nh,nl = 256,2
class Seq2SeqRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.emb_enc_drop = nn.Dropout(0.15)
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
def seq2seq_loss(input, target):
sl,bs = target.size()
sl_in,bs_in,nc = input.size()
if sl>sl_in: input = F.pad(input, (0,0,0,0,0,sl-sl_in))
input = input[:sl]
return F.cross_entropy(input.view(-1,nc), target.view(-1))#, ignore_index=1)
opt_fn = partial(optim.Adam, betas=(0.8, 0.99))
rnn = Seq2SeqRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.lr_find()
learn.sched.plot()
lr=3e-3
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('initial')
learn.load('initial')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might might influence on the their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what not change change ? _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the doors doors ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are the located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim sexual sexual ? ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are people people aboriginal aboriginal ? _eos_
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these two different ? ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not _eos_
###Markdown
Bidir
###Code
class Seq2SeqRNN_Bidir(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.05)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
def forward(self, inp):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_Bidir(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10))
learn.save('bidir')
###Output
_____no_output_____
###Markdown
Teacher forcing
###Code
class Seq2SeqStepper(Stepper):
def step(self, xs, y, epoch):
self.m.pr_force = (10-epoch)*0.1 if epoch<10 else 0
xtra = []
output = self.m(*xs, y)
if isinstance(output,tuple): output,*xtra = output
self.opt.zero_grad()
loss = raw_loss = self.crit(output, y)
if self.reg_fn: loss = self.reg_fn(output, xtra, raw_loss)
loss.backward()
if self.clip: # Gradient clipping
nn.utils.clip_grad_norm(trainable_params_(self.m), self.clip)
self.opt.step()
return raw_loss.data[0]
class Seq2SeqRNN_TeacherForcing(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.pr_force = 1.
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res = []
for i in range(self.out_sl):
emb = self.emb_dec(dec_inp).unsqueeze(0)
outp, h = self.gru_dec(emb, h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqRNN_TeacherForcing(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=12, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('forcing')
###Output
_____no_output_____
###Markdown
Attentional model
###Code
def rand_t(*sz): return torch.randn(sz)/math.sqrt(sz[0])
def rand_p(*sz): return nn.Parameter(rand_t(*sz))
class Seq2SeqAttnRNN(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25)
self.out_enc = nn.Linear(nh, em_sz_dec, bias=False)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None, ret_attn=False):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = self.out_enc(h)
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
res = torch.stack(res)
if ret_attn: res = res,torch.stack(attns)
return res
def initHidden(self, bs): return V(torch.zeros(self.nl, bs, self.nh))
rnn = Seq2SeqAttnRNN(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
lr=2e-3
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
learn.save('attn')
learn.load('attn')
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs,attns = learn.model(V(x),ret_attn=True)
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
attn = to_np(attns[...,180])
fig, axes = plt.subplots(3, 3, figsize=(15, 10))
for i,ax in enumerate(axes.flat):
ax.plot(attn[i])
###Output
_____no_output_____
###Markdown
All
###Code
class Seq2SeqRNN_All(nn.Module):
def __init__(self, vecs_enc, itos_enc, em_sz_enc, vecs_dec, itos_dec, em_sz_dec, nh, out_sl, nl=2):
super().__init__()
self.emb_enc = create_emb(vecs_enc, itos_enc, em_sz_enc)
self.nl,self.nh,self.out_sl = nl,nh,out_sl
self.gru_enc = nn.GRU(em_sz_enc, nh, num_layers=nl, dropout=0.25, bidirectional=True)
self.out_enc = nn.Linear(nh*2, em_sz_dec, bias=False)
self.drop_enc = nn.Dropout(0.25)
self.emb_dec = create_emb(vecs_dec, itos_dec, em_sz_dec)
self.gru_dec = nn.GRU(em_sz_dec, em_sz_dec, num_layers=nl, dropout=0.1)
self.emb_enc_drop = nn.Dropout(0.15)
self.out_drop = nn.Dropout(0.35)
self.out = nn.Linear(em_sz_dec, len(itos_dec))
self.out.weight.data = self.emb_dec.weight.data
self.W1 = rand_p(nh*2, em_sz_dec)
self.l2 = nn.Linear(em_sz_dec, em_sz_dec)
self.l3 = nn.Linear(em_sz_dec+nh*2, em_sz_dec)
self.V = rand_p(em_sz_dec)
def forward(self, inp, y=None):
sl,bs = inp.size()
h = self.initHidden(bs)
emb = self.emb_enc_drop(self.emb_enc(inp))
enc_out, h = self.gru_enc(emb, h)
h = h.view(2,2,bs,-1).permute(0,2,1,3).contiguous().view(2,bs,-1)
h = self.out_enc(self.drop_enc(h))
dec_inp = V(torch.zeros(bs).long())
res,attns = [],[]
w1e = enc_out @ self.W1
for i in range(self.out_sl):
w2h = self.l2(h[-1])
u = F.tanh(w1e + w2h)
a = F.softmax(u @ self.V, 0)
attns.append(a)
Xa = (a.unsqueeze(2) * enc_out).sum(0)
emb = self.emb_dec(dec_inp)
wgt_enc = self.l3(torch.cat([emb, Xa], 1))
outp, h = self.gru_dec(wgt_enc.unsqueeze(0), h)
outp = self.out(self.out_drop(outp[0]))
res.append(outp)
dec_inp = V(outp.data.max(1)[1])
if (dec_inp==1).all(): break
if (y is not None) and (random.random()<self.pr_force):
if i>=len(y): break
dec_inp = y[i]
return torch.stack(res)
def initHidden(self, bs): return V(torch.zeros(self.nl*2, bs, self.nh))
rnn = Seq2SeqRNN_All(fr_vecd, fr_itos, dim_fr_vec, en_vecd, en_itos, dim_en_vec, nh, enlen_90)
learn = RNN_Learner(md, SingleModel(to_gpu(rnn)), opt_fn=opt_fn)
learn.crit = seq2seq_loss
learn.fit(lr, 1, cycle_len=15, use_clr=(20,10), stepper=Seq2SeqStepper)
###Output
_____no_output_____
###Markdown
Test
###Code
x,y = next(iter(val_dl))
probs = learn.model(V(x))
preds = to_np(probs.max(2)[1])
for i in range(180,190):
print(' '.join([fr_itos[o] for o in x[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in y[:,i] if o != 1]))
print(' '.join([en_itos[o] for o in preds[:,i] if o!=1]))
print()
###Output
quels facteurs pourraient influer sur le choix de leur emplacement ? _eos_
what factors influencetheir location ? _eos_
what factors might affect the choice of their ? ? _eos_
qu’ est -ce qui ne peut pas changer ? _eos_
what can not change ? _eos_
what can not change change _eos_
que faites - vous ? _eos_
what do you do ? _eos_
what do you do ? _eos_
qui réglemente les pylônes d' antennes ? _eos_
who regulates antenna towers ? _eos_
who regulates the antenna ? ? _eos_
où sont - ils situés ? _eos_
where are they located ? _eos_
where are they located ? _eos_
quelles sont leurs compétences ? _eos_
what are their qualifications ? _eos_
what are their skills ? _eos_
qui est victime de harcèlement sexuel ? _eos_
who experiences sexual harassment ? _eos_
who is victim harassment harassment ? _eos_
quelles sont les personnes qui visitent les communautés autochtones ? _eos_
who visits indigenous communities ? _eos_
who are the people people ? ?
pourquoi ces trois points en particulier ? _eos_
why these specific three ? _eos_
why are these three specific ? _eos_
pourquoi ou pourquoi pas ? _eos_
why or why not ? _eos_
why or why not ? _eos_
|
templates/extratrees.ipynb | ###Markdown
Imports
###Code
import time
import gc
gc.enable()
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
import scipy.stats as st
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.metrics import #task-dependent
from sklearn.ensemble import ExtraTreesClassifier as XTC
from sklearn.ensemble import ExtraTreesRegressor as XTR
import optuna
from optuna.samplers import TPESampler
train = pd.read_csv('')
test = pd.read_csv('')
###Output
_____no_output_____
###Markdown
Config
###Code
SEED = 2311
N_FOLDS = 5
N_THREADS = 4 #number of CPUs
IS_CLF = True #True for Classification, False for Regression
TARGET = '----'
ID_COL = '----'
TEST_INDEX = test.pop(ID_COL) # id column
###Output
_____no_output_____
###Markdown
Preprocessing
###Code
features = list(test.columns)
cat_features = list(test.select_dtypes('category').columns)
num_features = list(test.select_dtypes('number').columns)
train[cat_features] = train[cat_features].astype('int')
test[cat_features] = test[cat_features].astype('int')
labels = LabelEncoder()
train[TARGET] = labels.fit_transform(train[TARGET])
###Output
_____no_output_____
###Markdown
Baseline
###Code
baseline_params = {
'n_estimators': 150,
'n_jobs': N_THREADS,
'verbose': 0,
'random_state': SEED
}
if IS_CLF:
baseline = XTC(**baseline_params).fit(train[features], train[TARGET])
else:
baseline = XTR(**baseline_params).fit(train[features], train[TARGET])
predictions = baseline.predict(test[features])
submission_baseline = pd.DataFrame({ID_COL: TEST_INDEX,
TARGET: labels.inverse_transform(predictions)})
del baseline
gc.collect()
###Output
_____no_output_____
###Markdown
Hyperparameter tuning
###Code
def objective(trial, train):
param_grid = {
'n_estimators': trial.suggest_int('n_estimators', 200, 2000, step=50),
'max_depth': trial.suggest_int('max_depth', 3, 15),
'max_features': trial.suggest_discrete_uniform('max_features', 0.1, 1.0, 0.1),
'bootstrap': trial.suggest_categorical('bootstrap', [True, False]),
'ccp_alpha': trial.suggest_uniform('ccp_alpha', 0.0, 0.1)
}
if param_grid['bootstrap']:
param_grid['oob_score'] = trial.suggest_categorical('oob_score', [True, False])
param_grid['max_samples'] = trial.suggest_uniform('max_samples', 0.1, 1.0)
if IS_CLF:
param_grid['criterion'] = trial.suggest_categorical('criterion', ['gini', 'entropy'])
param_grid['class_weight'] = trial.suggest_categorical('class_weight', ['balanced', 'balanced_subsample'])
model = XTC(**param_grid, verbose=0, n_jobs=N_THREADS, random_state=SEED)
else:
param_grid['criterion'] = trial.suggest_categorical('criterion', ['squared_error', 'absolute_error'])
model = XTR(**param_grid, verbose=0, n_jobs=N_THREADS, random_state=SEED)
scores = []
for fold in range(N_FOLDS):
xtrain = train[train.fold != fold]
ytrain = xtrain[TARGET]
xval = train[train.fold == fold]
yval = xval[TARGET]
gc.collect()
model.fit(xtrain[features], ytrain)
val_preds = model.predict(xval[features])
        # val_preds = model.predict_proba(xval[features])[:,1]
score = ----(yval, val_preds)
scores.append(score)
return np.mean(scores)
def tune(objective, direction, train):
study = optuna.create_study(sampler=TPESampler(seed=SEED),
direction=direction)
study.optimize(lambda trial: objective(trial, train),
n_trials=100)
best_params = study.best_params
print(f'Best score: {study.best_value:.5f}')
print('Best params:')
for key, value in best_params.items():
print(f'\t{key}: {value}')
return best_params
direction = '----' #maximize/minimize according to metric
best_params = tune(objective, direction, train)
gc.collect()
###Output
_____no_output_____
###Markdown
CV + Inference
###Code
def custom_cv(train, test, features, model):
oof_preds = {}
test_preds = []
scores = []
cv_start = time.time()
    for fold in range(N_FOLDS):
print('-' * 40)
xtrain = train[train.fold != fold].reset_index(drop=True)
xval = train[train.fold == fold].reset_index(drop=True)
val_idx = xval[ID_COL].values.tolist()
fold_start = time.time()
model.fit(xtrain[features], xtrain[TARGET])
val_preds = model.predict(xval[features])
# val_preds = model.predict_proba(xval[features])[:,1]
oof_preds.update(dict(zip(val_idx, val_preds)))
score = ----(xval[TARGET], val_preds)
scores.append(score)
fold_end = time.time()
print(f'Fold #{fold}: Score = {score:.5f}\t[Time: {fold_end - fold_start:.2f} secs]')
test_preds.append(model.predict(test[features]))
# test_preds.append(model.predict_proba(test[features])[:,1])
cv_end = time.time()
print(f'\nAverage score = {np.mean(scores):.5f} with std. dev. = {np.std(scores):.5f}')
print(f'[Total time: {cv_end - cv_start:.2f} secs]\n')
oof_preds = pd.DataFrame.from_dict(oof_preds, orient='index').reset_index()
test_preds = st.mode(np.column_stack(test_preds), axis=1).mode
# test_preds = np.mean(np.column_stack(test_preds), axis=1)
return oof_preds, test_preds
if IS_CLF:
model = XTC(**best_params, verbose=0, n_jobs=N_THREADS, random_state=SEED)
else:
model = XTR(**best_params, verbose=0, n_jobs=N_THREADS, random_state=SEED)
oof_preds, test_preds = custom_cv(train, test, features, model)
###Output
_____no_output_____
###Markdown
Postprocessing and Submission
###Code
#any post-processing if needed
test_preds = labels.inverse_transform(test_preds)
submission_xt = pd.DataFrame({ID_COL: TEST_INDEX,
TARGET: test_preds})
submission_xt.to_csv('submission_xt.csv', index=False)
###Output
_____no_output_____ |
patrick_codes/twiiter_api.ipynb | ###Markdown
This notebook is used to fetch Twitter data via the Twitter API > Import libraries
###Code
import pandas as pd
import tweepy
from tweepy import OAuthHandler
from tweepy import API
from tweepy import Cursor
from datetime import datetime, date, time, timedelta
from collections import Counter
import os, sys
import csv
###Output
_____no_output_____
###Markdown
> Load dotenv to expose api keys to the application
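As a minimal sketch (not part of the original notebook), this is what the `../.env` file read in the next cell is assumed to contain and how the values come back out of the environment. The variable names match the code below; the values shown are placeholders, not real credentials.

```python
# Expected layout of ../.env (placeholder values, one NAME=value pair per line):
#
#   API_KEY=xxxxxxxxxxxxxxxxxxxxxxxxx
#   API_SECRET_KEY=xxxxxxxxxxxxxxxxxxxxxxxxx
#   ACCESS_TOKEN=xxxxxxxxxxxxxxxxxxxxxxxxx
#   ACCESS_TOKEN_SECRET=xxxxxxxxxxxxxxxxxxxxxxxxx
from dotenv import load_dotenv
import os

found = load_dotenv('../.env')   # True if the file was found and parsed
api_key = os.getenv('API_KEY')   # None if the variable is not set
if not found or api_key is None:
    raise RuntimeError('API_KEY not found - check the ../.env file')
```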
###Code
from dotenv import load_dotenv
load_dotenv('../.env')
API_KEY="API_KEY"
API_SECRET_KEY="API_SECRET_KEY"
ACCESS_TOKEN="ACCESS_TOKEN"
ACCESS_TOKEN_SECRET="ACCESS_TOKEN_SECRET"
print(API_KEY, API_SECRET_KEY, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
API_KEY = os.environ.get(API_KEY)
API_SECRET_KEY = os.getenv(API_SECRET_KEY)
ACCESS_TOKEN = os.getenv(ACCESS_TOKEN)
ACCESS_TOKEN_SECRET=os.getenv(ACCESS_TOKEN_SECRET)
auth = OAuthHandler(API_KEY, API_SECRET_KEY)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)
auth_api = API(auth)
###Output
_____no_output_____
###Markdown
> Testing Api
###Code
search_words = "airquality"
date_since="2020-03-03"
# Collect tweets
tweets = tweepy.Cursor(api.search,
q=search_words, tweet_mode='extended',
lang="en",
since=date_since
).items(2)
# Iterate and print tweets
for tweet in tweets:
print(tweet.full_text)
# print(tweet._json['full_text'])
tweets = Cursor(api.user_timeline, id='WestAfricaAQ',
tweet_mode='extended',
lang="en", count=10).items(2)
for tweet in tweets:
print(tweet.full_text)
###Output
RT @AguGeohealth: Are you a #BlackGeoscientist (anywhere in the world!) who is interested in how our environment and Earth impacts human he…
RT @cleanaironea: 1/n
While this is preliminary, we have tried to firstly test our open source data mining tools plus compare current trend…
###Markdown
> TWITTER API ALL SET UP! Data Extraction
###Code
hashtags= ['#airquality ','#cleanair','#airpollution' ,'#pollution',
'#hvac', '#airpurifier', '#indoorairquality','#health',
'#covid', '#air', '#climatechange',' #indoorair',
'#environment','#airconditioning', '#coronavirus', '#heating',
'#mold', '#freshair', '#safety', '#ac', '#airfilter', '#allergies',
'#hvacservice', '#ventilation','#wellness','#delhipollution',
'#airconditioner','#airqualityindex','#bhfyp',
'particulate matter', 'fine particulate matter','#pm2_5',
'#emissions', '#natureishealing','#nature','#pollutionfree',
'#wearethevirus']
accounts = ['@GhanaAQ','@asap_eastafrica', '@WestAfricaAQ']
geocodes = {'lagos':("6.48937,3.37709"),'cape_town':("-33.99268,18.46654"),
'joburg' : ("-26.22081,28.03239"),
'accra' : ("5.58445,-0.20514"),
'nairobi' : ("-1.27467,36.81178"),
'mombasa' : ("-4.04549,39.66644"),
'kigali' : ("-1.95360,30.09186"),
'kampala' : ("0.32400,32.58662")}
str(65)
x = geocodes['lagos']
x+','+str(7)+'km'
x
###Output
_____no_output_____
###Markdown
___________________________________
###Code
!pip install GetOldTweets3
import twint
import GetOldTweets3 as got
got.manager.TweetCriteria
# tweetCriteria = got.manager.TweetCriteria().setQuerySearch('europe refugees')\
# .setSince("2015-05-01")\
# .setUntil("2015-09-30")\
# .setMaxTweets(1)
# tweet = got.manager.TweetManager.getTweets(tweetCriteria)[0]
# print(tweet.text)
# %tb
class GetCursor():
import tweepy
from tweepy import OAuthHandler
from tweepy import API
from tweepy import Cursor
from dotenv import load_dotenv
import os, sys
def __init__(self,env_file=None):
if env_file is None:
self.env = load_dotenv('../.env')
else:
self.env = load_dotenv(env_file)
def __repr__(self):
return "Twitter API Auth Object"
def get_auth(self):
API_KEY="API_KEY"
API_SECRET_KEY="API_SECRET_KEY"
ACCESS_TOKEN="ACCESS_TOKEN"
ACCESS_TOKEN_SECRET="ACCESS_TOKEN_SECRET"
self.__API_KEY = os.environ.get(API_KEY)
self.__API_SECRET_KEY = os.getenv(API_SECRET_KEY)
self.__ACCESS_TOKEN = os.getenv(ACCESS_TOKEN)
self.__ACCESS_TOKEN_SECRET=os.getenv(ACCESS_TOKEN_SECRET)
try:
self.__auth = OAuthHandler(API_KEY, API_SECRET_KEY)
self.__auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
self.api = API(auth, wait_on_rate_limit=True)
self.auth_api = API(auth, retry_count=5,retry_delay=5,
timeout=60,
wait_on_rate_limit=True,wait_on_rate_limit_notify=True)
except tweepy.TweepError as e:
print(e.reason())
class GetTweets(GetCursor):
# import dependencies
import tweepy
from tweepy import Cursor
from datetime import datetime, date, time, timedelta
def __init__(self,env_file=None):
super().__init__(env_file)
self.get_auth()
print('Authentication successful')
def __repr__(self):
return "Get tweets from Hashtags -> # & Users -> @"
"""
helper functions
1. limit_handled - handle wait_limit error
2. check_is_bot - check if handle is a bot
3. save_result - save data to a file
"""
    def limit_handled(self, cursor):
        import time  # stdlib time module; the `time` imported above is datetime.time and has no sleep()
        while True:
            try:
                yield cursor.next()
            except tweepy.RateLimitError:
                time.sleep(15 * 60)  # default wait: 15 minutes
def check_is_bot(self, handle)-> bool:
self.is_bot = False
account_age_days = 0
item = self.auth_api.get_user(handle)
account_created_date = item.created_at
delta = datetime.utcnow() - account_created_date
account_age_days = delta.days
if account_age_days < 180: #(6 months)
            self.is_bot = True
return self.is_bot
def save_result(self, data:pd.DataFrame, path:str='../saved_data/',
fname='new_file'):
        data.to_csv(path + fname, index=False)
def get_handle_tweets(self, handles:list=[], items_count=20):
self.handles = handles
if len(self.handles) > 0:
for handle in self.handles:
print(f"collecting tweets of -> {handle}")
users_tweets = {}
# this helps avoid Tweepy errors like suspended users or user not found errors
try:
item = self.auth_api.get_user(handle)
except tweepy.TweepError as e:
print("found errors!!!")
continue
#check if handle is a potential bot
if self.check_is_bot(handle):
print('bot alert!!!, skipping the bad guy :(')
continue
else:
current_handle_tweets = Cursor(api.user_timeline, id=handle,
tweet_mode='extended',
lang="en").items(items_count)
for tweet in current_handle_tweets:
users_tweets[handle] = ({'tweet_text':tweet.full_text.encode('utf-8'),
'tweet_date':tweet._json['created_at'],
'retweet_count':tweet._json['retweet_count'],
'favorite_count':tweet._json['favorite_count']})
self.handles_data = pd.DataFrame(users_tweets).T
return self.handles_data
def get_tag_tweets(self, tags:list=[], geocode:str=None,
radius:int=None,
until_date:str="2020-03-30", no_of_items=10):
"""
until_date should be formatted as YYYY-MM-DD
geocode should be used
"""
#if geocode is not None
self.tags = tags
tags_tweets = {}
for tag in self.tags:
print(f"collecting tweets of -> {tag}")
if radius is not None and geocode is not None:
geocode = geocode+','+str(radius)+'km'
current_tag_tweets = tweepy.Cursor(api.search,
q=tag, tweet_mode='extended',
lang="en",
since=until_date,
geocode=geocode,
).items(no_of_items)
for tweet in current_tag_tweets:
tags_tweets[tag] = ({'tweet_text':tweet.full_text.encode('utf-8'),
'tweet_date':tweet._json['created_at'],
'retweet_count':tweet._json['retweet_count'],
'favorite_count':tweet._json['favorite_count']})
self.tags_data = pd.DataFrame(tags_tweets).T
return self.tags_data
def main():
    return "wip"
if __name__ == '__main__':
    main()
get_tweet= GetTweets()
trial_tags = ['#airquality']#,'#cleanair','#airpollution' ,'#pollution',
#'#hvac', '#airpurifier']
trial_accounts = ['@GhanaAQ']  # ,'@asap_eastafrica', '@WestAfricaAQ'
###Output
_____no_output_____
###Markdown
>> test for tags
###Code
trial_tags_result = get_tweet.get_tag_tweets(trial_tags)
trial_tags_result
###Output
_____no_output_____
###Markdown
>> test for accounts
###Code
trial_account_results = get_tweet.get_handle_tweets(trial_accounts)
trial_account_results
###Output
_____no_output_____
###Markdown
_____________________________________________________________________________________________________ Working with BlueBird
###Code
!pip install bluebird
###Output
Defaulting to user installation because normal site-packages is not writeable
Collecting bluebird
Downloading bluebird-0.0.4a0-py3-none-any.whl (19 kB)
Requirement already satisfied: lxml in /home/patrick/.local/lib/python3.6/site-packages (from bluebird) (4.5.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from bluebird) (2.24.0)
Collecting orderedset
Downloading orderedset-2.0.3.tar.gz (101 kB)
     |████████████████████████████████| 101 kB 306 kB/s eta 0:00:01
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python3/dist-packages (from requests->bluebird) (2018.1.18)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/lib/python3/dist-packages (from requests->bluebird) (3.0.4)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/patrick/.local/lib/python3.6/site-packages (from requests->bluebird) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/lib/python3/dist-packages (from requests->bluebird) (2.6)
Building wheels for collected packages: orderedset
Building wheel for orderedset (setup.py) ... done
Created wheel for orderedset: filename=orderedset-2.0.3-cp36-cp36m-linux_x86_64.whl size=255683 sha256=bd7ff7ebd8f0f3274190dc56a7abf1b9319df19040e21cc1433ca5b5f8670266
Stored in directory: /home/patrick/.cache/pip/wheels/ff/f8/cf/5baf5e74a6f3a9b5cb405408673ed11dc1276599cc0877dae7
Successfully built orderedset
Installing collected packages: orderedset, bluebird
Successfully installed bluebird-0.0.4a0 orderedset-2.0.3
WARNING: You are using pip version 20.2.2; however, version 20.2.3 is available.
You should consider upgrading via the '/usr/bin/python3 -m pip install --upgrade pip' command.
###Markdown
This notebook is used to fetch Twitter data via the Twitter API > Import libraries
###Code
import pandas as pd
import tweepy
from tweepy import OAuthHandler
from tweepy import API
from tweepy import Cursor
from datetime import datetime, date, time, timedelta
from collections import Counter
import os, sys
import csv
###Output
_____no_output_____
###Markdown
> Load dotenv to expose api keys to the application
###Code
from dotenv import load_dotenv
load_dotenv('../.env')
API_KEY="API_KEY"
API_SECRET_KEY="API_SECRET_KEY"
ACCESS_TOKEN="ACCESS_TOKEN"
ACCESS_TOKEN_SECRET="ACCESS_TOKEN_SECRET"
print(API_KEY, API_SECRET_KEY, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
API_KEY = os.environ.get(API_KEY)
API_SECRET_KEY = os.getenv(API_SECRET_KEY)
ACCESS_TOKEN = os.getenv(ACCESS_TOKEN)
ACCESS_TOKEN_SECRET=os.getenv(ACCESS_TOKEN_SECRET)
auth = OAuthHandler(API_KEY, API_SECRET_KEY)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True)
auth_api = API(auth)
###Output
_____no_output_____
###Markdown
> Testing Api
###Code
search_words = "#wildfires"
date_since="2018-11-16"
# Collect tweets
tweets = tweepy.Cursor(api.search,
q=search_words,
lang="en",
since=date_since).items(2)
# Iterate and print tweets
for tweet in tweets:
print(tweet.text)
###Output
RT @jcfphotog: The Glass Fire burns in the hills of Calistoga, Calif., on Monday, Sept. 28, 2020. Calistoga is under mandatory evacuation t…
The Glass Fire burns in the hills of Calistoga, Calif., on Monday, Sept. 28, 2020. Calistoga is under mandatory eva… https://t.co/qCnmxzqQyz
|
week_6.ipynb | ###Markdown
Football Prediction - Regression Analysis

1. Defining the Question

a) Specifying the Question

Mchezopesa Ltd has tasked us to accomplish the task below: predict the result of a game between team 1 and team 2, based on who's home and who's away, and on whether or not the game is friendly (include rank in your training). You have two possible approaches (as shown below) given the datasets that will be provided.

Input: Home team, Away team, Tournament type (World cup, Friendly, Other)

**Approach 1: Polynomial approach**

What to train given:
Rank of home team
Rank of away team
Tournament type

Model 1: Predict how many goals the home team scores.
Model 2: Predict how many goals the away team scores.

**Approach 2: Logistic approach**

Feature Engineering: Figure out from the home team's perspective if the game is a Win, Lose or Draw (W, L, D).

b) Defining the Metric for Success

Using polynomial regression, the Root Mean Squared Error will be used to measure the performance of the model. The predictions of the logistic regression model will be measured using the accuracy score.

c) Understanding the Context

As a data analyst at Mchezo Ltd, the following task is required of you: make a prediction of a game between team 1 and team 2, based on who's home and who is away, and on whether or not the game is friendly.

A more detailed explanation and history of the rankings is available here: [link text](https://en.wikipedia.org/wiki/FIFA_World_Rankings)
An explanation of the ranking procedure is available here: [link text](https://www.fifa.com/fifa-world-ranking/procedure/men.html)
Some features are available on the FIFA ranking page: [link text](https://www.fifa.com/fifa-world-ranking/ranking-table/men/index.html)

d) Recording the Experimental Design

Perform appropriate regressions on the data, including your justification, and challenge your solution by providing insights on how you can make improvements.
* Perform your EDA
* Perform any necessary feature engineering
* Check for multicollinearity
* Cross-validate the model
* Compute RMSE
* Create residual plots for your models, and assess their heteroscedasticity using Bartlett's test (see the sketch below)
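Since this section does not reach the residual diagnostics listed above, the following is a minimal sketch (not part of the original notebook) of how the residual plot and Bartlett's test could be run on the polynomial model's held-out targets `y_test` and predictions `y_pred`, which are created further down in the regression section.

```python
# Sketch: residual plot and Bartlett's test for heteroscedasticity.
# Assumes y_test and y_pred from the polynomial regression section below.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import bartlett

residuals = np.asarray(y_test) - np.asarray(y_pred)
fitted = np.asarray(y_pred)

# Residuals should scatter evenly around zero across the fitted values
plt.scatter(fitted, residuals, alpha=0.3)
plt.axhline(0, color='red')
plt.xlabel('Fitted values')
plt.ylabel('Residuals')
plt.title('Residual plot')
plt.show()

# Bartlett's test compares residual variance across groups;
# here the residuals are split at the median fitted value
low = residuals[fitted <= np.median(fitted)]
high = residuals[fitted > np.median(fitted)]
stat, p_value = bartlett(low, high)
print(f'Bartlett statistic = {stat:.4f}, p-value = {p_value:.4f}')
# A p-value below 0.05 points to unequal variances, i.e. heteroscedastic residuals
```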
###Code
# Import Libraries
# Analysis libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Machine learning libraries
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, GridSearchCV, KFold, cross_val_score
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import mean_squared_error, f1_score, accuracy_score, confusion_matrix
# Loading the Datasets
# Fifa dataset
rank = pd.read_csv('fifa_ranking.csv')
# results dataset
results = pd.read_csv('results.csv')
# Previewing the top of our dataset
rank.head()
results.head()
# getting the info
rank.info()
results.info()
rank.isnull().sum()
results.isnull().sum()
# check
rank.duplicated().sum()
# drop duplicate in rank
rank.drop_duplicates(inplace=True)
rank.duplicated().sum()
# drop duplicate in results
results.duplicated().sum()
# selecting all non-objects data
cols = rank.dtypes[rank.dtypes != "object"].index
cols
# Checking for Outliers
# Ranking Dataset
fig, ax = plt.subplots(len(cols), figsize=(8,30))
for i, col_val in enumerate(cols):
sns.boxplot(y=rank[col_val], ax=ax[i])
ax[i].set_title('Box plot - {}'.format(col_val), fontsize=10)
ax[i].set_xlabel(col_val, fontsize=8)
plt.tight_layout()
plt.show()
# Checking for outliers
# Results dataset
cols_re = results.dtypes[results.dtypes != "object"].index
fig, ax = plt.subplots(len(cols_re), figsize=(8,20))
for i, col_val in enumerate(cols_re):
sns.boxplot(y=results[col_val], ax=ax[i])
ax[i].set_title('Box plot - {}'.format(col_val), fontsize=10)
ax[i].set_xlabel(col_val, fontsize=8)
plt.tight_layout()
plt.show()
# Changing the date column data type to datetime#
rank['rank_date'] = pd.to_datetime(rank['rank_date'])
results['date'] = pd.to_datetime(results['date'])
# Create new columns and split the date colums into month and year.
#
# For the year columns
rank['year'] = rank['rank_date'].dt.year
results['year'] = results['date'].dt.year
# Now for the month columns
rank['month'] = rank['rank_date'].dt.month
results['month'] = results['date'].dt.month
# Dropping irrelevant columns in rank dataset
rank_clean = rank.drop(['country_abrv', 'total_points',
'previous_points', 'rank_change', 'cur_year_avg',
'cur_year_avg_weighted', 'last_year_avg', 'last_year_avg_weighted',
'two_year_ago_avg', 'two_year_ago_weighted', 'three_year_ago_avg',
'three_year_ago_weighted',], axis=1)
results_clean =results.drop(['city', 'country' ], axis=1)
rank_clean.head()
results_clean.head()
# Merging the dataset
# Home Team dataset
total_home = pd.merge(results_clean, rank_clean, left_on = ['home_team', 'year', 'month'],
right_on = ['country_full', 'year', 'month'], how = 'inner' )
# Merging the dataset
# Away Team dataset
total_away = pd.merge(results_clean, rank_clean, how = 'inner', left_on = ['year', 'month', 'away_team'],
right_on = ['year', 'month', 'country_full'])
# Renaming the ranks columns to get the home team and away team ranks
#
total_home.rename({'rank' : 'home_rank'}, axis = 1, inplace = True)
total_away.rename({'rank' : 'away_rank'}, axis =1, inplace = True)
away = total_away[['away_team','away_rank','year','month']]
away.head()
total_df = pd.merge(total_home, away, how = 'inner', left_on = ['year', 'month', 'away_team'], right_on = ['year', 'month', 'away_team'])
total_df.head()
total_df = total_df.drop(['date','country_full','rank_date','confederation'], axis=1)
total_df.head()
# Dropping duplicate rows from the dataset
total_df.drop_duplicates(keep = 'first', inplace = True)
total_df.isnull().sum()
# 0 means a draw
# A positive value means the home team won
# A negative value means the away team won, ie. that the home team lost.
#
total_df['score'] = total_df['home_score'] - total_df['away_score']
# Creating a function to be used to create a win, draw or lose column
#
def result(goals):
if goals > 0:
return 'Win'
elif goals < 0:
return 'Lose'
else:
return 'Draw'
# Applying the result function to the dataframe
total_df['result'] = total_df['score'].apply(lambda x: result(x))
# Dropping the score column, as it has served its purpose
#total_df.drop('score', axis = 1, inplace = True)
# Creating a column of total goals scored
total_df['total_goals'] = total_df['home_score'] + total_df['away_score']
# Previewing the last five rows of the dataframe together with the result column
#
total_df.tail()
###Output
_____no_output_____
###Markdown
EDA
###Code
# Pie chart to check the distribution of W,D,L
total_df['result'].value_counts().plot(kind='pie', subplots=True, figsize=(10, 5), autopct='%1.1f%%')
# Plotting the univariate summaries and recording our observations
# Boxplots
# Creating a list of columns to check for outliers
#
col_list = ['home_score', 'away_score', 'home_rank', 'away_rank']
# Plotting boxplots of the col_list columns to check for outliers
#
fig, axes = plt.subplots(nrows = 2, ncols = 2, figsize = (15, 10))
plt.suptitle('Boxplots', fontsize = 15, y = 0.92)
for ax, column in zip(axes.flatten(), col_list):
    sns.boxplot(x=total_df[column], ax=ax)
###Output
_____no_output_____
###Markdown
Regression
###Code
# Polynomial
# choosing columns to use in regression
#
reg_total = total_df[['home_team', 'away_team', 'home_score', 'away_score', 'tournament', 'home_rank', 'away_rank']]
reg_total.head()
# Displaying the correlations between the variables
corr = reg_total.corr()
corr
sns.heatmap(corr, annot=True)
# multicollinearity with VIF table
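# (the diagonal of this inverse correlation matrix gives each variable's VIF)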
pd.DataFrame(np.linalg.inv(corr.values), index = corr.index, columns=corr.columns)
reg_total.head()
X = reg_total.iloc[:, [2, 3, 5, 6]]
y = reg_total.home_score
# Splitting the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 101)
# Standardising the X_train and the X_test to the same scale
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# Fitting the polynomial features to the X_train and X_test
poly_features = PolynomialFeatures(degree = 1)
X_train = poly_features.fit_transform(X_train)
X_test = poly_features.fit_transform(X_test)
# Training the model
regressor = LinearRegression()
regressor.fit(X_train, y_train)
# Making predictions
y_pred = regressor.predict(X_test)
# Measuring the accuracy of the model
print(np.sqrt(mean_squared_error(y_test, y_pred)))
# Creating a parameters dictionary
#
params = {'normalize': [True, False],
'fit_intercept': [True, False]}
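# Note: the 'normalize' option of LinearRegression was deprecated and later removed in
# newer scikit-learn releases; if this raises an 'invalid parameter' error, drop it and
# rely on the StandardScaler step above instead.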
# Creating a cross validation of 5 folds
#
kfold = KFold(n_splits = 5)
# Using grid search to find the optimal parameters
grid_search = GridSearchCV(estimator=regressor, param_grid = params, cv = kfold)
# Fitting the grid search
grid_search_results = grid_search.fit(X, y)
# Displaying the best score from the grid search
print(f'Best score is {grid_search.best_score_}')
# Performing cross validation of ten folds
scores = cross_val_score(regressor, X, y, cv = 10)
# Calculating the mean of the cross validation scores
print(f'Mean of cross validation scores is {np.round(scores.mean(), 3)}')
# Calculating the variance of the cross validation scores from the mean
print(f'Standard deviation of the cross validation scores is {np.round(scores.std(), 3)}')
# Plotting the residual plot
# Residuals are calculated by subtracting the test values from the predicted values
residuals = np.subtract(y_pred, y_test)
# Plotting the residual scatterplot
plt.scatter(y_pred, residuals, color='black')
plt.ylabel('residual')
plt.xlabel('fitted values')
plt.axhline(y= residuals.mean(), color='red', linewidth=1)
plt.show()
# Performing Bartlett's test
test_result, p_value = sp.stats.bartlett(y_pred, residuals)
# Calculating the critical value of the chi squared distribution, to compare it with the test_result
degrees_of_freedom = len(y_pred) - 1
probability = 1 - p_value
critical_value = sp.stats.chi2.ppf(probability, degrees_of_freedom)
if (test_result > critical_value):
print('The variances are heterogenous')
else:
print('The variances are homogeneous')
###Output
The variances are homogeneous
###Markdown
Logistic
###Code
# Selecting the relevant features for the logistic regression model
log_total = total_df[['home_team', 'away_team', 'home_score', 'away_score', 'tournament', 'year', 'home_rank', 'away_rank', 'result']]
# Previewing the first five rows of the data
log_total.head()
# Checking for correlations between features
#
plt.figure(figsize = (10, 6))
sns.heatmap(log_total.corr(), annot = True)
plt.title('Correlation between variables')
plt.show()
# Splitting the data into features and the target variable
X = log_total.drop('result', axis = 1)
y = log_total.result
# Encoding the categorical features
X = pd.get_dummies(X, drop_first=True)
# Splitting the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 101)
# Instantiating the model and training the model
logistic = LogisticRegression()
logistic.fit(X_train, y_train)
# Test and Training Scores
score = logistic.score(X_train, y_train)
score2 = logistic.score(X_test, y_test)
print(f'Training set accuracy: {score}')
print(f'Test set accuracy: {score2}')
# Making predictions
y_pred = logistic.predict(X_test)
# Confusion matrix
print(confusion_matrix(y_pred,y_test))
# Creating a dictionary of parameters to be tuned
params = {'C': [1.0, 5.0],
          'penalty': ['l1', 'l2']}
# liblinear supports both the l1 and l2 penalties being tuned here
logistic = LogisticRegression(solver='liblinear')
# Creating a cross validation of 10 folds
kfold = KFold(n_splits = 10)
# Using grid search to find the optimal parameters
grid_search = GridSearchCV(estimator=logistic, param_grid = params, cv = kfold)
# Fitting the grid search
grid_search_results = grid_search.fit(X, y)
# Displaying the best score from the grid search
print(f'Best score is {grid_search.best_score_}')
###Output
_____no_output_____ |
nb/tests/models.ipynb | ###Markdown
`DESIspeculator._emulator` test
###Code
# load test parameter and spectrum. These were generated for the validation of the trained Speculator model
test_theta = np.load('/Users/chahah/data/gqp_mc/speculator/DESI_complexdust.theta_test.npy')[:10000]
test_logspec = np.load('/Users/chahah/data/gqp_mc/speculator/DESI_complexdust.logspectrum_fsps_test.npy')[:10000]
# initiate desi model
Mdesi = Models.DESIspeculator()
%timeit Mdesi._emulator(test_theta[0])
log_emu = np.array([Mdesi._emulator(tt) for tt in test_theta]) # 100,000 evaluates takes about 2 mins
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for i in range(10):
sub.plot(Mdesi._emu_waves, np.exp(log_emu[i]), c='C%i' % i)
sub.plot(Mdesi._emu_waves, np.exp(test_logspec[i]), c='k', ls=':', lw=1)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(Mdesi._emu_waves.min(), Mdesi._emu_waves.max())
sub.set_ylabel('SSP luminosity [$L_\odot/\AA$]', fontsize=25)
sub.set_ylim(0., None)
# fractional error of the Speculator model
frac_dspectrum = 1. - np.exp(log_emu - test_logspec)
frac_dspectrum_quantiles = np.nanquantile(frac_dspectrum,
[0.0005, 0.005, 0.025, 0.16, 0.84, 0.975, 0.995, 0.9995], axis=0)
fig = plt.figure(figsize=(15,5))
sub = fig.add_subplot(111)
sub.fill_between(Mdesi._emu_waves,
frac_dspectrum_quantiles[0],
frac_dspectrum_quantiles[-1], fc='C0', ec='none', alpha=0.1, label=r'$99.9\%$')
sub.fill_between(Mdesi._emu_waves,
frac_dspectrum_quantiles[1],
frac_dspectrum_quantiles[-2], fc='C0', ec='none', alpha=0.2, label=r'$99\%$')
sub.fill_between(Mdesi._emu_waves,
frac_dspectrum_quantiles[2],
frac_dspectrum_quantiles[-3], fc='C0', ec='none', alpha=0.3, label=r'$95\%$')
sub.fill_between(Mdesi._emu_waves,
frac_dspectrum_quantiles[3],
frac_dspectrum_quantiles[-4], fc='C0', ec='none', alpha=0.5, label=r'$68\%$')
sub.legend(loc='upper right', fontsize=20)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(Mdesi._emu_waves.min(), Mdesi._emu_waves.max())
sub.set_ylabel(r'$(f_{\rm speculator} - f_{\rm test})/f_{\rm test}$', fontsize=25)
sub.set_ylim(-0.1, 0.1)
###Output
_____no_output_____
###Markdown
Let's compare it to the FSPS model
###Code
fsps = Models.FSPS(name='nmf_bases')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for i in range(3):
_w, _ssp_lum = fsps._sps_model(test_theta[i])
sub.plot(_w, _ssp_lum, c='r')
sub.plot(Mdesi._emu_waves, np.exp(log_emu[i]), c='C%i' % i, ls='--')
sub.plot(Mdesi._emu_waves, np.exp(test_logspec[i]), c='k', ls=':', lw=1)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(Mdesi._emu_waves.min(), Mdesi._emu_waves.max())
sub.set_ylabel('SSP luminosity [$L_\odot/\AA$]', fontsize=25)
sub.set_ylim(0., None)
###Output
/Users/chahah/projects/provabgs/src/provabgs/models.py:379: RuntimeWarning: divide by zero encountered in log10
self._ssp.params['logzsol'] = np.log10(z/0.0190) # log(Z/Zsun)
/Users/chahah/projects/provabgs/src/provabgs/models.py:379: RuntimeWarning: divide by zero encountered in log10
self._ssp.params['logzsol'] = np.log10(z/0.0190) # log(Z/Zsun)
/Users/chahah/projects/provabgs/src/provabgs/models.py:379: RuntimeWarning: divide by zero encountered in log10
self._ssp.params['logzsol'] = np.log10(z/0.0190) # log(Z/Zsun)
###Markdown
`DESIspeculator.sed` test
###Code
some_theta = np.concatenate([[10.], test_theta[0][:-1]])
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for z in [0.1, 0.2, 0.3]:
w, flux = Mdesi.sed(some_theta, z)
sub.plot(w, flux, label='$z=%.1f$' % z)
sub.legend(loc='upper left', fontsize=20)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(Mdesi._emu_waves.min(), Mdesi._emu_waves.max())
sub.set_ylabel(r'$f(\lambda)$ [$10^{-17}erg/s/cm^2/\AA$]', fontsize=25)
sub.set_ylim(0., None)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for z in [0.1, 0.2, 0.3]:
w, flux = Mdesi.sed(some_theta, z)
sub.plot(w, flux, label='$z=%.1f$' % z)
_w, _flux = fsps.sed(some_theta, z)
sub.plot(_w, _flux, c='k', ls=':')
sub.legend(loc='upper left', fontsize=20)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(Mdesi._emu_waves.min(), Mdesi._emu_waves.max())
sub.set_ylabel(r'$f(\lambda)$ [$10^{-17}erg/s/cm^2/\AA$]', fontsize=25)
sub.set_ylim(0., 1.5)
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
for vdisp in [0., 50, 150, 500, 1000]:
w, flux = Mdesi.sed(some_theta, 0.1, vdisp=vdisp)
sub.plot(w, flux, label=r'$v_{\rm disp}=%.1f$' % vdisp)
sub.legend(loc='upper left', fontsize=20)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(7000., 8000.)
sub.set_ylabel(r'$f(\lambda)$ [$10^{-17}erg/s/cm^2/\AA$]', fontsize=25)
sub.set_ylim(0., None)
from gqp_mc import data as Data
specs, prop = Data.Spectra(sim='lgal', noise='bgs0', lib='bc03', sample='mini_mocha')
fig = plt.figure(figsize=(10,5))
sub = fig.add_subplot(111)
w, flux = Mdesi.sed(some_theta, 0.1, vdisp=0)
sub.plot(w, flux, c='k', lw=3, label=r'model')
w, flux = Mdesi.sed(some_theta, 0.1, vdisp=150)
sub.plot(w, flux, c='C0', lw=3, label=r'$v_{\rm disp}=150$')
w, flux = Mdesi.sed(some_theta, 0.1, vdisp=150, resolution=[specs['res_b'][0], specs['res_r'][0], specs['res_z'][0]])
sub.plot(w, flux, c='C1', lw=3, ls='--', label=r'$v_{\rm disp}=150$ + res. matrix')
sub.legend(loc='upper left', fontsize=20)
sub.set_xlabel('wavelength [$\AA$]', fontsize=25)
sub.set_xlim(7000., 8000.)
sub.set_ylabel(r'$f(\lambda)$ [$10^{-17}erg/s/cm^2/\AA$]', fontsize=25)
sub.set_ylim(0., None)
%timeit w, flux = Mdesi.sed(some_theta, 0.1, vdisp=0)
%timeit w, flux = Mdesi.sed(some_theta, 0.1, vdisp=150)
%timeit w, flux = Mdesi.sed(some_theta, 0.1, vdisp=150, resolution=[specs['res_b'][0], specs['res_r'][0], specs['res_z'][0]])
###Output
_____no_output_____
###Markdown
`DESIspeculator.SFH` and `DESIspeculator.ZH` tests
###Code
tlookback, sfh = Mdesi.SFH(some_theta, 0.1) # get SFH for some arbitrary galaxy at z=0.1
avgSFR = Mdesi.avgSFR(some_theta, 0.1, dt=1)
fig = plt.figure(figsize=(6,4))
sub = fig.add_subplot(111)
sub.plot(tlookback, sfh)
for i in range(4):
sub.plot(tlookback,
10**some_theta[0]*some_theta[i+1] * Mdesi._sfh_basis[i](tlookback) / np.trapz(Mdesi._sfh_basis[i](tlookback), tlookback),
c='C%i' % i, ls='--')
sub.legend(loc='upper right', handletextpad=0, markerscale=2, fontsize=20)
sub.set_xlabel(r'$t_{\rm lookback}$', fontsize=25)
sub.set_xlim(0, Mdesi.cosmo.age(0.).value)
sub.set_ylabel('SFH [$M_\odot/Gyr$]', fontsize=25)
sub.set_ylim(0, None)
fig = plt.figure(figsize=(6,4))
sub = fig.add_subplot(111)
sub.plot(tlookback, sfh)
i0 = np.where(tlookback > 1.)[0][0]
sub.fill_between(tlookback[:i0+1], np.zeros(i0+1), sfh[:i0+1])
sub.scatter([0.5], [avgSFR*1e9], label='1Gyr avg. SFR') # convert to per Gyr
sub.legend(loc='upper right', handletextpad=0, markerscale=2, fontsize=20)
sub.set_xlabel(r'$t_{\rm lookback}$', fontsize=25)
sub.set_xlim(0, Mdesi.cosmo.age(0.).value)
sub.set_ylabel('SFH [$M_\odot/Gyr$]', fontsize=25)
sub.set_ylim(0, None)
avgSFR_2gyrago = Mdesi.avgSFR(some_theta, 0.1, dt=1, t0=2)
fig = plt.figure(figsize=(6,4))
sub = fig.add_subplot(111)
sub.plot(tlookback, sfh)
i0 = np.where(tlookback > 1.)[0][0]
sub.fill_between(tlookback[:i0+1], np.zeros(i0+1), sfh[:i0+1])
sub.scatter([0.5], [avgSFR*1e9], label='1Gyr avg. SFR') # convert to per Gyr
i0 = np.where(tlookback < 2)[0][-1]
i1 = np.where(tlookback > 3)[0][0]
print(tlookback[i0], tlookback[i1])
sub.fill_between(tlookback[i0:i1], np.zeros(i1-i0), sfh[i0:i1], color='C0')
sub.scatter([2.5], [avgSFR_2gyrago*1e9], c='C1', label=r'avg. SFR over $t_{\rm looback} = $[2,3]')
sub.legend(loc='lower right', handletextpad=0, markerscale=2, fontsize=20)
sub.set_xlabel(r'$t_{\rm lookback}$', fontsize=25)
sub.set_xlim(0, Mdesi.cosmo.age(0.).value)
sub.set_ylabel('SFH [$M_\odot/Gyr$]', fontsize=25)
sub.set_ylim(0, None)
_, zh = Mdesi.ZH(some_theta, 0.1) # get ZH for some arbitrary galaxy
zmw = Mdesi.Z_MW(some_theta, 0.1)
fig = plt.figure(figsize=(6,4))
sub = fig.add_subplot(111)
sub.plot(tlookback, zh)
for i in range(2):
sub.plot(tlookback, some_theta[i+5] * Mdesi._zh_basis[i](tlookback), c='C%i' % i, ls='--')
sub.scatter([np.mean(tlookback)], [zmw], c='C1', s=20, label='mass-weighted metallicity')
sub.legend(loc='upper right', handletextpad=0, fontsize=20)
sub.set_xlabel(r'$t_{\rm cosmic}$', fontsize=25)
sub.set_xlim(0, Mdesi.cosmo.age(0.).value)
sub.set_ylabel('metallicity history', fontsize=25)
sub.set_ylim(0, None)
###Output
_____no_output_____ |
Chapter 5/R Lab/5.3.1 The Validation Set Approach.ipynb | ###Markdown
Preprocessing
###Code
# import statistical packages
import numpy as np
import pandas as pd
# import data visualisation packages
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
*I do not need to specify a separate 50% training dataset. Instead we use the [train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) method from sklearn.*
###Code
from sklearn.model_selection import train_test_split
url = "/Users/arpanganguli/Documents/Professional/Finance/ISLR/Datasets/Auto.csv"
df = pd.read_csv(url)
df.head()
df.horsepower.dtype
###Output
_____no_output_____
###Markdown
*Quite annoyingly, I have to convert the datatype of horsepower into float and store the values in a separate column called 'hp'.*
###Code
df['hp'] = df.horsepower.astype(float)
df.head()
df.hp.dtype
###Output
_____no_output_____
###Markdown
*Okay cool!* Regressions using random state = 1 **Simple Linear Regression**
###Code
X = df[['hp']]
y = df['mpg']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=1)
X_train.shape
y_train.shape
X_test.shape
y_test.shape
df.shape
###Output
_____no_output_____
###Markdown
*The Auto dataset contains 397 rows whereas the same dataset in the book example contains 392 rows. This can be explained by the fact that some of the rows have missing values and have been deleted. I have, however, imputed those values. So, I have the same number of rows as the original dataset. More information about imputation of missing values can be found [here](http://www.stat.columbia.edu/~gelman/arm/missing.pdf). In any case, it does not matter since the prime purpose of the chapter is to show relative differences in prediction abilities of different methodologies. So as long as the relative difference is more or less the same, the point still stands.*
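*Below is a minimal, hedged sketch of such a median imputation (an illustration only, not necessarily the exact preprocessing used to produce this file): coerce the raw horsepower values to numeric and fill any resulting NaNs with the column median.*
###Code
# Illustrative only: median imputation of missing horsepower values (assumes missing
# entries are marked with non-numeric placeholders such as '?').
imputed_hp = pd.to_numeric(df['horsepower'], errors='coerce')  # non-numeric -> NaN
imputed_hp = imputed_hp.fillna(imputed_hp.median())            # fill NaNs with the median
imputed_hp.isnull().sum()
###Output
_____no_output_____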
###Code
from sklearn.linear_model import LinearRegression
lmfit = LinearRegression().fit(X_train, y_train)
lmpred = lmfit.predict(X_test)
from sklearn.metrics import mean_squared_error
MSE = mean_squared_error(y_test, lmpred)
round(MSE, 2)
###Output
_____no_output_____
###Markdown
**Polynomial Regression (horsepower$^2$)**
###Code
from sklearn.preprocessing import PolynomialFeatures as PF
X = df[['hp']]
X_ = pd.DataFrame(PF(2).fit_transform(X))
y = df[['mpg']]
X_.head()
X_.drop(columns=0, inplace=True)
X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size=0.5, random_state=1)
lmfit2 = LinearRegression().fit(X_train, y_train)
lmpred2 = lmfit2.predict(X_test)
MSE2 = mean_squared_error(y_test, lmpred2)
round(MSE2, 2)
###Output
_____no_output_____
###Markdown
**Polynomial Regression (horsepower$^3$)**
###Code
from sklearn.preprocessing import PolynomialFeatures as PF
X = df[['hp']]
X_ = pd.DataFrame(PF(3).fit_transform(X))
y = df[['mpg']]
X_.head()
X_.drop(columns=0, inplace=True)
X_.head()
X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size=0.5, random_state=1)
lmfit3 = LinearRegression().fit(X_train, y_train)
lmpred3 = lmfit3.predict(X_test)
MSE3 = mean_squared_error(y_test, lmpred3)
round(MSE3, 2)
###Output
_____no_output_____
###Markdown
Regressions using random state = 2 **Simple Linear Regression**
###Code
X = df[['hp']]
y = df['mpg']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=2)
from sklearn.linear_model import LinearRegression
lmfit = LinearRegression().fit(X_train, y_train)
lmpred = lmfit.predict(X_test)
MSE = mean_squared_error(y_test, lmpred)
round(MSE, 2)
###Output
_____no_output_____
###Markdown
**Polynomial Regression (horsepower$^2$)**
###Code
from sklearn.preprocessing import PolynomialFeatures as PF
X = df[['hp']]
X_ = pd.DataFrame(PF(2).fit_transform(X))
y = df[['mpg']]
X_.head()
X_.drop(columns=0, inplace=True)
X_.head()
X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size=0.5, random_state=2)
lmfit2 = LinearRegression().fit(X_train, y_train)
lmpred2 = lmfit2.predict(X_test)
MSE2 = mean_squared_error(y_test, lmpred2)
round(MSE2, 2)
###Output
_____no_output_____
###Markdown
**Polynomial Regression (horsepower$^3$)**
###Code
from sklearn.preprocessing import PolynomialFeatures as PF
X = df[['hp']]
X_ = pd.DataFrame(PF(3).fit_transform(X))
y = df[['mpg']]
X_.head()
X_.drop(columns=0, inplace=True)
X_.head()
X_train, X_test, y_train, y_test = train_test_split(X_, y, test_size=0.5, random_state=2)
lmfit3 = LinearRegression().fit(X_train, y_train)
lmpred3 = lmfit3.predict(X_test)
MSE3 = mean_squared_error(y_test, lmpred3)
round(MSE3, 2)
###Output
_____no_output_____ |
planner/experiments/analysis-Copy1.ipynb | ###Markdown
Robot boxes domain analysis - RRT-Plan vs A* with hADD heuristic
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
df_rrt = pd.read_csv('RRT_Plan_Robot_Boxes _RRT_PLAN.csv')
df_rrt.head()
df_astar = pd.read_csv('RRT_Plan_Robot_Boxes_ASTAR_hADD.csv')
df_astar.head()
df_ling = pd.read_csv('RRT_Plan_Robot_Boxes_RRT_PLAN_lingcomp.csv')
df_ling.head()
###Output
_____no_output_____
###Markdown
Time x Problem Complexity
###Code
x = df_rrt['problem'].values
rrt_y = df_rrt['time_seconds'].values
astar_y = df_astar['time_seconds'].values
rrt_ling = df_ling['time_seconds'].values
plt.plot(x, rrt_y, 'bo--', x, astar_y, 'ro--')
plt.legend(['RRT-Plan', 'A*+hADD'])
plt.xlabel('Boxes Problem')
plt.ylabel('Time (seconds)')
plt.show()
plt.plot(x, rrt_ling, 'bo--', x, astar_y, 'ro--')
plt.legend(['RRT-Plan LING', 'A*+hADD'])
plt.xlabel('Boxes Problem')
plt.ylabel('Time (seconds)')
plt.show()
###Output
_____no_output_____
###Markdown
Solution Length x Problem Complexity
###Code
optimal_solution = np.array([3,5,9,11,15,17,21,23,27,29,33,35,39,41,45,47,51,53,57,59])
rrt_y = df_rrt['solution_length'].values
astar_y = df_astar['solution_length'].values
rrt_ling = df_ling['solution_length'].values
plt.plot(x, rrt_y, 'bo--', x, astar_y, 'ro--', x, optimal_solution, 'mo--')
plt.legend(['RRT-Plan', 'A*+hADD','Optimal solution'])
plt.xlabel('Boxes Problem')
plt.ylabel('Solution Length')
plt.show()
plt.plot(x, rrt_ling, 'bo--', x, astar_y, 'ro--', x, optimal_solution, 'mo--')
plt.legend(['RRT-Plan LING', 'A*+hADD','Optimal solution'])
plt.xlabel('Boxes Problem')
plt.ylabel('Solution Length')
plt.show()
###Output
_____no_output_____
###Markdown
Individual time analysis
###Code
rrt_time = df_rrt['time_seconds'].values
plt.figure(figsize=(10,5))
plt.title('Robot Boxes - RRT-Plan')
axis_values = [0, 20, -500, max(rrt_time[:-1])+500] # xmin, xmax, ymin, ymax
plt.axis(axis_values)
plt.plot(x, rrt_time, 'bo--')
plt.xticks(np.arange(min(x), max(x)+1, 1.0))
plt.xlabel('Boxes Problem')
plt.ylabel('Time (seconds)')
plt.show()
rrt_time = df_ling['time_seconds'].values
plt.figure(figsize=(10,5))
plt.title('Robot Boxes - RRT-Plan LING')
axis_values = [0, 20, -500, max(rrt_time[:-1])+500] # xmin, xmax, ymin, ymax
plt.axis(axis_values)
plt.plot(x, rrt_time, 'bo--')
plt.xticks(np.arange(min(x), max(x)+1, 1.0))
plt.xlabel('Boxes Problem')
plt.ylabel('Time (seconds)')
plt.show()
astar_time = df_astar['time_seconds'].values
plt.figure(figsize=(10,5))
plt.title('Robot Boxes - A* + hADD')
plt.plot(x[:astar_time.shape[0]], astar_time, 'ro--')
plt.xticks(np.arange(min(x), max(x)+1, 1.0))
plt.xlabel('Boxes Problem')
plt.ylabel('Time (seconds)')
plt.show()
###Output
_____no_output_____ |
ipynb/movie_renege.ipynb | ###Markdown
[Movie Renege](https://simpy.readthedocs.io/en/latest/examples/movie_renege.html)Covers:* Resources: Resource* Condition events* Shared eventsThis example models a movie theater with one ticket counter selling tickets for three movies (next show only). People arrive at random times and try to buy a random number (1-6) of tickets for a random movie. When a movie is sold out, all people waiting to buy a ticket for that movie renege (leave the queue).The movie theater is just a container for all the related data (movies, the counter, tickets left, collected data, ...). The counter is a `Resource` with a capacity of one.The moviegoer process starts waiting until either it's his turn (it acquires the counter resource) or until the sold out signal is triggered. If the latter is the case it reneges (leaves the queue). If it gets to the counter, it tries to buy some tickets. This might not be successful, e.g. if the process tries to buy 5 tickets but only 3 are left. If less than two tickets are left after the ticket purchase, the sold out signal is triggered.Moviegoers are generated by the customer arrivals process. It also chooses a movie and the number of tickets for the moviegoer.
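Before the full model, here is a minimal standalone sketch (added for illustration, not part of the original example) of the two SimPy features listed above: a condition event (`request | sold_out`) and a shared sold-out event that makes a queued process renege.
###Code
# Minimal illustration (separate from the movie model below): a waiter reneges when a
# shared "sold out" event fires before it acquires the counter resource.
import simpy

def blocker(env, counter):
    # Holds the only counter slot so the waiter has to queue.
    with counter.request() as req:
        yield req
        yield env.timeout(10)

def trigger_sold_out(env, sold_out):
    yield env.timeout(3)
    sold_out.succeed()                   # shared event: fires for every process waiting on it

def waiter(env, counter, sold_out):
    with counter.request() as req:
        result = yield req | sold_out    # condition event: whichever happens first
        if req not in result:
            print('reneged (sold out) at t =', env.now)
        else:
            print('got the counter at t =', env.now)

env = simpy.Environment()
counter = simpy.Resource(env, capacity=1)
sold_out = env.event()
env.process(blocker(env, counter))
env.process(trigger_sold_out(env, sold_out))
env.process(waiter(env, counter, sold_out))
env.run()
###Output
_____no_output_____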
###Code
"""
Movie renege example
Covers:
- Resources: Resource
- Condition events
- Shared events
Scenario:
A movie theatre has one ticket counter selling tickets for three
movies (next show only). When a movie is sold out, all people waiting
to buy tickets for that movie renege (leave queue).
"""
import collections
import random
import simpy
RANDOM_SEED = 42
TICKETS = 50 # Number of tickets per movie
SIM_TIME = 120 # Simulate until
def moviegoer(env, movie, num_tickets, theater):
    """A moviegoer tries to buy a number of tickets (*num_tickets*) for
a certain *movie* in a *theater*.
If the movie becomes sold out, she leaves the theater. If she gets
to the counter, she tries to buy a number of tickets. If not enough
tickets are left, she argues with the teller and leaves.
If at most one ticket is left after the moviegoer bought her
tickets, the *sold out* event for this movie is triggered causing
all remaining moviegoers to leave.
"""
with theater.counter.request() as my_turn:
# Wait until it's our turn or until the movie is sold out
result = yield my_turn | theater.sold_out[movie]
# Check if it's our turn or if movie is sold out
if my_turn not in result:
theater.num_renegers[movie] += 1
return
# Check if enough tickets left.
if theater.available[movie] < num_tickets:
# Moviegoer leaves after some discussion
yield env.timeout(0.5)
return
# Buy tickets
theater.available[movie] -= num_tickets
if theater.available[movie] < 2:
# Trigger the "sold out" event for the movie
theater.sold_out[movie].succeed()
theater.when_sold_out[movie] = env.now
theater.available[movie] = 0
yield env.timeout(1)
def customer_arrivals(env, theater):
"""Create new *moviegoers* until the sim time reaches 120."""
while True:
yield env.timeout(random.expovariate(1 / 0.5))
movie = random.choice(theater.movies)
num_tickets = random.randint(1, 6)
if theater.available[movie]:
env.process(moviegoer(env, movie, num_tickets, theater))
Theater = collections.namedtuple('Theater', 'counter, movies, available, '
'sold_out, when_sold_out, '
'num_renegers')
# Setup and start the simulation
print('Movie renege')
random.seed(RANDOM_SEED)
env = simpy.Environment()
# Create movie theater
counter = simpy.Resource(env, capacity=1)
movies = ['Python Unchained', 'Kill Process', 'Pulp Implementation']
available = {movie: TICKETS for movie in movies}
sold_out = {movie: env.event() for movie in movies}
when_sold_out = {movie: None for movie in movies}
num_renegers = {movie: 0 for movie in movies}
theater = Theater(counter, movies, available, sold_out, when_sold_out,
num_renegers)
# Start process and run
env.process(customer_arrivals(env, theater))
env.run(until=SIM_TIME)
# Analysis/results
for movie in movies:
if theater.sold_out[movie]:
print('Movie "%s" sold out %.1f minutes after ticket counter '
'opening.' % (movie, theater.when_sold_out[movie]))
print(' Number of people leaving queue when film sold out: %s' %
theater.num_renegers[movie])
###Output
Movie renege
Movie "Python Unchained" sold out 38.0 minutes after ticket counter opening.
Number of people leaving queue when film sold out: 16
Movie "Kill Process" sold out 43.0 minutes after ticket counter opening.
Number of people leaving queue when film sold out: 5
Movie "Pulp Implementation" sold out 28.0 minutes after ticket counter opening.
Number of people leaving queue when film sold out: 5
|
notebooks/NB05 - Interact with an Ethereum Contract .ipynb | ###Markdown
AboutInteract with a deployed ethereum contract.For this example I'll try to read information from the reserve token contract.
###Code
from web3 import Web3
import sys; sys.path.insert(0, '../') # Add project root to path for imports
from config.credentials import infura_hello_world # Import variable from local config/credentials.py file
###Output
_____no_output_____
###Markdown
Connect to a NodeConnect to an ethereum node. This repeats steps done in notebook 04.
###Code
# Ethereum node endpoint on infura
url = infura_hello_world # i.e. "https://mainnet.infura.io/v3/..."
w3 = Web3(Web3.HTTPProvider(url))
# Check note is connected
w3.isConnected()
###Output
_____no_output_____
###Markdown
Connect to a Contract I am following the `web3.py` documentation, found [here](https://web3py.readthedocs.io/en/stable/examples.html#interacting-with-existing-contracts).And also this article from Dapp University:* See the section titled "2 · Read Data from Smart Contracts with Web3.py"* https://www.dappuniversity.com/articles/web3-py-intro Define the contract address:I got the contract address from the RSV v2 README file [here](https://github.com/reserve-protocol/rsv-v2#readme). The contract source code is [here](https://github.com/reserve-protocol/rsv-v2/blob/working/contracts/rsv/Reserve.sol).
###Code
# Reserve Token Address
rsv_token_address = "0x1C5857e110CD8411054660F60B5De6a6958CfAE2"
###Output
_____no_output_____
###Markdown
Get the ABIThe ABI (Application Binary Interface) is the JSON description of a contract's functions and events that web3 uses to encode calls and decode their results. How to get the ABI from Etherscan:* Search for the contract in [etherscan.io](https://etherscan.io/) by pasting in its address.* Scroll down and select the *Contract* tab* Scroll down until you see the *Contract ABI* section* Click the "Copy ABI to clipboard" icon* Wrap the text in single quotes * So your `abi` variable is a string
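As an alternative to copy-pasting, the ABI of a verified contract can also be fetched programmatically from the Etherscan API. The sketch below is an illustration only: it assumes the `requests` package is installed and that you have your own Etherscan API key (`etherscan_api_key` is a hypothetical credential, not something defined in this project's `config/credentials.py`).
###Code
# Optional helper (illustrative): fetch a verified contract's ABI from the Etherscan API.
import requests

def fetch_abi(contract_address, api_key):
    """Return the ABI of a verified contract as a JSON string."""
    url = ('https://api.etherscan.io/api'
           f'?module=contract&action=getabi&address={contract_address}&apikey={api_key}')
    return requests.get(url).json()['result']

# Example (requires your own key):
# abi = fetch_abi(rsv_token_address, etherscan_api_key)
###Output
_____no_output_____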
###Code
abi = '[{"constant":true,"inputs":[],"name":"name","outputs":[{"name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"minter","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"spender","type":"address"},{"name":"value","type":"uint256"}],"name":"approve","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"newOwner","type":"address"}],"name":"nominateNewOwner","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"totalSupply","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"newFeeRecipient","type":"address"}],"name":"changeFeeRecipient","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"from","type":"address"},{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transferFrom","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"newMinter","type":"address"}],"name":"changeMinter","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"newPauser","type":"address"}],"name":"changePauser","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"decimals","outputs":[{"name":"","type":"uint8"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"spender","type":"address"},{"name":"addedValue","type":"uint256"}],"name":"increaseAllowance","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[],"name":"unpause","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"newMaxSupply","type":"uint256"}],"name":"changeMaxSupply","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"account","type":"address"},{"name":"value","type":"uint256"}],"name":"mint","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"feeRecipient","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"declaration","type":"string"}],"name":"renounceOwnership","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"nominatedOwner","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"paused","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"holder","type":"address"}],"name":"balanceOf","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[],"name":"acceptOwnership","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"account","type":"address"},{"name":"value","type":"uint256"}],"name":"burnFrom","outputs":[],"pay
able":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[],"name":"pause","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"owner","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"symbol","outputs":[{"name":"","type":"string"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"newReserveAddress","type":"address"}],"name":"transferEternalStorage","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"pauser","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":false,"inputs":[{"name":"spender","type":"address"},{"name":"subtractedValue","type":"uint256"}],"name":"decreaseAllowance","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"to","type":"address"},{"name":"value","type":"uint256"}],"name":"transfer","outputs":[{"name":"","type":"bool"}],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":false,"inputs":[{"name":"newTrustedTxFee","type":"address"}],"name":"changeTxFeeHelper","outputs":[],"payable":false,"stateMutability":"nonpayable","type":"function"},{"constant":true,"inputs":[],"name":"trustedTxFee","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"maxSupply","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[{"name":"holder","type":"address"},{"name":"spender","type":"address"}],"name":"allowance","outputs":[{"name":"","type":"uint256"}],"payable":false,"stateMutability":"view","type":"function"},{"constant":true,"inputs":[],"name":"getEternalStorageAddress","outputs":[{"name":"","type":"address"}],"payable":false,"stateMutability":"view","type":"function"},{"inputs":[],"payable":false,"stateMutability":"nonpayable","type":"constructor"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newMinter","type":"address"}],"name":"MinterChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newPauser","type":"address"}],"name":"PauserChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newFeeRecipient","type":"address"}],"name":"FeeRecipientChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newMaxSupply","type":"uint256"}],"name":"MaxSupplyChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newReserveAddress","type":"address"}],"name":"EternalStorageTransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"newTxFeeHelper","type":"address"}],"name":"TxFeeHelperChanged","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"account","type":"address"}],"name":"Paused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"account","type":"address"}],"name":"Unpaused","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"previousOwner","type":"address"},{"indexed":true,"name":"nominee","type":"address"}],"name":"NewOwnerNominated","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"previousOwner","type":"address"},{"indexed":true,"name":"newOwner","type":"address"}],"name":"OwnershipT
ransferred","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"from","type":"address"},{"indexed":true,"name":"to","type":"address"},{"indexed":false,"name":"value","type":"uint256"}],"name":"Transfer","type":"event"},{"anonymous":false,"inputs":[{"indexed":true,"name":"owner","type":"address"},{"indexed":true,"name":"spender","type":"address"},{"indexed":false,"name":"value","type":"uint256"}],"name":"Approval","type":"event"}]'
###Output
_____no_output_____
###Markdown
Instantiate the Contract
###Code
contract_instance = w3.eth.contract(address=rsv_token_address, abi=abi)
###Output
_____no_output_____
###Markdown
List all functions available in the contract:
###Code
contract_instance.all_functions()
# Total Supply
contract_instance.caller().totalSupply()
# Max Supply
contract_instance.caller().maxSupply()
# Token symbol
contract_instance.caller().symbol()
###Output
_____no_output_____ |
notebooks/DataLoading-Fannie_Entire_Acquisitions_File.ipynb | ###Markdown
All Acquisition Data AnalysisWe first put together all of the `Acquisition` datasets.
###Code
import pandas as pd
from os import listdir
from os.path import join
from tqdm import tqdm
import matplotlib.pyplot as plt
from utils import hello_world
hello_world()
AcquisitionColumnNames = (
"LOAN_ID", "ORIG_CHN", "Seller.Name",
"ORIG_RT", "ORIG_AMT", "ORIG_TRM", "ORIG_DTE",
"FRST_DTE", "OLTV", "OCLTV", "NUM_BO",
"DTI", "CSCORE_B", "FTHB_FLG", "PURPOSE",
"PROP_TYP", "NUM_UNIT", "OCC_STAT", "STATE", "ZIP_3",
"MI_PCT", "Product.Type", "CSCORE_C", "MI_TYPE",
"RELOCATION_FLG"
)
base_path = "/home/capcolabs/data/FannieMae"
all_acq = join(base_path, "Acquisition_All")
files = [
join(all_acq, f) for f in listdir(all_acq)
]
###Output
_____no_output_____
###Markdown
We create a dataframe for each of the files and add a `QUARTER` column to each, parsed from the filename (the combined year and quarter)
###Code
DFS = []
for file in tqdm(files):
df = pd.read_csv(
file,
names=AcquisitionColumnNames,
header=None,
sep="|"
)
df['QUARTER'] = file.split("/")[-1].replace(".txt","").split("_")[-1]
DFS.append(df)
df = pd.concat(DFS)
df.columns
df['ORIG_DTE'] = pd.to_datetime(df["ORIG_DTE"])
###Output
_____no_output_____
###Markdown
Getting Monthly DataWe will group by the `ORIG_DTE` and use this to get the various descriptive statistics for our dataset.
###Code
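# NOTE: `performance_df` is not created above; it is assumed to hold the concatenated
# monthly performance records (the Performance files read in the section further below).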
loans = performance_df.groupby("LOAN_ID", sort=True)['Delq.Status'].max()
ID_To_Delinq = {}
for row in loans.iteritems():
loan_id, delinq = row
ID_To_Delinq[loan_id] = delinq
credit_score_mean = df.groupby("ORIG_DTE", sort=True)['CSCORE_B'].mean()
credit_score_std = df.groupby("ORIG_DTE", sort=True)['CSCORE_B'].std()
plt.plot(credit_score_mean)
plt.plot(credit_score_mean - credit_score_std)
plt.plot(credit_score_mean + credit_score_std)
oltv = df.groupby("ORIG_DTE", sort=True)['OLTV'].mean()
oltv_std = df.groupby("ORIG_DTE", sort=True)['OLTV'].std()
plt.plot(oltv)
plt.plot(oltv - oltv_std)
plt.plot(oltv + oltv_std)
orate = df.groupby("ORIG_DTE", sort=True)['ORIG_RT'].mean()
orate_std = df.groupby("ORIG_DTE", sort=True)['ORIG_RT'].std()
plt.plot(orate)
plt.plot(orate - orate_std)
plt.plot(orate + orate_std)
dti = df.groupby("ORIG_DTE", sort=True)['DTI'].mean()
dti_std = df.groupby("ORIG_DTE", sort=True)['DTI'].std()
plt.plot(dti)
plt.plot(dti - dti_std)
plt.plot(dti + dti_std)
###Output
_____no_output_____
###Markdown
Analyzing the Performance Set
###Code
base_path = "/home/capcolabs/data/FannieMae"
all_acq = join(base_path, "Performance_All")
files = [
join(all_acq, f) for f in listdir(all_acq)
]
print(f'There are {len(files)} Performance Files!')
PerformanceColumnNames = (
"LOAN_ID", "Monthly.Rpt.Prd", "Servicer.Name",
"LAST_RT", "LAST_UPB", "Loan.Age", "Months.To.Legal.Mat",
"Adj.Month.To.Mat", "Maturity.Date", "MSA",
"Delq.Status", "MOD_FLAG", "Zero.Bal.Code",
"ZB_DTE", "LPI_DTE", "FCC_DTE","DISP_DT",
"FCC_COST", "PP_COST", "AR_COST", "IE_COST",
"TAX_COST", "NS_PROCS","CE_PROCS", "RMW_PROCS",
"O_PROCS", "NON_INT_UPB", "PRIN_FORG_UPB_FHFA",
"REPCH_FLAG", "PRIN_FORG_UPB_OTH", "TRANSFER_FLG"
)
from sqlalchemy import create_engine
engine = create_engine('postgres://postgres@localhost:5432', echo=False)
DFS = []
FCC = {}
for file in tqdm(files):
pf = pd.read_csv(
file,
names=PerformanceColumnNames,
header=None,
sep="|"
)
pf['QUARTER'] = file.split("/")[-1].replace(".txt","").split("_")[-1]
pf.to_sql('performance', con=engine, if_exists='append')
###Output
0%| | 0/75 [00:00<?, ?it/s] |
examples/aerosols_pysics_hygroscopicity.ipynb | ###Markdown
HygroscopicGrowthFactorDistributions We need to generate a data set that can be used to instantiate a HygroscopicGrowthFactorDistributions instance. Here we take ARM data generated by an HTDMA. The ARM data contains gf-distributions for different diameters, so we select one (200.0 nm).
###Code
fname = '../atmPy/unit_testing/test_data/sgptdmahygC1.b1.20120601.004227.cdf'
out = arm.read_netCDF(fname, data_quality= 'patchy', leave_cdf_open= False)
###Output
_____no_output_____
###Markdown
in general Peaks in the gf-distribution are detected and fit with normal distributions (in log space). The fit parameters are tightly constrained to avoid run-away parameters and unrealistic results; this, in turn, can produce unexpected fits ... the hard-coded fit parameters/boundaries might need adjustment. Growth modes Position of the detected growth modes and the ratio of particles in each mode as a function of time. Here plotted on top of the gf-distribution time series.
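(Concretely, each detected mode is fit with a Gaussian of the form $A_i\exp\left(-\frac{(x-\mu_i)^2}{2\sigma_i^2}\right)$, with amplitude $A_i$, position $\mu_i$ and width $\sigma_i$, and the full distribution with the sum of these Gaussians, as in the `multi_gauss` helper near the end of this notebook; $x$ is the growth-factor axis, fit in log space per the note above.)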
###Code
out.hyg_distributions_d200nm.plot(growth_modes=True)
out.hyg_distributions_d200nm.growth_modes_kappa
###Output
_____no_output_____
###Markdown
Mixing state I came up with the following definition; it should be adjusted if there is a better one in the literature. The mixing state is given by the Pythagorean sum (Euclidean norm) of the particle ratios of all detected growth modes in a growth-factor distribution. E.g. if three modes were detected with ratios $r_1$, $r_2$, $r_3$, the mixing state is given by $\sqrt{r_1^2 + r_2^2 + r_3^2}$.
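A minimal numpy sketch of this definition follows (an illustration only, not the atmPy implementation; it assumes `ratios` is a `(time, n_modes)` array of the detected growth-mode particle ratios).
###Code
# Illustrative sketch of the mixing-state definition above (not the atmPy implementation).
import numpy as np

def mixing_state(ratios):
    """sqrt(sum_i r_i^2) per time step; NaNs (undetected modes) are ignored."""
    return np.sqrt(np.nansum(np.asarray(ratios, dtype=float)**2, axis=1))

mixing_state([[0.5, 0.5], [1.0, np.nan]])  # -> array([0.7071..., 1.])
###Output
_____no_output_____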
###Code
out.hyg_distributions_d200nm.mixing_state.plot(marker = 'o', ls = '')
###Output
_____no_output_____
###Markdown
Grown size distribution This is the sum of all grown size distributions. For the optical properties the individual growth-mode information is used, so the change in the refractive index (which is different for each growth mode) is considered individually.
###Code
fname = '../atmPy/unit_testing/test_data/sgptdmaapssizeC1.c1.20120601.004227.cdf'
tdmaaps = arm.read_netCDF(fname, data_quality= 'patchy', leave_cdf_open= False)
sd = tdmaaps.size_distribution
hgfd = out.hyg_distributions_d200nm
# gmk = out.hyg_distributions_d200nm.growth_modes_kappa
sd.convert2dVdlogDp().plot()
sd.hygroscopicity.parameters.growth_distribution = hgfd
sd.hygroscopicity.parameters.RH = 90
sd.hygroscopicity.grown_size_distribution.sum_of_all_sizeditributions.convert2dVdlogDp().plot()
###Output
_____no_output_____
###Markdown
Optical properties scattering
###Code
sd.optical_properties.parameters.wavelength = 550
sd.optical_properties.parameters.refractive_index = 1.5
sd.hygroscopicity.grown_size_distribution.optical_properties.scattering_coeff.plot()
###Output
/Users/htelg/prog/atm-py/atmPy/aerosols/physics/optical_properties.py:112: RuntimeWarning: invalid value encountered in true_divide
y_phase_func = y_1p * 4 * _np.pi / scattering_cross_eff.sum()
###Markdown
fRH
###Code
a = sd.hygroscopicity.f_RH_85_40.plot()
sd.hygroscopicity.f_RH_85_0.plot(ax = a)
###Output
/Users/htelg/prog/atm-py/atmPy/aerosols/physics/optical_properties.py:112: RuntimeWarning: invalid value encountered in true_divide
y_phase_func = y_1p * 4 * _np.pi / scattering_cross_eff.sum()
###Markdown
catch fit runaways
###Code
from atmPy.tools import math_functions as _math_functions
def multi_gauss(x, *params, verbose=False):
# print(params)
y = np.zeros_like(x)
for i in range(0, len(params), 3):
if verbose:
print(len(params), i)
amp = params[i]
pos = params[i + 1]
sig = params[i + 2]
y = y + _math_functions.gauss(x, amp, pos, sig)
return y
# %%debug --breakpoint /Users/htelg/prog/atm-py/atmPy/aerosols/physics/hygroscopicity.py:523
out.hyg_distributions_d200nm.plot(growth_modes=True)
###Output
_____no_output_____
###Markdown
In atmPy.aerosols.physics.hygroscopicity the fit inputs were exported from the ipdb debugger for inspection here: `globals()['x'] = x; globals()['y'] = y; globals()['param'] = param; globals()['bound_l'] = bound_l; globals()['bound_h'] = bound_h`
###Code
x = atmPy.aerosols.physics.hygroscopicity.x
y = atmPy.aerosols.physics.hygroscopicity.y
param = atmPy.aerosols.physics.hygroscopicity.param
bound_l = atmPy.aerosols.physics.hygroscopicity.bound_l
bound_h= atmPy.aerosols.physics.hygroscopicity.bound_h
plt.plot(x,y)
plt.plot(x, y_start)
plt.plot(x, new_y)
parry = np.array(param)
# parry[::3] *= 10
y_start = multi_gauss(x, *parry)
# fitres, _ = atmPy.aerosols.physics.hygroscopicity._curve_fit(multi_gauss, x, y, p0=param[:-3], bounds=(bound_l[:-3], bound_h[:-3]))
fitres, _ = atmPy.aerosols.physics.hygroscopicity._curve_fit(multi_gauss, x, y, p0=parry, bounds=(bound_l, bound_h),
# max_nfev = 10000
)
new_y = multi_gauss(x, *fitres)
###Output
_____no_output_____
###Markdown
Kappa In this section a kappa is defined instead of a growth distribution.
###Code
# fname = '../atmPy/unit_testing/test_data/sgptdmaapssizeC1.c1.20120601.004227.cdf
fname = '/Users/htelg/data/ARM/SGP/tdmaaps/sgptdmaapssizeC1.c1.20120201.002958.cdf'
tdmaaps = arm.read_netCDF(fname, data_quality= 'patchy', leave_cdf_open= False)
fname = '/Users/htelg/data/ARM/SGP/acsm/sgpaosacsmC1.b1.20120201.002022.cdf'
acsm = arm.read_netCDF(fname, data_quality= 'patchy', leave_cdf_open= False)
tdmaaps.size_distribution.parameters4reductions.wavelength = 550
# %%debug --breakpoint /Users/htelg/prog/atm-py/atmPy/aerosols/physics/hygroscopicity.py:606
tdmaaps.size_distribution.hygroscopicity.parameters.kappa = acsm.kappa
tdmaaps.size_distribution.hygroscopicity.parameters.refractive_index = acsm.refractive_index
fRH_nams_kams = tdmaaps.size_distribution.hygroscopicity.f_RH_85_0.copy()
# %%debug --breakpoint /Users/htelg/prog/atm-py/atmPy/aerosols/physics/hygroscopicity.py:742
tdmaaps.size_distribution.hygroscopicity.parameters.kappa = acsm.kappa
tdmaaps.size_distribution.hygroscopicity.parameters.refractive_index = 1.5#acsm.refractive_index
fRH_nfix_kams = tdmaaps.size_distribution.hygroscopicity.f_RH_85_0.copy()
tdmaaps.size_distribution.hygroscopicity.parameters.kappa = acsm.kappa.data.values.mean() #acsm.kappa
tdmaaps.size_distribution.hygroscopicity.parameters.refractive_index = 1.5#acsm.refractive_index
fRH_nfix_kfix = tdmaaps.size_distribution.hygroscopicity.f_RH_85_0.copy()
a = fRH_nfix_kfix.plot(label = 'nfix kfix')
fRH_nfix_kams.plot(ax = a, label = 'nfix_kams')
fRH_nams_kams.plot(ax = a, label = 'nams_kams')
a.legend()
###Output
_____no_output_____ |
notebooks/cores/core-number.ipynb | ###Markdown
Core NumberIn this notebook, we will use cuGraph to compute the core number of every vertex in our test graph Notebook Credits* Original Authors: Bradley Rees* Created: 10/28/2019* Last Edit: 03/03/2020RAPIDS Versions: 0.13Test Hardware* GV100 32G, CUDA 10.2 IntroductionCore Number computes the core number for every vertex of a graph G. A k-core of a graph is a maximal subgraph that contains nodes of degree k or more. A node has a core number of k if it belongs to a k-core but not to k+1-core. This call does not support a graph with self-loops and parallel edges.For a detailed description of the algorithm see: https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)It takes as input a cugraph.Graph object and returns as output a cudf.Dataframe object To compute the K-Core Number cluster in cuGraph use: * __df = cugraph.core_number(G)__ * G: A cugraph.Graph object Returns:* __df : cudf.DataFrame__ * df['vertex'] - vertex ID * df['core_number'] - core number of that vertex cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).* Prep
###Code
# Import needed libraries
import cugraph
import cudf
###Output
_____no_output_____
###Markdown
Read data using cuDF
###Code
# Test file
datafile='../data//karate-data.csv'
# read the data using cuDF
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
# create a Graph
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
###Output
_____no_output_____
###Markdown
Now compute the Core Number
###Code
# Call k-cores on the graph
df = cugraph.core_number(G)
df
###Output
_____no_output_____
###Markdown
Core NumberIn this notebook, we will use cuGraph to compute the core number of every vertex in our test graph Notebook Credits* Original Authors: Bradley Rees* Created: 10/28/2019* Last Edit: 08/16/2020RAPIDS Versions: 0.13Test Hardware* GV100 32G, CUDA 10.2 IntroductionCore Number computes the core number for every vertex of a graph G. A k-core of a graph is a maximal subgraph that contains nodes of degree k or more. A node has a core number of k if it belongs to a k-core but not to k+1-core. This call does not support a graph with self-loops and parallel edges.For a detailed description of the algorithm see: https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)It takes as input a cugraph.Graph object and returns as output a cudf.Dataframe object To compute the K-Core Number cluster in cuGraph use: * __df = cugraph.core_number(G)__ * G: A cugraph.Graph object Returns:* __df : cudf.DataFrame__ * df['vertex'] - vertex ID * df['core_number'] - core number of that vertex Some notes about vertex IDs...* The current version of cuGraph requires that vertex IDs be representable as 32-bit integers, meaning graphs currently can contain at most 2^32 unique vertex IDs. However, this limitation is being actively addressed and a version of cuGraph that accommodates more than 2^32 vertices will be available in the near future.* cuGraph will automatically renumber graphs to an internal format consisting of a contiguous series of integers starting from 0, and convert back to the original IDs when returning data to the caller. If the vertex IDs of the data are already a contiguous series of integers starting from 0, the auto-renumbering step can be skipped for faster graph creation times. * To skip auto-renumbering, set the `renumber` boolean arg to `False` when calling the appropriate graph creation API (eg. `G.from_cudf_edgelist(gdf_r, source='src', destination='dst', renumber=False)`). * For more advanced renumbering support, see the examples in `structure/renumber.ipynb` and `structure/renumber-2.ipynb` Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).* Prep
###Code
# Import needed libraries
import cugraph
import cudf
###Output
_____no_output_____
###Markdown
Read data using cuDF
###Code
# Test file
datafile='../data//karate-data.csv'
# read the data using cuDF
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
# create a Graph
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
###Output
_____no_output_____
###Markdown
Now compute the Core Number
###Code
# Call k-cores on the graph
df = cugraph.core_number(G)
df
###Output
_____no_output_____
###Markdown
Core NumberIn this notebook, we will use cuGraph to compute the core number of every vertex in our test graph Notebook Credits* Original Authors: Bradley Rees* Created: 10/28/2019* Last Edit: 10/28/2019RAPIDS Versions: 0.10.0Test Hardware* GV100 32G, CUDA 10.0 IntroductionCore Number computes the core number for every vertex of a graph G. A k-core of a graph is a maximal subgraph that contains nodes of degree k or more. A node has a core number of k if it belongs to a k-core but not to k+1-core. This call does not support a graph with self-loops and parallel edges.For a detailed description of the algorithm see: https://en.wikipedia.org/wiki/Degeneracy_(graph_theory)It takes as input a cugraph.Graph object and returns as output a cudf.Dataframe object To compute the K-Core Number cluster in cuGraph use: * __df = cugraph.core_number(G)__ * G: A cugraph.Graph object Returns:* __df : cudf.DataFrame__ * df['vertex'] - vertex ID * df['core_number'] - core number of that vertex cuGraph Notice The current version of cuGraph has some limitations:* Vertex IDs need to be 32-bit integers.* Vertex IDs are expected to be contiguous integers starting from 0.cuGraph provides the renumber function to mitigate this problem. Input vertex IDs for the renumber function can be either 32-bit or 64-bit integers, can be non-contiguous, and can start from an arbitrary number. The renumber function maps the provided input vertex IDs to 32-bit contiguous integers starting from 0. cuGraph still requires the renumbered vertex IDs to be representable in 32-bit integers. These limitations are being addressed and will be fixed soon. Test DataWe will be using the Zachary Karate club dataset *W. W. Zachary, An information flow model for conflict and fission in small groups, Journal ofAnthropological Research 33, 452-473 (1977).* Prep
###Code
# Import needed libraries
import cugraph
import cudf
###Output
_____no_output_____
###Markdown
Read data using cuDF
###Code
# Test file
datafile='../data//karate-data.csv'
# read the data using cuDF
gdf = cudf.read_csv(datafile, delimiter='\t', names=['src', 'dst'], dtype=['int32', 'int32'] )
# create a Graph
G = cugraph.Graph()
G.from_cudf_edgelist(gdf, source='src', destination='dst')
###Output
_____no_output_____
###Markdown
Now compute the Core Number
###Code
# Compute the core number of every vertex in the graph
df = cugraph.core_number(G)
df
###Output
_____no_output_____ |
Walmart v1.ipynb | ###Markdown
Missing Value Treatment
###Code
#Handling missings
def Missing_imputation(x):
x = x.fillna(x.median())
return x
train_num=train_num.apply(Missing_imputation)
test_num=test_num.apply(Missing_imputation)
#print(df_train_merged_2.isnull().sum())
#print(df_test_merged_2.isnull().sum())
# df_test_merged_2['CPI']=df_test_merged_2.groupby(['Dept'])['CPI'].transform(lambda x: x.fillna(x.mean()))
# df_test_merged_2['Unemployment']=df_test_merged_2.groupby(['Dept'])['Unemployment'].transform(lambda x: x.fillna(x.mean()))
# df_train_merged_2=df_train_merged_2.fillna(0)
# df_test_merged_2=df_test_merged_2.fillna(0)
#print(df_train_merged_2.isnull().sum())
#print(df_test_merged_2.isnull().sum())
###Output
_____no_output_____
###Markdown
Outlier Treatment
###Code
#Handling Outliers
def outlier_capping(x):
x = x.clip(upper=x.quantile(0.99))
x = x.clip(lower=x.quantile(0.01))
return x
train_num=train_num.apply(outlier_capping)
test_num=test_num.apply(outlier_capping)
#df_train_merged_2.Weekly_Sales=np.where(df_train_merged_2.Weekly_Sales>100000, 100000,df_train_merged_2.Weekly_Sales)
df_train_merged_2.Weekly_Sales.plot.hist(bins=25)
#df_train_merged_2.loc[df_train_merged_2.Type== 'A']= 1
#df_train_merged_2.loc[df_train_merged_2.Type== 'B']= 2
#df_train_merged_2.loc[df_train_merged_2.Type== 'C']= 3
#df_test_merged_2.loc[df_test_merged_2.Type== 'A']= 1
#df_test_merged_2.loc[df_test_merged_2.Type== 'B']= 2
#df_test_merged_2.loc[df_test_merged_2.Type== 'C']= 3
###Output
_____no_output_____
###Markdown
Dummy Creation
###Code
# A utility function to create dummy variables
def create_dummies( df, colname ):
col_dummies = pd.get_dummies(df[colname], prefix=colname, drop_first=True)
df = pd.concat([df, col_dummies], axis=1)
df.drop( colname, axis = 1, inplace = True )
return df
for c_feature in ['IsHoliday', 'Type']:
train_cat.loc[:,c_feature] = train_cat[c_feature].astype('category')
train_cat = create_dummies(train_cat , c_feature )
train_cat.head()
for c_feature in ['IsHoliday', 'Type']:
test_cat.loc[:,c_feature] = test_cat[c_feature].astype('category')
test_cat = create_dummies(test_cat , c_feature )
train = pd.concat([train_num, train_cat], axis=1)
train.head()
test = pd.concat([test_num, test_cat], axis=1)
test.head()
train_corr2= train.corr()
train_corr2.to_csv('train_corr2.csv')
sns.heatmap(train.corr())
###Output
_____no_output_____
###Markdown
Model Building Linear Regression model (basic), phase 1
###Code
lm=smf.ols('Weekly_Sales~CPI+Dept+Fuel_Price+IsHoliday_True+MarkDown1+MarkDown2+MarkDown3+MarkDown4+MarkDown5+Size+Store+Temperature+Type_B+Type_C+Unemployment+Week', train).fit()
print(lm.summary())
lm=smf.ols('Weekly_Sales~CPI+Dept+IsHoliday_True+MarkDown3+MarkDown4+MarkDown5+Size+Store+Type_B+Type_C+Unemployment+Week', train).fit()
print(lm.summary())
train_new=train[['Weekly_Sales','CPI','Dept','IsHoliday_True','MarkDown3','MarkDown4','MarkDown5','Size','Store','Type_B','Type_C','Unemployment','Week']]
test_new=test[['CPI','Dept','IsHoliday_True','MarkDown3','MarkDown4','MarkDown5','Size','Store','Type_B','Type_C','Unemployment','Week']]
train_X=train_new[train_new.columns.difference(['Weekly_Sales'])]
train_y=train_new['Weekly_Sales']
test_X=test_new
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import cross_val_predict, cross_val_score
regressor_dt = DecisionTreeRegressor(max_depth=5,random_state=123)
regressor_dt.fit(train_X, train_y)
predict_train=regressor_dt.predict(train_X)
print('Mean Absolute Error:', metrics.mean_absolute_error(train_y, predict_train))
print('Mean Squared Error:', metrics.mean_squared_error(train_y, predict_train))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(train_y, predict_train)))
print("R-squared for Train:",regressor_dt.score(train_X, train_y))
y_pred = regressor_dt.predict(test_X)
y_pred
### Tuning dt
# list of values to try
max_depth_range = range(5, 15)
# list to store the average RMSE for each value of max_depth
RMSE_Scores = []
MSE_Scores = []
# use 14-fold cross-validation with each value of max_depth
for depth in max_depth_range:
treereg = DecisionTreeRegressor(max_depth=depth, random_state=345)
MSE_scores = cross_val_score(treereg, train_X, train_y, cv=14, scoring='neg_mean_squared_error')
RMSE_Scores.append(np.mean(np.sqrt(-MSE_scores)))
MSE_Scores.append(MSE_scores)
print (RMSE_Scores)
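# Pick the depth with the lowest cross-validated RMSE programmatically
# (a small hedged helper; assumes numpy is available as np, as elsewhere in this notebook)
best_depth = max_depth_range[int(np.argmin(RMSE_Scores))]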
# plot max_depth (x-axis) versus RMSE (y-axis)
plt.plot(max_depth_range, RMSE_Scores)
plt.xlabel('max_depth')
plt.ylabel('RMSE (lower is better)')
###Output
_____no_output_____
###Markdown
Final Dt
###Code
# max_depth=10 was best, so fit a tree using that parameter
treereg = DecisionTreeRegressor(max_depth=10, random_state=345)
treereg.fit(train_X, train_y)
treereg.feature_importances_
# "Gini importance" of each feature: the (normalized) total reduction of error brought by that feature
pd.DataFrame({'feature':train_new.columns.difference(['Weekly_Sales']), 'importance':treereg.feature_importances_})
# predictions
predict_train_dt=treereg.predict(train_X)
dtree=pd.DataFrame({'Actual':train_y, 'Predicted':predict_train_dt ,'Week':train_new.Week})
dtree
mean_week=dtree.groupby('Week').apply(lambda x:np.mean(x))
mean_week
mean_week.plot(kind='line',x='Week',y='Actual', color='yellow',ax=plt.gca())
mean_week.plot(kind='line',x='Week',y='Predicted', color='blue', ax=plt.gca())
plt.xlabel('Week Number')
plt.ylabel('Weekly Sales')
plt.title('Comparison of Predicted Sales and Actual Sales in Decision Tree')
plt.show()
print('Mean Absolute Error:', metrics.mean_absolute_error(train_y, predict_train_dt))
print('Mean Squared Error:', metrics.mean_squared_error(train_y, predict_train_dt))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(train_y, predict_train_dt)))
print("R-squared for Train:",treereg.score(train_X, train_y))
y_pred = treereg.predict(test_X)
pd.DataFrame(y_pred)
DT_output=pd.read_csv('test.csv')
DT_output['Weekly_Sales']=pd.DataFrame(y_pred)
DT_output.to_csv('DT_output.csv')
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestRegressor
rfr = RandomForestRegressor(max_depth=5,n_estimators=20, random_state=0)
rfr.fit(train_X, train_y)
pred = rfr.predict(train_X)
print('Mean Absolute Error:', metrics.mean_absolute_error(train_y, pred))
print('Mean Squared Error:', metrics.mean_squared_error(train_y, pred))
print('Root Mean Squared Error:', np.sqrt(metrics.mean_squared_error(train_y, pred)))
print("R-squared for Train:",rfr.score(train_X, train_y))
y_pred = rfr.predict(test_X)
y_pred
###Output
_____no_output_____
###Markdown
Tuning rf
###Code
from sklearn.model_selection import GridSearchCV
param_grid={'max_depth': range(8,15),
'n_estimators': (10, 50)}
# Perform Grid-Search
gsc = GridSearchCV(estimator=RandomForestRegressor(),param_grid=param_grid,cv=5, verbose=0, n_jobs=-1)
grid_result = gsc.fit(train_X, train_y)
grid_result.best_score_
grid_result.best_params_
###Output
_____no_output_____
###Markdown
Final rf
###Code
rfr = RandomForestRegressor(max_depth=14,n_estimators=50, random_state=0)
rfr.fit(train_X, train_y)
pred = rfr.predict(train_X)
rf=pd.DataFrame({'Actual':train_y, 'Predicted':pred, 'Week': train_new.Week})
rf
week_mean=rf.groupby('Week').apply(lambda x:np.mean(x))
week_mean
week_mean.plot(kind='line',x='Week',y='Actual', color='yellow',ax=plt.gca())
week_mean.plot(kind='line',x='Week',y='Predicted', color='blue', ax=plt.gca())
plt.xlabel('Week Number')
plt.ylabel('Weekly Sales')
plt.title('Comparison of Predicted and Actual Sales in Random Forest')
plt.show()
print("R-squared for Train:",rfr.score(train_X, train_y))
y_pred = rfr.predict(test_X)
pd.DataFrame(y_pred)
RF_output=pd.read_csv('test.csv')
RF_output['Weekly_Sales']=pd.DataFrame(y_pred)
RF_output.to_csv('RF_output.csv')
###Output
_____no_output_____ |
analysis_and_initial_cleaning.ipynb | ###Markdown
Initial cleaningThis notebook documents the reasoning process followed to clean the provided data. At the end, a script is generated with the clean data ready to be preprocessed. Load the data we will useWe will use the files entrenamiento_precios_vivienda.csv and prueba_precios_vivienda.csv. Note that the data in prueba_precios_vivienda.csv does not contain the housing price column. The idea of this file is that you fill in that column with the predictions produced by your model, and through an external validation process, Learning Code computes its performance. This is a very common practice in tests of this kind.
###Code
import pandas as pd
from content.utils.data_processing import load_csv_data, set_index
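# Note: load_csv_data and set_index are small project helpers from content/utils/data_processing.py.
# Their implementation is not shown in this notebook; presumably they wrap pd.read_csv and
# DataFrame.set_index on the record id column (this is an assumption, not part of the original code).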
trainData = load_csv_data("./content/sample_data/train.csv")
testData = load_csv_data("./content/sample_data/test.csv")
# Now we assign an index to handle the data more efficiently.
indexedTrainData = set_index(trainData)
indexedTestData = set_index(testData)
print(f'Train data have {indexedTrainData.shape[0]} rows, \nTest data only {indexedTestData.shape[0]}')
###Output
Train data have 9629 rows,
Test data only 3175
###Markdown
Now let's see how much data is missing in each column
###Code
indexedTrainData[indexedTrainData.columns[indexedTrainData.isnull().any()]].isnull().sum()
###Output
_____no_output_____
###Markdown
Few missing values, and the few that are absent can easily be filled in. The subsidy type column, however, has too many missing values.
###Code
indexedTestData[indexedTestData.columns[indexedTestData.isnull().any()]].isnull().sum()
###Output
_____no_output_____
###Markdown
Apart from the missing values in the price column (understandable, since those are the values to be filled in), the subsidy type column again shows too many missing values. The rest can be filled in. The approval date and subsidy type columns again have many missing values. Cleaning the dataA column with too many missing values in both dataframes is the approval date, so it will be removed. We can also argue that the date should not be relevant, at least compared to other parameters. The subsidy type column also shows many null values, so it will be dropped. Data that adds nothingThe columns that we consider to carry little relevant information and that should be deleted are:
###Code
extra_data = [
    'fecha_aprobación',
'tipo_subsidio',
'numero_garaje_1',
'matricula_garaje_1',
'numero_garaje_2',
'matricula_garaje_2',
'numero_garaje_3',
'matricula_garaje_3',
'numero_garaje_4',
'matricula_garaje_4',
'numero_garaje_5',
'matricula_garaje_5',
'numero_deposito_1',
'matricula_inmobiliaria_deposito_1',
'numero_deposito_2',
'matricula_inmobiliaria_deposito_2',
'numero_deposito_3',
'matricula_inmobiliaria_deposito_3',
'numero_deposito_4',
'matricula_inmobiliaria_deposito_4',
'numero_deposito_5',
'matricula_inmobiliaria_deposito_5',
'metodo_valuacion_1',
'concepto_del_metodo_1',
'metodo_valuacion_2',
'concepto_del_metodo_2',
'metodo_valuacion_3',
'concepto_del_metodo_3',
'metodo_valuacion_4',
'concepto_del_metodo_4',
'metodo_valuacion_5',
'concepto_del_metodo_5',
'metodo_valuacion_6',
'concepto_del_metodo_6',
'metodo_valuacion_7',
'concepto_del_metodo_7',
'metodo_valuacion_8',
'concepto_del_metodo_8',
'metodo_valuacion_9',
'concepto_del_metodo_9',
'Longitud',
'Latitud',
'tipo_deposito',
'numero_total_depositos',
]
###Output
_____no_output_____
###Markdown
The file `PuntosInteres.csv` was provided; we believe it could be cross-referenced with the `Longitud` and `Latitud` columns, but that requires classifying the types of points of interest (for example, "is a nearby pharmacy beneficial for the appraisal?"). We therefore decided not to take it into account given the time constraint. Valuation based on perception and the property descriptionOn the other hand, we have some columns that we believe can provide information but require additional work, mainly intensive preprocessing of the text, to extract a score or sentiment from them:These will be handled separately and integrated later into the training data.
###Code
descriptions_related = [
'descripcion_clase_inmueble',
'perspectivas_de_valorizacion',
'actualidad_edificadora',
'comportamiento_oferta_demanda',
'observaciones_generales_inmueble',
'observaciones_estructura',
'observaciones_generales_construccion',
'observaciones_dependencias',
'descripcion_tipo_inmueble',
'descripcion_uso_inmueble',
]
###Output
_____no_output_____
###Markdown
Influence of the zoneAn interesting case here is 'sector', since rural land is usually worth less per square meter than urban land. However, combining it with the previous columns can give more information about the specific zone, for example indicating that an apartment in an urban zone has a higher value than an urban lot.But the last 3 columns are the ones that provide the most information, since they describe in words whether the neighborhood or city is sought after. We believe that also using department, municipality, or neighborhood would in practice require labeling those columns (is Bogota good, bad, or irrelevant to the value?). That is why the first 3 were ultimately included for removal, since they do not provide as much information as we would like.
###Code
zone_related = [
'departamento_inmueble',
'municipio_inmueble',
'barrio',
'descripcion_general_sector',
'direccion_inmueble_informe',
'descripcion_general_sector',
]
###Output
_____no_output_____
###Markdown
Influence of the structureThese columns refer to elements of the structure itself, for example whether the structure looks safe enough.They will be set aside for now and integrated later after being analyzed separately.
###Code
structure_related = [
'observaciones_generales_inmueble',
'observaciones_estructura',
'observaciones_dependencias',
'observaciones_generales_construccion',
]
###Output
_____no_output_____
###Markdown
Area, height, dimensionsHere one could argue that the total area and the built area are the most important values and that the other values do not help individually; likewise, the height is not taken into account.
###Code
dimensions_related = [
'area_privada',
'area_garaje',
'area_deposito',
'area_terreno',
'area_construccion',
'area_otros',
'area_libre',
]
###Output
_____no_output_____
###Markdown
We note that there is a "garages" section; based on the analysis, the column that could provide the most information is `numero_total_de_garajes`, and alternatively `total_cupos_parquedaro`; the remaining columns are redundant.
###Code
garage_related = [
'garaje_cubierto_1',
'garaje_doble_1',
'garaje_paralelo_1',
'garaje_servidumbre_1',
'garaje_cubierto_2',
'garaje_doble_2',
'garaje_paralelo_2',
'garaje_servidumbre_2',
'garaje_cubierto_3',
'garaje_doble_3',
'garaje_paralelo_3',
'garaje_servidumbre_3',
'garaje_cubierto_4',
'garaje_doble_4',
'garaje_paralelo_4',
'garaje_servidumbre_4',
'garaje_cubierto_5',
'garaje_doble_5',
'garaje_paralelo_5',
'garaje_servidumbre_5',
    'garaje_visitantes', # already included in the total number of garages
]
###Output
_____no_output_____
###Markdown
Another section concerns the construction regulations; we think these are not important
###Code
norms_solumns = [
'altura_permitida',
'observaciones_altura_permitida',
'aislamiento_posterior',
'observaciones_aislamiento_posterior',
'aislamiento_lateral',
'observaciones_aislamiento_lateral',
'antejardin',
'observaciones_antejardin',
'indice_ocupacion',
'observaciones_indice_ocupacion',
'indice_construccion',
'observaciones_indice_construccion',
'predio_subdividido_fisicamente', # Si | No (Contiene datos espureos)
'rph', # (Muchas, se nota tambien muchos datos espureos)
'sometido_a_propiedad_horizontal', # Si | No (Contiene datos espureos)
]
###Output
_____no_output_____
###Markdown
Columns for very specific casesThese are columns that only apply to rather uncommon cases. For example, the number of units refers to how many subdivisions a property has, but it cannot apply to the majority of cases, which are individual houses or lots, where a zero is usually assigned.
###Code
specific_cases = [
'condicion_ph',
'ajustes_sismoresistentes', # No Disponibles | No Reparados | Reparados (Contiene datos espureos)
'danos_previos', # No disponible | Sin daรฑos previos | Con daรฑos previos (Contiene datos espureos)
]
###Output
_____no_output_____
###Markdown
ValueHere we only keep the total value; the rest are deleted.We will keep `valor_total_avaluo`, since it is the one required in the project description, even though 'valor_avaluo_en_uvr' is independent of the date, that is, it shows more clearly the difference between buying a lot in 2000 and buying one in 2020, for example.
###Code
value_related = [
'valor_area_privada',
'valor_area_garaje',
'valor_area_deposito',
'valor_area_terreno',
'valor_area_construccion',
'valor_area_otros',
'valor_area_libre',
'valor_uvr',
'valor_avaluo_en_uvr',
]
###Output
_____no_output_____
###Markdown
With the above we can clean the initial data of unnecessary columns, and later integrate the ones that will be analyzed separately.
###Code
columnsToErase = extra_data + descriptions_related + zone_related + structure_related + dimensions_related + garage_related + norms_solumns + value_related + specific_cases
cleanTrainData = indexedTrainData.drop(columnsToErase, axis=1)
cleanTestData = indexedTestData.drop(columnsToErase, axis=1)
print(f'Train data now have {cleanTrainData.shape}')
print(f'Test data now have {cleanTestData.shape}')
cleanTrainData.head()
###Output
_____no_output_____
###Markdown
Let's first select the columns with categorical data
###Code
categorical_columns = [
# Seccion avaluo
    'objeto', # Originación | Remate (Contiene datos espureos)
    'motivo', # Crédito hipotecario de vivienda | Empleados | Leasing Visto Bueno | Leasing Habitacional | Remates | Garantía | Actualizacion de garantias | Colomext Hipotecario | Credito Comercial | Compra de cartera | Dacion en Pago | Leasing Comercial | Reformas | Originacion | Leasing Inmobiliario - Persona Natural
    'proposito', # Crédito hipotecario de vivienda | Garantía Hipotecaria | Transaccion Comercial de Venta | Valor Asegurable
'tipo_avaluo', # Hipotecario | Remates | Garantia Hipotecaria
'tipo_credito', # Vivienda | Diferente de Vivienda | Hipotecario
# Seccion Informacion general y situacional
'sector', # Urbano | Rural | Poblado (Contiene datos espureos)
# Seccion Informacion del inmueble
'tipo_inmueble', # Apartamento | Casa | Casa Rural | Conjunto o Edificio | Deposito | Finca | Garaje | Lote | Lote Urbano | Oficina (Contiene datos espureos)
'uso_actual', # (Muchas, se nota tambien muchos datos espureos)
'clase_inmueble', # (Muchas, se nota tambien muchos datos espureos)
'ocupante', # (Muchas, se nota tambien muchos datos espureos)
'area_actividad', # (Muchas, se nota tambien muchos datos espureos)
'uso_principal_ph', # Vivienda | Finca | Viviend, Serv y Comercio (Muchas, se nota tambien muchos datos espureos)
'estructura', # Mamposteria Estructural | Tradicional | Industrializada | Muro de carga (Contiene datos espureos)
'cubierta', # Teja Metalica | Teja Plastica | Tradicional | Teja fibrocemento | Teja de Barro (Contiene datos espureos)
'fachada', # Concreto texturado | Flotante | Graniplast | Industrilizada | Ladrillo a la vista (Contiene datos espureos)
'estructura_reforzada', # Flotante | Graniplast | Trabes coladas en sitio | No tiene trabes (Contiene datos espureos)
'material_de_construccion', # Acero | Adobe, bahareque o tapia | Concreto Reforzado (Contiene datos espureos)
    'detalle_material', # Mampostería reforzada | Pórticos | Mampostería confinada (Contiene datos espureos)
'iluminacion', # Bueno | Paneles prefabricados | Muros (Contiene datos espureos)
'calidad_acabados_cocina', # Integral | Semi-Integral | Sencillo | Bueno | Lujoso | Normal | Regular | Sin Acabados
# Seccion Garage
'tipo_garaje', # Bueno | Comunal | Exclusivo | Integral | Lujoso | No tiene | Normal | Privado | Regular | Semi-Integral | Sencillo | Sin Acabados
]
binary_columns = [
'alcantarillado_en_el_sector', # Si | No (Contiene datos espureos)
'acueducto_en_el_sector', # Si | No (Contiene datos espureos)
'gas_en_el_sector', # Si | No
'energia_en_el_sector', # Si | No
'telefono_en_el_sector', # Si | No
'vias_pavimentadas', # Si | No
'sardineles_en_las_vias', # Si | No
'andenes_en_las_vias', # Si | No
'barrio_legal', # Si | No (Contiene datos espureos)
'paradero', # Si | No (Contiene datos espureos)
'alumbrado', # Si | No (Contiene datos espureos)
'arborizacion', # Si | No (Contiene datos espureos)
'alamedas', # Si | No
'ciclo_rutas', # Si | No
'alcantarillado_en_el_predio', # Si | No (Contiene datos espureos)
'acueducto_en_el_predio', # Si | No (Contiene datos espureos)
'gas_en_el_predio', # Si | No (Contiene datos espureos)
'energia_en_el_predio', # Si | No (Contiene datos espureos)
'telefono_en_el_predio', # Si | No (Contiene datos espureos)
'porteria', # Si | No (Contiene datos espureos)
'citofono', # Si | No (Contiene datos espureos)
'bicicletero', # Si | No (Contiene datos espureos)
'piscina', # Si | No (Contiene datos espureos)
'tanque_de_agua', # Si | No (Contiene datos espureos)
'club_house', # Si | No (Contiene dato espureo "0", podria tomarse como No)
'teatrino', # Si | No (Contiene dato espureo "0", podria tomarse como No)
'sauna', # Si | No (Contiene dato espureo "0", podria tomarse como No)
'vigilancia_privada', # Si | No (Contiene dato espureo "0", podria tomarse como No)
'administracion', # Si | No (Contiene datos espureos)
]
###Output
_____no_output_____
###Markdown
Ordinal dataThese express a quality through a value that can be ordered on a previously defined scale.
###Code
ordinal_columns = [
'estrato', # 1 - 6 (Contiene datos espureos)
'topografia_sector', # Inclinado | Ligera | Plano (Contiene datos espureos)
'condiciones_salubridad', # Buenas | Malas | Regulares (Contiene datos espureos)
'transporte', # Bueno | Regular | Malo (Contiene datos espureos)
'demanda_interes', # Nula | Bueno | Debil | Fuerte (Contiene datos espureos)
'nivel_equipamiento_comercial', # En Proyecto | Regular Malo | Bueno | Muy bueno (Contiene datos espureos)
'tipo_vigilancia', # 12 Horas | 24 Horas | No (Dato espureo Si podria tomarse como "24 Horas", dato espureo "0" podria tomarse como "No")
'tipo_fachada', # De 0 a 3 metros | de 3 a 6 metros | Mayor a 6 metros (Contiene datos espureos)
'ventilacion', # Bueno | Regular | Malo (Contiene datos espureos)
'irregularidad_planta', # Sin irregularidad | No disponible | Con irregularidad (Contiene datos espureos)
'irregularidad_altura', # Sin irregularidad | No disponible | Con irregularidad (Contiene datos espureos)=
'estado_acabados_cocina', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
'estado_acabados_pisos', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'calidad_acabados_pisos', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'estado_acabados_muros', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'calidad_acabados_muros', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'estado_acabados_techos', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'calidad_acabados_techos', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'estado_acabados_madera', # Bueno | Sin Acabados | Normal | Sencillo (Contiene datos espureos)
'calidad_acabados_madera', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
'estado_acabados_metal', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
'calidad_acabados_metal', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
'estado_acabados_banos', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
'calidad_acabados_banos', # Bueno | Lujoso | Malo | Normal | Regular | Sencillo | Sin acabados
]
###Output
_____no_output_____
###Markdown
Numeric dataThese data are expressed as numbers and can indeed be measured.
###Code
numeric_columns = [
# Seccion Informacion del inmueble
'unidades', # [Int] 0 - 92 (Contiene datos espureos)
'contadores_agua', # [Int] 0 - 6 (Contiene datos espureos "92", "Aplica", "No", podria asumirse que es cero, "Resultante")
'contadores_luz', # [Int] 0 - 6 (Contiene datos espureos "92", "Aplica", "No", podria asumirse que es cero)
'accesorios', # # [Int] 0 - 46 (Contiene dato espureo "No", podria asumirse que es cero)
'area_valorada', # [Float] 0.0 - 1058.2 (Contiene unos numeros gitantes)
'numero_piso', # [Int] 0 - 99 (Contiene datos espureos)
'numero_de_edificios', # [Int] 0 - 99 (Contiene datos espureos)
    'vetustez', # sometimes given as years of age, sometimes as the year of construction
'pisos_bodega', # [Int] 0 - 52 (Contiene datos espureos)
'habitaciones', # [Int] 0 - 32 (Contiene datos espureos)
'estar_habitacion', # [Int] 0 - 9 (Contiene datos espureos)
'cuarto_servicio', # [Int] 0 - 5 (Contiene datos espureos)
'closet', # [Int] 0 - 17 (Contiene datos espureos)
'sala', # [Int] 0 - 24 (Contiene datos espureos)
'comedor', # [Int] 0 - 31 (Contiene datos espureos)
'bano_privado', # [Int] 0 - 24 (Contiene datos espureos)
'bano_social', # [Int] 0 - 12
'bano_servicio', # [Int] 0 - 11
'cocina', # [Int] 0 - 13
'estudio', # [Int] 0 - 3
'balcon', # [Int] 0 - 11
'terraza', # [Int] 0 - 9
'patio_interior', # [Int] 0 - 11
'jardin', # [Int] 0 - 4
'zona_de_ropas', # [Int] 0 - 13
'zona_verde_privada', # [Int] 0 - 4
'local', # [Int] 0 - 10
'oficina', # [Int] 0 - 9
'bodega', # [Int] 0 - 2
# Seccion Garage
'numero_total_de_garajes', # [Int] 0 - 5 (Contiene datos espureos)
'total_cupos_parquedaro', # [Int] 0 - 8 (Contiene datos espureos)
]
###Output
_____no_output_____
###Markdown
Several spurious values were noticed; for id 13365 we could see that the columns were shifted, so the data was moved manually.Another fix was made on record 320437301104211601among others that had shifted columns The following were deleted472485547621833837141915741675174933654059572877395814731532274628554008436546694783524352695986621664147200742079348124839088141004710543113211230512839160481723911928039462984393112921394172217772997Upon visually inspecting the data, most of them because the total value is empty, together with other errors1398341752659673015942403276928713568364538353864391140314059413541684267449246534726498350015496593563726866691971067529758678717982808996029618102891075310883111841181012396125301319513204134201410814210142251468915230160571620416508165401676416850169241705717234172361744217543176861772817796178061814318214267046126403 Regarding the test file, problems were also found and were solved as followsSeveral records that had the same error as in the training set were fixed. Finally, we finished cleaning the data by removing empty cells.In the case of the numeric columns, empty cells were filled with zeros.
###Code
cleanTrainData[numeric_columns] = cleanTrainData[numeric_columns].fillna(0)
for column in numeric_columns:
cleanTrainData[column] = cleanTrainData[column].str.replace(",", ".")
cleanTrainData[numeric_columns] = cleanTrainData.loc[:,numeric_columns].transform(lambda x: x.map(lambda x: { "Si": 1., "No": 0. }.get(x,x)))
cleanTrainData[numeric_columns] = cleanTrainData[numeric_columns].apply(pd.to_numeric).astype(float)
cleanTrainData[numeric_columns].isnull().sum()
cleanTrainData[numeric_columns]
###Output
_____no_output_____
###Markdown
Now we convert the boolean columns to numbers (1/0)
###Code
cleanTrainData[binary_columns] = cleanTrainData.loc[:,binary_columns].transform(lambda x: x.map(lambda x: { "Si": 1., "No": 0. }.get(x,x)))
cleanTrainData[binary_columns] = cleanTrainData[binary_columns].fillna(0.).apply(pd.to_numeric).astype(float)
cleanTrainData[binary_columns].isnull().sum()
cleanTrainData[binary_columns]
###Output
_____no_output_____
###Markdown
Now we handle some ordinal columns individually
###Code
cleanTrainData['estrato'] = cleanTrainData.loc[:,'estrato'].transform(lambda x: x.map(lambda x: { "Comercial": 7., "Oficina": 8., "Industrial": 9., "No": 0. }.get(x,x)))
cleanTrainData['topografia_sector'] = cleanTrainData.loc[:,'topografia_sector'].transform(lambda x: x.map(lambda x: { "Plano": 0., "Ligera": 1., "Inclinado": 2., "Accidentada": 3., "No": 0. }.get(x,x)))
cleanTrainData['condiciones_salubridad'] = cleanTrainData.loc[:,'condiciones_salubridad'].transform(lambda x: x.map(lambda x: { "Bueno": 1., "Buenas": 1., "Regulares": 2., "Malas": 3., "No": 0. }.get(x,x)))  # duplicate "Malas" key removed; Python kept the last value (3.) anyway
cleanTrainData['transporte'] = cleanTrainData.loc[:,'transporte'].transform(lambda x: x.map(lambda x: { "Malo": 0., "Regular": 1., "Bueno": 2., "Vivienda": 3., "Hotelero": 4., "No": 0. }.get(x,x)))
cleanTrainData['demanda_interes'] = cleanTrainData.loc[:,'demanda_interes'].transform(lambda x: x.map(lambda x: { "Nula": 0., "Dรฉbil": 1., "Media": 2., "Bueno": 3., "Fuerte": 4., "No": 0. }.get(x,x)))
cleanTrainData['nivel_equipamiento_comercial'] = cleanTrainData.loc[:,'nivel_equipamiento_comercial'].transform(lambda x: x.map(lambda x: { "En Proyecto": 1., "Regular Malo": 0., "Bueno": 2., "Muy bueno": 3., "No": 0. }.get(x,x)))
cleanTrainData['tipo_vigilancia'] = cleanTrainData.loc[:,'tipo_vigilancia'].transform(lambda x: x.map(lambda x: { "12 Horas": 1., "24 Horas": 2., "No": 0. }.get(x,x)))
cleanTrainData['tipo_fachada'] = cleanTrainData.loc[:,'tipo_fachada'].transform(lambda x: x.map(lambda x: { "De 0 a 3 metros": 1., "De 3 a 6 metros": 2., "Mayor a 6 metros": 3., "No": 0. }.get(x,x)))
cleanTrainData['ventilacion'] = cleanTrainData.loc[:,'ventilacion'].transform(lambda x: x.map(lambda x: { "Malo": 0., "Regular": 1., "Bueno": 2., "No": 0. }.get(x,x)))
cleanTrainData['irregularidad_planta'] = cleanTrainData.loc[:,'irregularidad_planta'].transform(lambda x: x.map(lambda x: { "No disponible": 0., "Con irregularidad": 1., "Sin irregularidad": 2., "No": 0. }.get(x,x)))
cleanTrainData['irregularidad_altura'] = cleanTrainData.loc[:,'irregularidad_altura'].transform(lambda x: x.map(lambda x: { "No disponible": 0., "Con irregularidad": 1., "Sin irregularidad": 2., "No": 0. }.get(x,x)))
dictionary_details = { "Malo": 0., "Sin Acabados": 1., "Sin acabados": 1., "Sencillo": 2., "Normal": 4., "Bueno": 5., "Lujoso": 5., "No disponible": 0., "Regular": 3., "No": 0.}
cleanTrainData['estado_acabados_cocina'] = cleanTrainData.loc[:,'estado_acabados_cocina'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_pisos'] = cleanTrainData.loc[:,'estado_acabados_pisos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_pisos'] = cleanTrainData.loc[:,'calidad_acabados_pisos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_muros'] = cleanTrainData.loc[:,'estado_acabados_muros'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_muros'] = cleanTrainData.loc[:,'calidad_acabados_muros'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_techos'] = cleanTrainData.loc[:,'estado_acabados_techos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_techos'] = cleanTrainData.loc[:,'calidad_acabados_techos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_madera'] = cleanTrainData.loc[:,'estado_acabados_madera'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_madera'] = cleanTrainData.loc[:,'calidad_acabados_madera'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_metal'] = cleanTrainData.loc[:,'estado_acabados_metal'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_metal'] = cleanTrainData.loc[:,'calidad_acabados_metal'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['estado_acabados_banos'] = cleanTrainData.loc[:,'estado_acabados_banos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData['calidad_acabados_banos'] = cleanTrainData.loc[:,'calidad_acabados_banos'].transform(lambda x: x.map(lambda x: dictionary_details.get(x,x)))
cleanTrainData[ordinal_columns] = cleanTrainData[ordinal_columns].fillna(0.).apply(pd.to_numeric).astype(float)
cleanTrainData[ordinal_columns].isnull().sum()
cleanTrainData[ordinal_columns]
###Output
_____no_output_____
###Markdown
Finally, we convert the categorical columns into dummy variables
###Code
cleanTrainData = pd.get_dummies(cleanTrainData,
columns = categorical_columns,
dtype=float
)
cleanTrainData
###Output
_____no_output_____ |
LDA/LDA.ipynb | ###Markdown
Some information about the Algorithm
###Code
from sklearn import datasets
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in newer scikit-learn versions
from sklearn.metrics import accuracy_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# load Iris dataset from sklearn
iris =datasets.load_iris()
features = iris.data
# print (features)
labels = iris.target
# print(labels)
# split the data to 60% training and 40% testing
x_train,x_test,y_train,y_test=train_test_split(features,labels,test_size=.4)
print('Training samples is : ',len(x_train))
print('Testing samples is : ', len((x_test)))
LDA = LinearDiscriminantAnalysis()
clf = LDA.fit(x_train,y_train)
predictions = LDA.predict(x_test)
print('Training ......')
print ('Accuracy is : ',accuracy_score(y_test,predictions))
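# (Hedged extra, not part of the original run) LDA can also be used for supervised
# dimensionality reduction; n_components must be smaller than the number of classes (3 for Iris):
# features_2d = LinearDiscriminantAnalysis(n_components=2).fit_transform(features, labels)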
###Output
Training samples is : 90
Testing samples is : 60
Training ......
Accuracy is : 0.983333333333
###Markdown
***Tweet Activity Over Years***
###Code
import plotly as py
import plotly.graph_objs as go
import plotly.offline  # imported so that py.offline.iplot(...) below works
tweets['datetime'] = pd.to_datetime(tweets['datetime'], format='%Y-%m-%d')
tweetsT = tweets['datetime']
trace = go.Histogram(
x=tweetsT,
marker=dict(
color='blue'
),
opacity=0.75
)
layout = go.Layout(
title='Tweet Activity in May',
height=450,
width=1200,
xaxis=dict(
title='Date and Month'
),
yaxis=dict(
title='Tweet Quantity'
),
bargap=0.2,
)
data = [trace]
fig = go.Figure(data=data, layout=layout)
py.offline.iplot(fig)
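# (Hedged alternative) outside a notebook, the same figure could be written to a standalone HTML file:
# py.offline.plot(fig, filename='tweet_activity.html')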
# Preparing a corpus for analysis and checking first 10 entries
corpus=[]
a=[]
for i in range(len(tweets['text'])):
a=tweets['text'][i]
corpus.append(a)
corpus[0:10]
TEMP_FOLDER = tempfile.gettempdir()
print('Folder "{}" will be used to save temporary dictionary and corpus.'.format(TEMP_FOLDER))
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
import nltk
nltk.download('stopwords')
# removing common words and tokenizing
list1 = ['corona', 'coronavirus','indonesia', 'indonesian','covid19', 'covid', 'via',
'city', 'names', 'may', 'today', 'new', 'could',
'24', '557', '678', '4', '20', '1520', '25773', '30', '10', '25216', '29', '1', '53', '28',
'รขโฌยฆ', 'รขโฌยข', 'รขโฌโข', 'รขโฌโ', 'รยซ', 'รขโฌ', 'รยป', 'รขโยฌ', 'รยฃ', 'รยฉ', 'รยฐc', ' รยฃ', 'รฅ', 'รข', 'รซ']
stoplist = stopwords.words('english') + list(punctuation) + list1
texts = [[word for word in str(document).lower().split() if word not in stoplist] for document in corpus]
dictionary = corpora.Dictionary(texts)
dictionary.save(os.path.join(TEMP_FOLDER, 'elon.dict')) # store the dictionary, for future reference
#print(dictionary)
#print(dictionary.token2id)
corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize(os.path.join(TEMP_FOLDER, 'elon.mm'), corpus) # store to disk, for later use
###Output
2020-06-23 11:02:31,023 : INFO : storing corpus in Matrix Market format to C:\Users\yusuf\AppData\Local\Temp\elon.mm
2020-06-23 11:02:31,025 : INFO : saving sparse matrix to C:\Users\yusuf\AppData\Local\Temp\elon.mm
2020-06-23 11:02:31,026 : INFO : PROGRESS: saving document #0
2020-06-23 11:02:31,056 : INFO : PROGRESS: saving document #1000
2020-06-23 11:02:31,083 : INFO : PROGRESS: saving document #2000
2020-06-23 11:02:31,108 : INFO : PROGRESS: saving document #3000
2020-06-23 11:02:31,134 : INFO : PROGRESS: saving document #4000
2020-06-23 11:02:31,140 : INFO : saved 4225x9746 matrix, density=0.132% (54414/41176850)
2020-06-23 11:02:31,142 : INFO : saving MmCorpus index to C:\Users\yusuf\AppData\Local\Temp\elon.mm.index
###Markdown
In the previous cells, I created a corpus of documents represented as a stream of vectors. To continue, let's use that corpus, with the help of Gensim. Creating a transformation The transformations are standard Python objects, typically initialized by means of a training corpus:Different transformations may require different initialization parameters; in the case of TfIdf, the "training" consists simply of going through the supplied corpus once and computing document frequencies of all its features.Training other models, such as Latent Semantic Analysis or Latent Dirichlet Allocation, is much more involved and, consequently, takes much more time.
###Code
tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
###Output
2020-06-23 11:02:31,153 : INFO : collecting document frequencies
2020-06-23 11:02:31,154 : INFO : PROGRESS: processing document #0
2020-06-23 11:02:31,172 : INFO : calculating IDF weights for 4225 documents and 9746 features (54414 matrix non-zeros)
###Markdown
NoteTransformations always convert between two specific vector spaces. The same vector space (= the same set of feature ids) must be used for training as well as for subsequent vector transformations. Failure to use the same input feature space, such as applying a different string preprocessing, using different feature ids, or using bag-of-words input vectors where TfIdf vectors are expected, will result in feature mismatch during transformation calls and consequently in either garbage output and/or runtime exceptions. From now on, tfidf is treated as a read-only object that can be used to apply a transformation to a whole corpus:
###Code
corpus_tfidf = tfidf[corpus] # step 2 -- use the model to transform vectors
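# Note on "same vector space": any new document must be mapped with the SAME dictionary
# before applying the model. A minimal hedged sketch:
# new_doc_bow = dictionary.doc2bow("corona cases update".lower().split())
# new_doc_tfidf = tfidf[new_doc_bow]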
###Output
_____no_output_____
###Markdown
LDA:https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation Latent Dirichlet Allocation, LDA is yet another transformation from bag-of-words counts into a topic space of lower dimensionality. LDA is a probabilistic extension of LSA (also called multinomial PCA), so LDA's topics can be interpreted as probability distributions over words. These distributions are, just like with LSA, inferred automatically from a training corpus. Documents are in turn interpreted as a (soft) mixture of these topics (again, just like with LSA).
###Code
total_topics = 5
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=total_topics)
corpus_lda = lda[corpus_tfidf] # create a double wrapper over the original corpus: bow->tfidf->fold-in-lsi
#Show first n important word in the topics:
lda.show_topics(total_topics,5)
data_lda = {i: OrderedDict(lda.show_topic(i,25)) for i in range(total_topics)}
#data_lda
df_lda = pd.DataFrame(data_lda)
df_lda = df_lda.fillna(0).T
print(df_lda.shape)
df_lda
g=sns.clustermap(df_lda.corr(), center=0, standard_scale=1, cmap="RdBu", metric='cosine', linewidths=.75, figsize=(15, 15))
plt.setp(g.ax_heatmap.yaxis.get_majorticklabels(), rotation=0)
plt.show()
#plt.setp(ax_heatmap.get_yticklabels(), rotation=0) # For y axis
pyLDAvis.enable_notebook()
panel = pyLDAvis.gensim.prepare(lda, corpus_lda, dictionary, mds='tsne')
panel
###Output
_____no_output_____
###Markdown
Latent Dirichlet Allocation LDA WikifetcherFetches raw text from Wikipedia using search terms LDAbuilderRuns LDA on the given document list (raw-text list from Wikifetcher) ExecutionIn addition, the execution time is measured for each execution block. Configuration - We need access to Wikipedia for the raw text- Natural Language Toolkit NLTK for tokenization and stemming- Stop_words, to remove uninformative words- Gensim for the implementation of Latent Dirichlet Allocation LDA
###Code
import wikipedia
import time
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
import re
import warnings
warnings.filterwarnings(action='ignore', category=UserWarning, module='gensim')
from gensim import corpora, models
start = time.time()
sentence_pat = re.compile(r'([A-Z][^\.!?]*[\.!?])', re.M)
tokenizer = RegexpTokenizer(r'\w+')
# Create the English stop words list
en_stop = get_stop_words('en')
# Create p_stemmer of class PorterStemmer
p_stemmer = PorterStemmer()
doc_list = []
wikipedia.set_lang('en')
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
###Output
Ausfรผhrungszeit: 0.001001 s
###Markdown
Wikipedia ContentUsing search terms, we fetch the raw content from Wikipedia.The content is then added to the document list (a split into sentences is available but commented out in the code below).
###Code
def get_page(name):
first_found = wikipedia.search(name)[0]
try:
return(wikipedia.page(first_found).content)
except wikipedia.exceptions.DisambiguationError as e:
return(wikipedia.page(e.options[0]).content)
start = time.time()
search_terms = ['Nature', 'Volcano', 'Ocean', 'Landscape', 'Earth', 'Animals']
separator = '== References =='
for term in search_terms:
full_content = get_page(term).split(separator, 1)[0]
# sentence_list = sentence_pat.findall(full_content)
#for sentence in sentence_list:
doc_list.append(full_content)
print(full_content[0:1000] + '...')
print('---')
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
###Output
Nature, in the broadest sense, is the natural, physical, or material world or universe. "Nature" can refer to the phenomena of the physical world, and also to life in general. The study of nature is a large part of science. Although humans are part of nature, human activity is often understood as a separate category from other natural phenomena.
The word nature is derived from the Latin word natura, or "essential qualities, innate disposition", and in ancient times, literally meant "birth". Natura is a Latin translation of the Greek word physis (ฯฯฯฮนฯ), which originally related to the intrinsic characteristics that plants, animals, and other features of the world develop of their own accord. The concept of nature as a whole, the physical universe, is one of several expansions of the original notion; it began with certain core applications of the word ฯฯฯฮนฯ by pre-Socratic philosophers, and has steadily gained currency ever since. This usage continued during the advent of modern scienti...
---
A volcano is a rupture in the crust of a planetary-mass object, such as Earth, that allows hot lava, volcanic ash, and gases to escape from a magma chamber below the surface.
Earth's volcanoes occur because its crust is broken into 17 major, rigid tectonic plates that float on a hotter, softer layer in its mantle. Therefore, on Earth, volcanoes are generally found where tectonic plates are diverging or converging, and most are found underwater. For example, a mid-oceanic ridge, such as the Mid-Atlantic Ridge, has volcanoes caused by divergent tectonic plates whereas the Pacific Ring of Fire has volcanoes caused by convergent tectonic plates. Volcanoes can also form where there is stretching and thinning of the crust's plates, e.g., in the East African Rift and the Wells Gray-Clearwater volcanic field and Rio Grande Rift in North America. This type of volcanism falls under the umbrella of "plate hypothesis" volcanism. Volcanism away from plate boundaries has also been explained as mantl...
---
An ocean (from Ancient Greek แฝจฮบฮตฮฑฮฝฯฯ, transc. Okeanรณs, the sea of classical antiquity) is a body of saline water that composes much of a planet's hydrosphere. On Earth, an ocean is one of the major conventional divisions of the World Ocean. These are, in descending order by area, the Pacific, Atlantic, Indian, Southern (Antarctic), and Arctic Oceans. The word sea is often used interchangeably with "ocean" in American English but, strictly speaking, a sea is a body of saline water (generally a division of the world ocean) partly or fully enclosed by land.
Saline water covers approximately 360,000,000 km2 (140,000,000 sq mi) and is customarily divided into several principal oceans and smaller seas, with the ocean covering approximately 71% of Earth's surface and 90% of the Earth's biosphere. The ocean contains 97% of Earth's water, and oceanographers have stated that less than 5% of the World Ocean has been explored. The total volume is approximately 1.35 billion cubic kilometers (320 mi...
---
A landscape is the visible features of an area of land, its landforms and how they integrate with natural or man-made features.
A landscape includes the physical elements of geophysically defined landforms such as (ice-capped) mountains, hills, water bodies such as rivers, lakes, ponds and the sea, living elements of land cover including indigenous vegetation, human elements including different forms of land use, buildings and structures, and transitory elements such as lighting and weather conditions.
Combining both their physical origins and the cultural overlay of human presence, often created over millennia, landscapes reflect a living synthesis of people and place that is vital to local and national identity. The character of a landscape helps define the self-image of the people who inhabit it and a sense of place that differentiates one region from other regions. It is the dynamic backdrop to peopleโs lives. Landscape can be as varied as farmland, a landscape park, or wilderness...
---
Earth is the third planet from the Sun and the only object in the Universe known to harbor life. According to radiometric dating and other sources of evidence, Earth formed over 4 billion years ago. Earth's gravity interacts with other objects in space, especially the Sun and the Moon, Earth's only natural satellite. Earth revolves around the Sun in 365.26 days, a period known as an Earth year. During this time, Earth rotates about its axis about 366.26 times.
Earth's axis of rotation is tilted, producing seasonal variations on the planet's surface. The gravitational interaction between the Earth and Moon causes ocean tides, stabilizes the Earth's orientation on its axis, and gradually slows its rotation. Earth is the densest planet in the Solar System and the largest of the four terrestrial planets.
Earth's lithosphere is divided into several rigid tectonic plates that migrate across the surface over periods of many millions of years. About 71% of Earth's surface is covered with water...
---
Animals are eukaryotic, multicellular organisms that form the biological kingdom Animalia. With few exceptions, animals are motile (able to move), heterotrophic (consume organic material), reproduce sexually, and their embryonic development includes a blastula stage. The body plan of the animal derives from this blastula, differentiating specialized tissues and organs as it develops; this plan eventually becomes fixed, although some undergo metamorphosis at some stage in their lives.
Zoology is the study of animals. Currently there are over 66 thousand (less than 5% of all animals) vertebrate species, and over 1.3 million (over 95% of all animals) invertebrate species in existence. Classification of animals into groups (taxonomy) is accomplished using either the hierarchical Linnaean system; or cladistics, which displays diagrams (phylogenetic trees) called cladograms to show relationships based on the evolutionary principle of the most recent common ancestor. Some recent classificatio...
---
Ausfรผhrungszeit: 8.894520 s
###Markdown
PreprocessingThe text is now tokenized, optionally stemmed, and useless words are removed
###Code
num_topics = 5
num_words_per_topic = 20
texts = []
import pandas as pd
start = time.time()
for doc in doc_list:
raw = doc.lower()
    # Create tokens
    tokens = tokenizer.tokenize(raw)
    # Remove uninformative words (stop words)
    stopped_tokens = [i for i in tokens if not i in en_stop]
    # Stem tokens - removes duplicates and transforms words to their base form (optional)
# stemmed_tokens = [p_stemmer.stem(i) for i in stopped_tokens]
texts.append(stopped_tokens)
output_preprocessed = pd.Series(texts)
print(output_preprocessed)
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
###Output
0 [nature, broadest, sense, natural, physical, m...
1 [volcano, rupture, crust, planetary, mass, obj...
2 [ocean, ancient, greek, แฝ ฮบฮตฮฑฮฝฯฯ, transc, okean...
3 [landscape, visible, features, area, land, lan...
4 [earth, third, planet, sun, object, universe, ...
5 [animals, eukaryotic, multicellular, organisms...
dtype: object
Ausfรผhrungszeit: 0.062492 s
###Markdown
Dictionary and vectorsIn this section the bag-of-words corpus is created. The vectors are needed later for the LDA model
###Code
start = time.time()
# Create a dictionary
dictionary = corpora.Dictionary(texts)
# Convert the documents into bag-of-words vectors using the dictionary
# corpus is a list of vectors - each document vector is a series of (token_id, count) tuples
corpus = [dictionary.doc2bow(text) for text in texts]
output_vectors = pd.Series(corpus)
print(dictionary)
print('---')
print(output_vectors)
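# (Hedged illustration) each tuple is (token_id, count); the dictionary maps ids back to tokens, e.g.:
# readable_doc0 = [(dictionary[token_id], count) for token_id, count in corpus[0][:5]]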
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
###Output
Dictionary(5354 unique tokens: ['nature', 'broadest', 'sense', 'natural', 'physical']...)
---
0 [(0, 51), (1, 2), (2, 1), (3, 32), (4, 9), (5,...
1 [(3, 2), (5, 6), (6, 1), (8, 28), (9, 2), (11,...
2 [(3, 4), (4, 2), (5, 1), (6, 15), (8, 12), (11...
3 [(0, 10), (2, 4), (3, 15), (4, 10), (5, 2), (6...
4 [(0, 2), (2, 1), (3, 7), (4, 3), (5, 6), (6, 1...
5 [(5, 2), (6, 2), (8, 5), (9, 1), (10, 1), (11,...
dtype: object
Ausfรผhrungszeit: 0.062440 s
###Markdown
LDA modelFinally, the LDA model can be applied. The parameters passed to it are the list of vectors, the number of topics, the dictionary, and the number of passes.During the training phase a higher number of passes (>= 20) should be chosen.
###Code
start = time.time()
# Apply the LDA model
ldamodel = models.ldamodel.LdaModel(corpus, num_topics=num_topics, id2word = dictionary, passes=50)
lda = ldamodel.print_topics(num_topics=num_topics, num_words=num_words_per_topic)
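# (Hedged illustration) the topic mixture of an individual document can be inspected as well, e.g.:
# doc_topics = ldamodel.get_document_topics(corpus[0])  # list of (topic_id, probability) tuples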
for topic in lda:
for entry in topic:
print(entry)
print('---')
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
###Output
0
---
0.032*"earth" + 0.018*"s" + 0.008*"sun" + 0.008*"surface" + 0.005*"solar" + 0.005*"atmosphere" + 0.005*"moon" + 0.005*"1" + 0.005*"life" + 0.004*"water" + 0.004*"years" + 0.004*"land" + 0.004*"million" + 0.004*"5" + 0.003*"oceans" + 0.003*"year" + 0.003*"3" + 0.003*"energy" + 0.003*"field" + 0.003*"crust"
---
1
---
0.011*"water" + 0.010*"ocean" + 0.009*"animals" + 0.007*"earth" + 0.007*"surface" + 0.006*"life" + 0.005*"nature" + 0.005*"also" + 0.005*"zone" + 0.005*"oceans" + 0.005*"s" + 0.004*"species" + 0.004*"can" + 0.004*"natural" + 0.004*"human" + 0.004*"animal" + 0.004*"may" + 0.003*"world" + 0.003*"called" + 0.003*"within"
---
2
---
0.036*"landscape" + 0.009*"landscapes" + 0.007*"s" + 0.006*"painting" + 0.006*"poetry" + 0.006*"century" + 0.005*"human" + 0.004*"chinese" + 0.004*"cultural" + 0.004*"english" + 0.004*"land" + 0.004*"also" + 0.004*"natural" + 0.004*"garden" + 0.004*"art" + 0.003*"people" + 0.003*"can" + 0.003*"gardens" + 0.003*"term" + 0.003*"many"
---
3
---
0.019*"volcanoes" + 0.014*"volcanic" + 0.010*"lava" + 0.008*"volcano" + 0.007*"s" + 0.006*"can" + 0.006*"earth" + 0.006*"eruptions" + 0.006*"eruption" + 0.006*"years" + 0.005*"also" + 0.005*"activity" + 0.005*"surface" + 0.004*"active" + 0.004*"ash" + 0.004*"may" + 0.004*"extinct" + 0.004*"erupted" + 0.004*"flows" + 0.004*"mount"
---
4
---
0.000*"earth" + 0.000*"landscape" + 0.000*"s" + 0.000*"volcanoes" + 0.000*"water" + 0.000*"surface" + 0.000*"also" + 0.000*"can" + 0.000*"ocean" + 0.000*"volcanic" + 0.000*"lava" + 0.000*"life" + 0.000*"years" + 0.000*"animals" + 0.000*"oceans" + 0.000*"within" + 0.000*"natural" + 0.000*"many" + 0.000*"nature" + 0.000*"may"
---
Ausfรผhrungszeit: 20.614590 s
###Markdown
VisualizationWith pyLDAvis
###Code
import pyLDAvis.gensim
# avoid deprecation warnings from pyLDAvis
warnings.simplefilter("ignore", DeprecationWarning)
start = time.time()
pyLDAvis.enable_notebook()
vis_data = pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary)
end = time.time()
print('Ausfรผhrungszeit: %f' %(end-start) + ' s')
pyLDAvis.display(vis_data)
###Output
_____no_output_____ |
notebooks/Reinforcement_Learning_Exploitation_Demo.ipynb | ###Markdown
> **How to run this notebook (command-line)?**1. Install the `ReinventCommunity` environment:`conda env create -f environment.yml`2. Activate the environment:`conda activate ReinventCommunity`3. Execute `jupyter`:`jupyter notebook`4. Copy the link to a browser `REINVENT 3.0`: reinforcement learning exploitation demoThis demo illustrates how to set up a `REINVENT` run to optimize molecules that are active against _Aurora_ kinase. Here we use a predictive model as the main component to guide the generation of the molecules. We also include a `qed_score` component to encourage the generation of more "drug-like" molecules. 1. Set up the paths_Please update the following code block such that it reflects your system's installation and execute it._
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/reinventcli")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent.v3.0")
output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_Exploitation_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
2. Setting up the configuration In the cells below we will build a nested dictionary object that will eventually be converted to a JSON file, which in turn will be consumed by `REINVENT`. You can find this file in your `output_dir` location. A) Declare the run type
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "reinforcement_learning" # other run types: "sampling", "validation",
# "transfer_learning",
# "scoring" and "create_model"
}
###Output
_____no_output_____
###Markdown
B) Sort out the logging detailsThis includes `result_folder` path where the results will be produced.Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. if the `recipient` value differs from `"local"` `REINVENT` will attempt to POST the data to the specified `recipient`.
###Code
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_frequency": 10, # log every x-th steps
"logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard
"result_folder": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries
"job_name": "Reinforcement learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint
}
###Output
_____no_output_____
###Markdown
Create `"parameters"` field
###Code
# add the "parameters" block
configuration["parameters"] = {}
###Output
_____no_output_____
###Markdown
C) Set Diversity FilterDuring each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored smiles are written out to a file in the results folder `scaffold_memory.csv`. In the example here we are not using any filter by setting it to `"NoFilter"`. This will lead to exploitation of the chemical space in the vicinity of the local optimum for the defined scoring function. The scoring function will likely reach a higher overall score sooner than in the exploration scenario.For exploratory behavior the diversity filter below should be set to any of the listed alternatives `"IdenticalTopologicalScaffold"`, `"IdenticalMurckoScaffold"` or `"ScaffoldSimilarity"`. This will boost the diversity of generated solutions. The maximum value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse solutions rather than only optimizing the best ones found so far. The number of generated compounds should be higher than in the exploitation scenario.
###Code
# add a "diversity_filter"
configuration["parameters"]["diversity_filter"] = {
"name": "NoFilter", # other options are: "IdenticalTopologicalScaffold",
# "IdenticalMurckoScaffold" and "ScaffoldSimilarity"
# -> use "NoFilter" to disable this feature
"nbmax": 25, # the bin size; penalization will start once this is exceeded
"minscore": 0.4, # the minimum total score to be considered for binning
"minsimilarity": 0.4 # the minimum similarity to be placed into the same bin
}
###Output
_____no_output_____
###Markdown
D) Set Inception* `smiles`: provide a list of SMILES to be incepted * `memory_size`: the number of SMILES allowed in the inception memory* `sample_size`: the number of SMILES that can be sampled from the inception memory at each reinforcement learning step (an illustrative example of a seeded inception block follows below)
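For reference, a seeded inception block could look like the dictionary below. This is purely illustrative and is not assigned to the configuration used in this demo; the SMILES are arbitrary placeholders rather than suggestions for the Aurora kinase task.
```
# illustration only - not wired into `configuration` in this demo
example_inception = {
    "smiles": [
        "CC(=O)Oc1ccccc1C(=O)O",        # placeholder SMILES (aspirin)
        "Cn1cnc2c1c(=O)n(C)c(=O)n2C"    # placeholder SMILES (caffeine)
    ],
    "memory_size": 100,                 # how many molecules are to be remembered
    "sample_size": 10                   # how many are sampled each epoch from the memory
}
```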
###Code
# prepare the inception (we do not use it in this example, so "smiles" is an empty list)
configuration["parameters"]["inception"] = {
"smiles": [], # fill in a list of SMILES here that can be used (or leave empty)
"memory_size": 100, # sets how many molecules are to be remembered
"sample_size": 10 # how many are to be sampled each epoch from the memory
}
###Output
_____no_output_____
###Markdown
E) Set the general Reinforcement Learning parameters* `n_steps` is the number of Reinforcement Learning steps to perform. Best start with 1000 steps and see if that's enough.* `agent` is the generative model that undergoes transformation during the Reinforcement Learning run.We recommend keeping the other parameters at their default values; a small numerical illustration of how `sigma` enters the run follows below.
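As a rough orientation for `sigma`: the "augmented likelihood" mentioned in the comment below combines the prior likelihood and the total score, and the `Prior`/`Target`/`Score` columns of the step print-outs further down are consistent with `Target ≈ Prior + sigma * Score`. A back-of-the-envelope check using values from the step-0 print-out (the small deviation comes from the score being rounded in the log):
```
sigma = 128
prior_likelihood = -35.59   # "Prior" column, step 0, second row of the print-out below
total_score = 0.44          # "Score" column (rounded in the log)
print(prior_likelihood + sigma * total_score)   # ~20.7, close to the printed "Target" of 21.37
```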
###Code
# set all "reinforcement learning"-specific run parameters
configuration["parameters"]["reinforcement_learning"] = {
"prior": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model
"agent": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model
"n_steps": 1000, # the number of epochs (steps) to be performed; often 1000
"sigma": 128, # used to calculate the "augmented likelihood", see publication
"learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch
"batch_size": 128, # specifies how many molecules are generated per epoch
"reset": 0, # if not '0', the reset the agent if threshold reached to get
# more diverse solutions
"reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold
"margin_threshold": 50 # specify the (positive) margin between agent and prior
}
###Output
_____no_output_____
###Markdown
F) Define the scoring functionWe will use a `custom_product` type. The component types included are:* `predictive_property` which is the target activity against _Aurora_ kinase represented by the predictive `regression` model. Note that we set the weight of this component to be the highest.* `qed_score` is the implementation of QED in RDKit. It biases the generation of molecules towards more "drug-like" space. Depending on the use case this can have a beneficial or detrimental effect.* `custom_alerts` the `"smiles"` field can hold either SMILES or SMARTS patterns (see the short RDKit illustration below)Note: The model used in this example is a regression model
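To see what the QED and custom-alerts components evaluate, the standalone snippet below computes them directly with RDKit (assumed to be available in this environment) for one arbitrary placeholder molecule and two of the alert SMARTS from the list further down; in the scoring function, a SMARTS match sets the `custom_alerts` contribution to zero, which in turn zeroes the total score.
```
from rdkit import Chem
from rdkit.Chem import QED

mol = Chem.MolFromSmiles("CC(=O)Nc1ccc(O)cc1")   # arbitrary example molecule
print("QED:", round(QED.qed(mol), 3))

for smarts in ["[*;r8]", "C#C"]:                 # two of the alert patterns used below
    print(smarts, "->", mol.HasSubstructMatch(Chem.MolFromSmarts(smarts)))
```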
###Code
# prepare the scoring function definition and add at the end
scoring_function = {
"name": "custom_product", # this is our default one (alternative: "custom_sum")
"parallel": False, # sets whether components are to be executed
# in parallel; note, that python uses "False" / "True"
# but the JSON "false" / "true"
# the "parameters" list holds the individual components
"parameters": [
# add component: an activity model
{
"component_type": "predictive_property", # this is a scikit-learn model, returning
# activity values
"name": "Aurora kinase", # arbitrary name for the component
"weight": 6, # the weight ("importance") of the component (default: 1)
"specific_parameters": {
"model_path": os.path.join(ipynb_path, "models/Aurora_model.pkl"), # absolute model path
"transformation": {
"transformation_type": "sigmoid", # see description above
"high": 9, # parameter for sigmoid transformation
"low": 4, # parameter for sigmoid transformation
"k": 0.25 # parameter for sigmoid transformation
},
"scikit": "regression", # model can be "regression" or "classification"
"descriptor_type": "ecfp_counts", # sets the input descriptor for this model
"size": 2048, # parameter of descriptor type
"radius": 3, # parameter of descriptor type
"use_counts": True, # parameter of descriptor type
"use_features": True # parameter of descriptor type
}
},
# add component: QED
{
"component_type": "qed_score", # this is the QED score as implemented in RDKit
"name": "QED", # arbitrary name for the component
"weight": 2 # the weight ("importance") of the component (default: 1)
},
# add component: enforce to NOT match a given substructure
{
"component_type": "custom_alerts",
"name": "Custom alerts", # arbitrary name for the component
"weight": 1, # the weight of the component (default: 1)
"specific_parameters": {
"smiles": [ # specify the substructures (as list) to penalize
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
]
}
}]
}
configuration["parameters"]["scoring_function"] = scoring_function
###Output
_____no_output_____
###Markdown
NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take a considerably higher number of steps 3. Write out the configuration We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place.
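As a tiny illustration of that `dict` -> `JSON` translation (standard behaviour of Python's `json` module):
```
import json
print(json.dumps({"parallel": False, "use_counts": True, "model_path": None}))
# {"parallel": false, "use_counts": true, "model_path": null}
```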
###Code
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "RL_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
4. Run `REINVENT`Now it is time to execute `REINVENT` locally. Note that depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. The command-line execution looks like this (with the paths from the variables defined above):
```
# activate environment
conda activate reinvent.v3.0
# execute REINVENT
python {reinvent_dir}/input.py {configuration_JSON_path}
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!{reinvent_env}/bin/python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
# prepare the output to be parsed
list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL)
data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]]
data = ["\n".join(element.splitlines()[:-1]) for element in data]
###Output
_____no_output_____
###Markdown
Below you see the print-out of the first epoch, one from the middle and the last one, respectively. Note that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time.
###Code
for element in data:
print(element)
###Output
INFO
Step 0 Fraction valid SMILES: 96.1 Score: 0.2655 Time elapsed: 0 Time left: 0.0
Agent Prior Target Score SMILES
-51.31 -51.31 -51.31 0.00 C(C(CC=C(CCC=C(CCC(=O)O)C=C(CC=CCCC=C(C)C)C)=O)C)=C
-35.59 -35.59 21.37 0.44 c1cc(C(=O)NC(C)c2ccc(OC3CCN(c4ccc(OCC5C(F)(F)C5)cn4)CC3O)cc2)c(OC)nc1
-27.17 -27.17 -27.17 0.00 c1c(Cl)ccc2c(=Nc3c(Cl)cc(OC)cc3)c(C(OCC)=O)c[nH]c12
-32.39 -32.39 -32.39 0.00 C(=O)(OCC)C1(C(C)=NN)CC2c3c(cccc3)C1c1ccccc12
-26.54 -26.54 19.96 0.36 C1(=O)C(Oc2ccc(C(N)=N)cc2)(CC)CCC1O
-22.56 -22.56 30.60 0.42 C(NS(c1ccc(NC(=O)C)cc1)(=O)=O)Cc1cc(C)ccc1
-32.63 -32.63 18.67 0.40 c1(CNC(=O)CN2C(=O)C3N(CCOC)CCC3O2)c2c(ccc1)cccc2
-28.76 -28.76 23.42 0.41 O=C(N(C)CC(Nc1c(C)cc(Br)cc1)=O)C1(CC)C(Cl)(Cl)C1C
-32.71 -32.71 24.86 0.45 O=S(c1ccc(C(=O)N)cc1)(=O)Oc1c(NC(c2ccco2)=O)cc(Cl)cc1
-32.85 -32.85 -32.85 0.00 N=c1[nH]c2nc(-c3ccc(CCNC(CCC(=O)O)=O)cc3)cnc2c(=N)[nH]1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.0 0.0 0.0 0.0
0.46610119938850403 0.38718461990356445 1.0 6.382042407989502
0.37139689922332764 0.644282877445221 0.0 6.042923450469971
0.3336728513240814 0.40394356846809387 0.0 5.899266719818115
0.31423062086105347 0.5612077116966248 1.0 5.822140693664551
0.3267623484134674 0.8531444668769836 1.0 5.872127056121826
0.3209114670753479 0.7809154391288757 1.0 5.848917484283447
0.3407776951789856 0.6974791288375854 1.0 5.926878452301025
0.4110566973686218 0.5894344449043274 1.0 6.187656402587891
0.3114776313304901 0.34777504205703735 0.0 5.8110175132751465
INFO
Step 72 Fraction valid SMILES: 99.2 Score: 0.3254 Time elapsed: 44 Time left: 559.3
Agent Prior Target Score SMILES
-21.50 -21.70 52.27 0.42 c1(C(Nc2ccccc2OC)=O)ccc(NC(=O)C2CC2)cc1
-29.32 -30.07 -30.07 0.00 c1c(S(=O)(N)=O)ccc(Cl)c1C(NNC(c1oc2c(cccc2)c1)=O)=O
-19.69 -20.35 54.20 0.42 Cc1sc2n(n1)cc(-c1cc3c(cc1)OCCO3)n2
-30.97 -30.64 44.41 0.42 O=S(=O)(CC)N1c2c(cc(OC)c(OC)c2)CC1(C)C
-25.06 -25.66 42.32 0.38 Clc1ccc(-c2cccc(-c3c(N)c(O)oc3)c2)cc1
-32.35 -33.76 56.65 0.51 C(c1cc(F)c(F)cc1F)C(NC(C)C)C(N=c1[nH]cc(Cl)s1)=O
-22.32 -23.29 47.97 0.40 Fc1ccc(Oc2ccc(COc3nc(=O)n(C)c(N4CCOCC4)c3)cc2F)cc1
-35.10 -35.53 36.74 0.41 c1c2c(cc(F)c1)-c1c(c3cc([N+](=O)[O-])ccc3n1C)C(CCC)(C)NC2=O
-23.14 -23.47 40.64 0.36 N(C(=O)CCN1CCOCC1)c1ccc2c(c1)c(C)cc(N1CCN(C)CC1)n2
-29.77 -29.73 35.43 0.37 c1(-c2sc(C(=O)CC(C)C)cc2)cn(Cc2ccccc2)nn1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.322376549243927 0.890160858631134 1.0 5.854750156402588
0.3206542730331421 0.5836869478225708 0.0 5.8478922843933105
0.35576075315475464 0.683357834815979 1.0 5.984221458435059
0.33314836025238037 0.8549820780754089 1.0 5.897216796875
0.30833700299263 0.7257052063941956 1.0 5.798262119293213
0.43912962079048157 0.7861695885658264 1.0 6.2874603271484375
0.34998154640197754 0.5994198322296143 1.0 5.962238788604736
0.37158986926078796 0.5297967791557312 1.0 6.043641567230225
0.2724348306655884 0.8323323130607605 1.0 5.6467814445495605
0.3045485317707062 0.6354219317436218 1.0 5.782779216766357
INFO
Step 121 Fraction valid SMILES: 99.2 Score: 0.3721 Time elapsed: 74 Time left: 533.2
Agent Prior Target Score SMILES
-31.75 -30.82 38.81 0.39 Cc1c(C(NCC2(N)CCS(=O)(=O)CC2)=O)cc(Cl)cc1
-18.79 -20.73 37.41 0.33 c12c([n+]([O-])c(-c3cccs3)c(C)[n+]1[O-])cccc2
-17.08 -18.10 61.54 0.45 FC(c1cc(N2C(=O)N(C)C(c3ccc(C#N)cc3)C3=C2CCC3=O)ccc1)(F)F
-20.02 -21.80 53.73 0.42 C1N(C(C)C)CCC(Oc2c(OC)ccc(C(NC3CCCC3)=O)c2)C1
-25.74 -27.65 45.30 0.41 c1n[nH]c(=NS(=O)(=O)c2ccc(Oc3c(-c4ccn[nH]4)cc(F)cc3)c(C#N)c2)s1
-21.43 -23.16 -23.16 0.00 N1N=C(c2ccc(C)cc2)CC1c1c(C)nn(-c2ccccc2)c1Cl
-33.71 -33.20 -33.20 0.00 n1(CC)c2ccccc2c2c3c(c4c5c(cccc5)[nH]c41)cccc3C(=O)C2=O
-19.83 -21.49 49.54 0.40 C1N(C=C2C(=O)c3ccccc3C2=O)CCN(Cc2ccccc2)C1
-28.13 -28.89 43.99 0.41 c1nc2[nH]c(SC)nc(=Nc3ccc(CC)cc3)c2cc1C#N
-22.24 -22.75 51.72 0.42 c1(O)c(C(NC(C)C2CCCCC2)=O)cc(Cl)cc1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.2992677688598633 0.8734317421913147 1.0 5.761015892028809
0.28419438004493713 0.4957776963710785 1.0 5.697640895843506
0.3830990195274353 0.7126726508140564 1.0 6.086191177368164
0.33775582909584045 0.8417723178863525 1.0 5.9151692390441895
0.38679444789886475 0.48748698830604553 1.0 6.099748134613037
0.39176592230796814 0.7507633566856384 0.0 6.1179118156433105
0.0 0.0 0.0 0.0
0.34084388613700867 0.6400846242904663 1.0 5.9271345138549805
0.3619540333747864 0.5925776362419128 1.0 6.007602691650391
0.32573503255844116 0.8865053057670593 1.0 5.868067741394043
###Markdown
5. Analyse the resultsIn order to analyze the run in a more intuitive way, we can use `tensorboard`:
```
# go to the root folder of the output
cd /REINVENT_RL_demo
# make sure you have activated the proper environment
conda activate reinvent.v3.0
# start tensorboard
tensorboard --logdir progress.log
```
Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are example plots - remember that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. The score for predicted Aurora Kinase activity. The average score over time. It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time.And last but not least, there is an "Images" tab available that lets you browse through the compounds generated in an easy way. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format).
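If you prefer staying in the notebook instead of `tensorboard`, a small `pandas` sketch like the one below lets you inspect the top-scoring compounds directly. It assumes the run above has finished and produced `results/memory.csv` (previewed with `head` right after) and reuses the `output_dir` variable defined at the top of the notebook.
```
import os
import pandas as pd

memory = pd.read_csv(os.path.join(output_dir, "results", "memory.csv"), index_col=0)
print(memory.shape)
print(memory.sort_values("score", ascending=False).head()[["smiles", "score"]])
```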
###Code
!head -n 15 {output_dir}/results/memory.csv
###Output
,smiles,score,likelihood
27,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC1O,0.8451204,-42.869774
22,C1N(c2ncncc2-c2cn(CC4OCCN(C)CC4)nc2)CCCC1O,0.8451204,-53.072266
19,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CC1,0.8451204,-45.846977
50,C1CN(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)C1,0.8451204,-45.26066
61,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCC1,0.8451204,-45.653194
60,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC(O)C1,0.8451204,-43.792747
55,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCCC1O,0.84456897,-48.738205
112,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCC1,0.84456897,-49.809258
92,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCCC1,0.84456897,-52.195297
107,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCNCC1,0.8443355,-43.1882
51,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCNCCC1,0.8443355,-43.12227
70,N1CCN(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCC1,0.8443355,-44.6098
62,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCNCC1,0.8443355,-46.611633
1,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC1=N,0.8419696,-52.32989
###Markdown
> **How to run this notebook (command-line)?**1. Install the `ReinventCommunity` environment:`conda env create -f environment.yml`2. Activate the environment:`conda activate ReinventCommunity`3. Execute `jupyter`:`jupyter notebook`4. Copy the link to a browser `REINVENT 3.0`: reinforcement learning exploitation demoThis demo illustrates how to set up a `REINVENT` run to optimize molecules that are active against _Aurora_ kinase. Here we use a predictive model as the main component to guide the generation of the molecules. We also include a `qed_score` component to stimulate the generation of more "drug-like" molecules. 1. Set up the paths_Please update the following code block such that it reflects your system's installation and execute it._
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent.v3.0")
output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_Exploitation_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
2. Setting up the configuration In the cells below we will build a nested dictionary object that will eventually be converted to a JSON file, which in turn will be consumed by `REINVENT`. You can find this file in your `output_dir` location. A) Declare the run type
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "reinforcement_learning" # other run types: "sampling", "validation",
# "transfer_learning",
# "scoring" and "create_model"
}
###Output
_____no_output_____
###Markdown
B) Sort out the logging detailsThis includes the `result_folder` path where the results will be produced.Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. If the `recipient` value differs from `"local"`, `REINVENT` will attempt to POST the data to the specified `recipient`.
###Code
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_frequency": 10, # log every x-th steps
"logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard
"result_folder": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries
"job_name": "Reinforcement learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint
}
###Output
_____no_output_____
###Markdown
Create `"parameters"` field
###Code
# add the "parameters" block
configuration["parameters"] = {}
###Output
_____no_output_____
###Markdown
C) Set Diversity FilterDuring each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored SMILES are written out to the `scaffold_memory.csv` file in the results folder. In this example we are not using any filter, setting it to `"NoFilter"`. This will lead to exploitation of the chemical space in the vicinity of the local optimum for the defined scoring function. The scoring function will likely reach a higher overall score sooner than in the exploration scenario.For exploratory behavior the diversity filter below should be set to any of the listed alternatives `"IdenticalTopologicalScaffold"`, `"IdenticalMurckoScaffold"` or `"ScaffoldSimilarity"`. This will boost the diversity of the generated solutions. The maximum value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse solutions rather than only optimizing the best ones found so far. The number of generated compounds should be higher in comparison to the exploitation scenario.
###Code
# add a "diversity_filter"
configuration["parameters"]["diversity_filter"] = {
"name": "NoFilter", # other options are: "IdenticalTopologicalScaffold",
# "IdenticalMurckoScaffold" and "ScaffoldSimilarity"
# -> use "NoFilter" to disable this feature
"nbmax": 25, # the bin size; penalization will start once this is exceeded
"minscore": 0.4, # the minimum total score to be considered for binning
"minsimilarity": 0.4 # the minimum similarity to be placed into the same bin
}
###Output
_____no_output_____
###Markdown
D) Set Inception* `smiles`: provide a list of SMILES to be incepted * `memory_size`: the number of SMILES allowed in the inception memory* `sample_size`: the number of SMILES that can be sampled from the inception memory at each reinforcement learning step
###Code
# prepare the inception (we do not use it in this example, so "smiles" is an empty list)
configuration["parameters"]["inception"] = {
"smiles": [], # fill in a list of SMILES here that can be used (or leave empty)
"memory_size": 100, # sets how many molecules are to be remembered
"sample_size": 10 # how many are to be sampled each epoch from the memory
}
###Output
_____no_output_____
###Markdown
E) Set the general Reinforcement Learning parameters* `n_steps` is the number of Reinforcement Learning steps to perform. Best start with 1000 steps and see if that's enough.* `agent` is the generative model that undergoes transformation during the Reinforcement Learning run.We recommend keeping the other parameters at their default values.
###Code
# set all "reinforcement learning"-specific run parameters
configuration["parameters"]["reinforcement_learning"] = {
"prior": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model
"agent": os.path.join(ipynb_path, "models/random.prior.new"), # path to the pre-trained model
"n_steps": 1000, # the number of epochs (steps) to be performed; often 1000
"sigma": 128, # used to calculate the "augmented likelihood", see publication
"learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch
"batch_size": 128, # specifies how many molecules are generated per epoch
"reset": 0, # if not '0', the reset the agent if threshold reached to get
# more diverse solutions
"reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold
"margin_threshold": 50 # specify the (positive) margin between agent and prior
}
###Output
_____no_output_____
###Markdown
F) Define the scoring functionWe will use a `custom_product` type (a conceptual sketch of this product-style aggregation follows below). The component types included are:* `predictive_property` which is the target activity against _Aurora_ kinase represented by the predictive `regression` model. Note that we set the weight of this component to be the highest.* `qed_score` is the implementation of QED in RDKit. It biases the generation of molecules towards more "drug-like" space. Depending on the use case this can have a beneficial or detrimental effect.* `custom_alerts` the `"smiles"` field can hold either SMILES or SMARTS patternsNote: The model used in this example is a regression model
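Conceptually, a `custom_product` with weights 6/2/1 acts like a weighted geometric mean of the component scores (each in [0, 1]). The sketch below only conveys that intuition with made-up component scores; the exact aggregation, normalisation and penalty handling are defined by REINVENT's own scoring code, so do not expect it to reproduce the logged totals exactly.
```
def weighted_product(scores, weights):
    # weighted geometric mean: each score contributes proportionally to its weight
    total = sum(weights)
    result = 1.0
    for s, w in zip(scores, weights):
        result *= s ** (w / total)
    return result

# hypothetical component scores: activity, QED, custom alerts
print(round(weighted_product([0.47, 0.39, 1.0], [6, 2, 1]), 3))
```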
###Code
# prepare the scoring function definition and add at the end
scoring_function = {
"name": "custom_product", # this is our default one (alternative: "custom_sum")
"parallel": False, # sets whether components are to be executed
# in parallel; note, that python uses "False" / "True"
# but the JSON "false" / "true"
# the "parameters" list holds the individual components
"parameters": [
# add component: an activity model
{
"component_type": "predictive_property", # this is a scikit-learn model, returning
# activity values
"name": "Aurora kinase", # arbitrary name for the component
"weight": 6, # the weight ("importance") of the component (default: 1)
"specific_parameters": {
"model_path": os.path.join(ipynb_path, "models/Aurora_model.pkl"), # absolute model path
"transformation": {
"transformation_type": "sigmoid", # see description above
"high": 9, # parameter for sigmoid transformation
"low": 4, # parameter for sigmoid transformation
"k": 0.25 # parameter for sigmoid transformation
},
"scikit": "regression", # model can be "regression" or "classification"
"descriptor_type": "ecfp_counts", # sets the input descriptor for this model
"size": 2048, # parameter of descriptor type
"radius": 3, # parameter of descriptor type
"use_counts": True, # parameter of descriptor type
"use_features": True # parameter of descriptor type
}
},
# add component: QED
{
"component_type": "qed_score", # this is the QED score as implemented in RDKit
"name": "QED", # arbitrary name for the component
"weight": 2 # the weight ("importance") of the component (default: 1)
},
# add component: enforce to NOT match a given substructure
{
"component_type": "custom_alerts",
"name": "Custom alerts", # arbitrary name for the component
"weight": 1, # the weight of the component (default: 1)
"specific_parameters": {
"smiles": [ # specify the substructures (as list) to penalize
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
]
}
}]
}
configuration["parameters"]["scoring_function"] = scoring_function
###Output
_____no_output_____
###Markdown
NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take a considerably higher number of steps 3. Write out the configuration We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place.
###Code
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "RL_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
4. Run `REINVENT`Now it is time to execute `REINVENT` locally. Note that depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. The command-line execution looks like this (with the paths from the variables defined above):
```
# activate environment
conda activate reinvent.v3.0
# execute REINVENT
python {reinvent_dir}/input.py {configuration_JSON_path}
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!{reinvent_env}/bin/python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
# prepare the output to be parsed
list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL)
data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]]
data = ["\n".join(element.splitlines()[:-1]) for element in data]
###Output
_____no_output_____
###Markdown
Below you see the print-out of the first epoch, one from the middle and the last one, respectively. Note that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time.
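Besides the three epochs shown here, you can pull every logged total score out of the captured stream to get a quick feel for the trend without `tensorboard`. The sketch below reuses `captured_err_stream` from the cell above and assumes the run produced at least one logged step.
```
import re

scores = [float(s) for s in re.findall(r"Score:\s+([\d.]+)", captured_err_stream.stdout)]
print(len(scores), "logged steps; first:", scores[0], "last:", scores[-1])
```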
###Code
for element in data:
print(element)
###Output
INFO
Step 0 Fraction valid SMILES: 96.1 Score: 0.2655 Time elapsed: 0 Time left: 0.0
Agent Prior Target Score SMILES
-51.31 -51.31 -51.31 0.00 C(C(CC=C(CCC=C(CCC(=O)O)C=C(CC=CCCC=C(C)C)C)=O)C)=C
-35.59 -35.59 21.37 0.44 c1cc(C(=O)NC(C)c2ccc(OC3CCN(c4ccc(OCC5C(F)(F)C5)cn4)CC3O)cc2)c(OC)nc1
-27.17 -27.17 -27.17 0.00 c1c(Cl)ccc2c(=Nc3c(Cl)cc(OC)cc3)c(C(OCC)=O)c[nH]c12
-32.39 -32.39 -32.39 0.00 C(=O)(OCC)C1(C(C)=NN)CC2c3c(cccc3)C1c1ccccc12
-26.54 -26.54 19.96 0.36 C1(=O)C(Oc2ccc(C(N)=N)cc2)(CC)CCC1O
-22.56 -22.56 30.60 0.42 C(NS(c1ccc(NC(=O)C)cc1)(=O)=O)Cc1cc(C)ccc1
-32.63 -32.63 18.67 0.40 c1(CNC(=O)CN2C(=O)C3N(CCOC)CCC3O2)c2c(ccc1)cccc2
-28.76 -28.76 23.42 0.41 O=C(N(C)CC(Nc1c(C)cc(Br)cc1)=O)C1(CC)C(Cl)(Cl)C1C
-32.71 -32.71 24.86 0.45 O=S(c1ccc(C(=O)N)cc1)(=O)Oc1c(NC(c2ccco2)=O)cc(Cl)cc1
-32.85 -32.85 -32.85 0.00 N=c1[nH]c2nc(-c3ccc(CCNC(CCC(=O)O)=O)cc3)cnc2c(=N)[nH]1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.0 0.0 0.0 0.0
0.46610119938850403 0.38718461990356445 1.0 6.382042407989502
0.37139689922332764 0.644282877445221 0.0 6.042923450469971
0.3336728513240814 0.40394356846809387 0.0 5.899266719818115
0.31423062086105347 0.5612077116966248 1.0 5.822140693664551
0.3267623484134674 0.8531444668769836 1.0 5.872127056121826
0.3209114670753479 0.7809154391288757 1.0 5.848917484283447
0.3407776951789856 0.6974791288375854 1.0 5.926878452301025
0.4110566973686218 0.5894344449043274 1.0 6.187656402587891
0.3114776313304901 0.34777504205703735 0.0 5.8110175132751465
INFO
Step 72 Fraction valid SMILES: 99.2 Score: 0.3254 Time elapsed: 44 Time left: 559.3
Agent Prior Target Score SMILES
-21.50 -21.70 52.27 0.42 c1(C(Nc2ccccc2OC)=O)ccc(NC(=O)C2CC2)cc1
-29.32 -30.07 -30.07 0.00 c1c(S(=O)(N)=O)ccc(Cl)c1C(NNC(c1oc2c(cccc2)c1)=O)=O
-19.69 -20.35 54.20 0.42 Cc1sc2n(n1)cc(-c1cc3c(cc1)OCCO3)n2
-30.97 -30.64 44.41 0.42 O=S(=O)(CC)N1c2c(cc(OC)c(OC)c2)CC1(C)C
-25.06 -25.66 42.32 0.38 Clc1ccc(-c2cccc(-c3c(N)c(O)oc3)c2)cc1
-32.35 -33.76 56.65 0.51 C(c1cc(F)c(F)cc1F)C(NC(C)C)C(N=c1[nH]cc(Cl)s1)=O
-22.32 -23.29 47.97 0.40 Fc1ccc(Oc2ccc(COc3nc(=O)n(C)c(N4CCOCC4)c3)cc2F)cc1
-35.10 -35.53 36.74 0.41 c1c2c(cc(F)c1)-c1c(c3cc([N+](=O)[O-])ccc3n1C)C(CCC)(C)NC2=O
-23.14 -23.47 40.64 0.36 N(C(=O)CCN1CCOCC1)c1ccc2c(c1)c(C)cc(N1CCN(C)CC1)n2
-29.77 -29.73 35.43 0.37 c1(-c2sc(C(=O)CC(C)C)cc2)cn(Cc2ccccc2)nn1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.322376549243927 0.890160858631134 1.0 5.854750156402588
0.3206542730331421 0.5836869478225708 0.0 5.8478922843933105
0.35576075315475464 0.683357834815979 1.0 5.984221458435059
0.33314836025238037 0.8549820780754089 1.0 5.897216796875
0.30833700299263 0.7257052063941956 1.0 5.798262119293213
0.43912962079048157 0.7861695885658264 1.0 6.2874603271484375
0.34998154640197754 0.5994198322296143 1.0 5.962238788604736
0.37158986926078796 0.5297967791557312 1.0 6.043641567230225
0.2724348306655884 0.8323323130607605 1.0 5.6467814445495605
0.3045485317707062 0.6354219317436218 1.0 5.782779216766357
INFO
Step 121 Fraction valid SMILES: 99.2 Score: 0.3721 Time elapsed: 74 Time left: 533.2
Agent Prior Target Score SMILES
-31.75 -30.82 38.81 0.39 Cc1c(C(NCC2(N)CCS(=O)(=O)CC2)=O)cc(Cl)cc1
-18.79 -20.73 37.41 0.33 c12c([n+]([O-])c(-c3cccs3)c(C)[n+]1[O-])cccc2
-17.08 -18.10 61.54 0.45 FC(c1cc(N2C(=O)N(C)C(c3ccc(C#N)cc3)C3=C2CCC3=O)ccc1)(F)F
-20.02 -21.80 53.73 0.42 C1N(C(C)C)CCC(Oc2c(OC)ccc(C(NC3CCCC3)=O)c2)C1
-25.74 -27.65 45.30 0.41 c1n[nH]c(=NS(=O)(=O)c2ccc(Oc3c(-c4ccn[nH]4)cc(F)cc3)c(C#N)c2)s1
-21.43 -23.16 -23.16 0.00 N1N=C(c2ccc(C)cc2)CC1c1c(C)nn(-c2ccccc2)c1Cl
-33.71 -33.20 -33.20 0.00 n1(CC)c2ccccc2c2c3c(c4c5c(cccc5)[nH]c41)cccc3C(=O)C2=O
-19.83 -21.49 49.54 0.40 C1N(C=C2C(=O)c3ccccc3C2=O)CCN(Cc2ccccc2)C1
-28.13 -28.89 43.99 0.41 c1nc2[nH]c(SC)nc(=Nc3ccc(CC)cc3)c2cc1C#N
-22.24 -22.75 51.72 0.42 c1(O)c(C(NC(C)C2CCCCC2)=O)cc(Cl)cc1
Aurora kinase QED Custom alerts raw_Aurora kinase
0.2992677688598633 0.8734317421913147 1.0 5.761015892028809
0.28419438004493713 0.4957776963710785 1.0 5.697640895843506
0.3830990195274353 0.7126726508140564 1.0 6.086191177368164
0.33775582909584045 0.8417723178863525 1.0 5.9151692390441895
0.38679444789886475 0.48748698830604553 1.0 6.099748134613037
0.39176592230796814 0.7507633566856384 0.0 6.1179118156433105
0.0 0.0 0.0 0.0
0.34084388613700867 0.6400846242904663 1.0 5.9271345138549805
0.3619540333747864 0.5925776362419128 1.0 6.007602691650391
0.32573503255844116 0.8865053057670593 1.0 5.868067741394043
###Markdown
5. Analyse the resultsIn order to analyze the run in a more intuitive way, we can use `tensorboard`:
```
# go to the root folder of the output
cd /REINVENT_RL_demo
# make sure you have activated the proper environment
conda activate reinvent.v3.0
# start tensorboard
tensorboard --logdir progress.log
```
Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are example plots - remember that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. The score for predicted Aurora Kinase activity. The average score over time. It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time.And last but not least, there is an "Images" tab available that lets you browse through the compounds generated in an easy way. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format).
###Code
!head -n 15 {output_dir}/results/memory.csv
###Output
,smiles,score,likelihood
27,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC1O,0.8451204,-42.869774
22,C1N(c2ncncc2-c2cn(CC4OCCN(C)CC4)nc2)CCCC1O,0.8451204,-53.072266
19,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CC1,0.8451204,-45.846977
50,C1CN(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)C1,0.8451204,-45.26066
61,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCC1,0.8451204,-45.653194
60,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC(O)C1,0.8451204,-43.792747
55,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCCC1O,0.84456897,-48.738205
112,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCC1,0.84456897,-49.809258
92,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CC(O)CCCC1,0.84456897,-52.195297
107,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCNCC1,0.8443355,-43.1882
51,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCNCCC1,0.8443355,-43.12227
70,N1CCN(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCC1,0.8443355,-44.6098
62,N1(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCNCC1,0.8443355,-46.611633
1,C1N(c2ncncc2-c2cn(CC3OCCN(C)CC3)nc2)CCCC1=N,0.8419696,-52.32989
###Markdown
> **How to run this notebook (command-line)?**1. Install the `ReinventCommunity` environment:`conda env create -f environment.yml`2. Activate the environment:`conda activate ReinventCommunity`3. Execute `jupyter`:`jupyter notebook`4. Copy the link to a browser `REINVENT 3.0`: reinforcement learning exploitation demoThis demo illustrates how to set up a `REINVENT` run to optimize molecules that are active against _Aurora_ kinase. Here we use a predictive model as the main component to guide the generation of the molecules. We also include a `qed_score` component to stimulate the generation of more "drug-like" molecules. 1. Set up the paths_Please update the following code block such that it reflects your system's installation and execute it._
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
2. Setting up the configuration In the cells below we will build a nested dictionary object that will eventually be converted to a JSON file, which in turn will be consumed by `REINVENT`. You can find this file in your `output_dir` location. A) Declare the run type
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "reinforcement_learning" # other run types: "sampling", "validation",
# "transfer_learning",
# "scoring" and "create_model"
}
###Output
_____no_output_____
###Markdown
B) Sort out the logging detailsThis includes the `resultdir` path where the results will be produced.Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. If the `recipient` value differs from `"local"`, `REINVENT` will attempt to POST the data to the specified `recipient`.
###Code
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_frequency": 10, # log every x-th steps
"logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard
"resultdir": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries
"job_name": "Reinforcement learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint
}
###Output
_____no_output_____
###Markdown
Create `"parameters"` field
###Code
# add the "parameters" block
configuration["parameters"] = {}
###Output
_____no_output_____
###Markdown
C) Set Diversity FilterDuring each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored SMILES are written out to the `scaffold_memory.csv` file in the results folder. In this example we are not using any filter, setting it to `"NoFilter"`. This will lead to exploitation of the chemical space in the vicinity of the local optimum for the defined scoring function. The scoring function will likely reach a higher overall score sooner than in the exploration scenario.For exploratory behavior the diversity filter below should be set to any of the listed alternatives `"IdenticalTopologicalScaffold"`, `"IdenticalMurckoScaffold"` or `"ScaffoldSimilarity"`. This will boost the diversity of the generated solutions. The maximum value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse solutions rather than only optimizing the best ones found so far. The number of generated compounds should be higher in comparison to the exploitation scenario.
###Code
# add a "diversity_filter"
configuration["parameters"]["diversity_filter"] = {
"name": "NoFilter", # other options are: "IdenticalTopologicalScaffold",
# "IdenticalMurckoScaffold" and "ScaffoldSimilarity"
# -> use "NoFilter" to disable this feature
"nbmax": 25, # the bin size; penalization will start once this is exceeded
"minscore": 0.4, # the minimum total score to be considered for binning
"minsimilarity": 0.4 # the minimum similarity to be placed into the same bin
}
###Output
_____no_output_____
###Markdown
D) Set Inception* `smiles`: provide a list of SMILES to be incepted * `memory_size`: the number of SMILES allowed in the inception memory* `sample_size`: the number of SMILES that can be sampled from the inception memory at each reinforcement learning step
###Code
# prepare the inception (we do not use it in this example, so "smiles" is an empty list)
configuration["parameters"]["inception"] = {
"smiles": [], # fill in a list of SMILES here that can be used (or leave empty)
"memory_size": 100, # sets how many molecules are to be remembered
"sample_size": 10 # how many are to be sampled each epoch from the memory
}
###Output
_____no_output_____
###Markdown
E) Set the general Reinforcement Learning parameters* `n_steps` is the number of Reinforcement Learning steps to perform. Best start with 1000 steps and see if that's enough.* `agent` is the generative model that undergoes transformation during the Reinforcement Learning run.We recommend keeping the other parameters at their default values.
###Code
# set all "reinforcement learning"-specific run parameters
configuration["parameters"]["reinforcement_learning"] = {
"prior": os.path.join(ipynb_path, "models/augmented.prior"), # path to the pre-trained model
"agent": os.path.join(ipynb_path, "models/augmented.prior"), # path to the pre-trained model
"n_steps": 1000, # the number of epochs (steps) to be performed; often 1000
"sigma": 128, # used to calculate the "augmented likelihood", see publication
"learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch
"batch_size": 128, # specifies how many molecules are generated per epoch
"reset": 0, # if not '0', the reset the agent if threshold reached to get
# more diverse solutions
"reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold
"margin_threshold": 50 # specify the (positive) margin between agent and prior
}
###Output
_____no_output_____
###Markdown
F) Define the scoring functionWe will use a `custom_product` type. The component types included are:* `predictive_property` which is the target activity against _Aurora_ kinase represented by the predictive `regression` model (an illustrative sketch of the sigmoid-style score transformation follows below). Note that we set the weight of this component to be the highest.* `qed_score` is the implementation of QED in RDKit. It biases the generation of molecules towards more "drug-like" space. Depending on the use case this can have a beneficial or detrimental effect.* `custom_alerts` the `"smiles"` field can hold either SMILES or SMARTS patternsNote: The model used in this example is a regression model
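For intuition on the `low`/`high`/`k` values: a sigmoid-style transformation squashes the raw model prediction (a pActivity-like value here) into [0, 1], centred between `low` and `high`. The snippet below is a generic logistic curve for illustration only - it is not claimed to be REINVENT's exact transformation formula, and the steepness scaling is an arbitrary choice.
```
import math

def sigmoid_like(x, low=4.0, high=9.0, k=0.25):
    midpoint = (low + high) / 2.0
    steepness = 10.0 * k / (high - low)      # arbitrary scaling, for illustration only
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

for value in (4.0, 6.5, 9.0):
    print(value, "->", round(sigmoid_like(value), 3))
```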
###Code
# prepare the scoring function definition and add at the end
scoring_function = {
"name": "custom_product", # this is our default one (alternative: "custom_sum")
"parallel": False, # sets whether components are to be executed
# in parallel; note, that python uses "False" / "True"
# but the JSON "false" / "true"
# the "parameters" list holds the individual components
"parameters": [
# add component: an activity model
{
"component_type": "predictive_property", # this is a scikit-learn model, returning
# activity values
"name": "Aurora kinase", # arbitrary name for the component
"weight": 6, # the weight ("importance") of the component (default: 1)
"model_path": os.path.join(ipynb_path, "models/Aurora_model.pkl"), # absolute model path
"smiles": [], # list of SMILES (not required for this component)
"specific_parameters": {
"transformation_type": "sigmoid", # see description above
"high": 9, # parameter for sigmoid transformation
"low": 4, # parameter for sigmoid transformation
"k": 0.25, # parameter for sigmoid transformation
"scikit": "regression", # model can be "regression" or "classification"
"transformation": True, # enable the transformation
"descriptor_type": "ecfp_counts", # sets the input descriptor for this model
"size": 2048, # parameter of descriptor type
"radius": 3, # parameter of descriptor type
"use_counts": True, # parameter of descriptor type
"use_features": True # parameter of descriptor type
}
},
# add component: QED
{
"component_type": "qed_score", # this is the QED score as implemented in RDKit
"name": "QED", # arbitrary name for the component
"weight": 2, # the weight ("importance") of the component (default: 1)
"model_path": None,
"smiles": None
},
# add component: enforce to NOT match a given substructure
{
"component_type": "custom_alerts",
"name": "Custom alerts", # arbitrary name for the component
"weight": 1, # the weight of the component (default: 1)
"model_path": None, # not required; note, this is "null" in JSON
"smiles": [ # specify the substructures (as list) to penalize
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
],
"specific_parameters": None # not required; note, this is "null" in JSON
}]
}
configuration["parameters"]["scoring_function"] = scoring_function
###Output
_____no_output_____
###Markdown
NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take a considerably higher number of steps 3. Write out the configuration We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place.
###Code
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "RL_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
4. Run `REINVENT`Now it is time to execute `REINVENT` locally. Note that depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. The command-line execution looks like this (with the paths from the variables defined above):
```
# activate environment
conda activate reinvent.v3.0
# execute REINVENT
python {reinvent_dir}/input.py {configuration_JSON_path}
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
# prepare the output to be parsed
list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL)
data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]]
data = ["\n".join(element.splitlines()[:-1]) for element in data]
###Output
_____no_output_____
###Markdown
Below you see the print-out of the first epoch, one from the middle and the last one, respectively. Note that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time.
###Code
for element in data:
print(element)
###Output
INFO
Step 0 Fraction valid SMILES: 99.2 Score: 0.2306 Time elapsed: 0 Time left: 0.0
Agent Prior Target Score SMILES
-19.51 -19.51 21.92 0.32 n1cnc(N2CCN(C)CC2)c2c(-c3ccccc3)c(-c3ccccc3)oc12
-53.61 -53.61 -53.61 0.00 c1c(-c2ccccc2)c(C)cc(CCC2COC3C(NC(C(C)NC)=O)(OC)COCC3(O)C2)c1
-32.90 -32.90 15.19 0.38 c1cc(C(Nc2cc(N=c3[nH]ccc(-c4ccnc(-c5cccnc5)c4)n3)ccc2)=O)ccc1NC(=O)C=C
-18.69 -18.69 28.08 0.37 OC1C(O)C(n2c3c(nc2)c(=Nc2ccccc2)[nH]cn3)OC1CO
-24.39 -24.39 16.43 0.32 O=C(c1c(C(=O)NCCC)nc[nH]1)N=c1c(C)ccc[nH]1
-34.69 -34.69 10.84 0.36 C1CC(CNC(=O)C(N)Cc2ccc(OC)cc2)CCN1C(=O)Cn1cccn1
-21.42 -21.42 -21.42 0.00 c1(=Cc2[nH]c(=O)[nH]c2O)c2nc(Nc3cccc(C#C)c3)cc(=NC3CC3)n2nc1
-23.29 -23.29 13.14 0.28 n1(-c2ccc([N+](=O)[O-])cc2)c(O)c(C2=c3ccccc3=NC2=O)c2ccccc12
-28.67 -28.67 -28.67 0.00 c1cccc2c1CC1(C2=O)OC(c2ccccc2)(c2ccccc2)OO1
-59.00 -59.00 -59.00 0.00 O(C1(OC(C)C(NC(C(NC(COC(CCC(=C)C2CCC(C=C)(C)C(C(=O)C)C2)=O)=O)=O)C)=O)CCCCC1)C
Aurora kinase Custom alerts
0.32369956374168396 1.0
0.3522017002105713 0.0
0.37566933035850525 1.0
0.3654446601867676 1.0
0.31891217827796936 1.0
0.355679452419281 1.0
0.39393407106399536 0.0
0.2846657335758209 1.0
0.2996329665184021 0.0
0.3963213264942169 0.0
INFO
Step 74 Fraction valid SMILES: 97.7 Score: 0.3038 Time elapsed: 26 Time left: 321.0
Agent Prior Target Score SMILES
-31.52 -32.78 -32.78 0.00 n1(C)ncc(-c2ccc3ncc(NCc4cccc(SC)c4)c3c2)c1
-19.84 -18.95 22.67 0.33 COc1c(OC)cc(C(N(CC)CC)=O)cc1OC
-17.59 -18.34 20.06 0.30 O(c1cc(C)c(N)c(C)c1)C
-46.62 -46.57 3.88 0.39 O1CCC2(CC1)C1(CCN(C(c3c4ncncc4ccc3)CF)CC1)NC(=O)C2
-30.05 -31.16 9.96 0.32 c1ccc(Nc2ccc(CN3CCN(C)CC3)cc2)c(-c2sc3c(n2)cncc3)c1
-35.30 -36.26 27.78 0.50 c1c(NC(=O)N=c2cc(C)[nH]c(N3CCN(c4cccc(OC)c4)CC3)n2)cc(Cl)c(Cl)c1
-31.53 -29.53 13.16 0.33 c1ccccc1CN(C(C)=O)CCC(Nc1ccc(OCC)cc1)=O
-37.70 -38.49 0.82 0.31 C(CN)CCC1C(=O)N(CCC2CCCC2)Cc2c(ccc(OCC(=NO)O)c2)C1
-31.83 -32.79 25.84 0.46 c1(-c2cc(C(=O)NCC3CCO3)c3cnn(C)c3n2)ccc(F)c(F)c1
-27.14 -25.95 -25.95 0.00 C(N1CCC(NC(NC(=O)c2c([N+]([O-])=O)cccc2)=S)CC1)c1ccccc1
Aurora kinase Custom alerts
0.0 0.0
0.32515692710876465 1.0
0.30000120401382446 1.0
0.3941750228404999 1.0
0.321297287940979 1.0
0.5003294944763184 1.0
0.3334886431694031 1.0
0.3071417510509491 1.0
0.45798826217651367 1.0
0.28031840920448303 0.0
INFO
Step 123 Fraction valid SMILES: 99.2 Score: 0.3181 Time elapsed: 44 Time left: 311.2
Agent Prior Target Score SMILES
-27.24 -25.90 -25.90 0.00 C(O)(Nc1cc(C)ccc1)c1cn(-c2ccc(OC(F)F)cc2)c(O)c1
-22.40 -22.08 21.31 0.34 c1c2c(ccc1Cl)C(=O)N(C)Cc1n-2cnc1Br
-38.70 -36.53 -36.53 0.00 C1C(O)C(O)C2OC(=O)C(=C)C2CN1S(=O)(=O)CCC
-29.12 -30.39 12.22 0.33 C1N(Cc2ccc(OCCCCN3c4ccccc4CCc4c3cc(Cl)cc4)cc2)CCCC1
-35.30 -37.37 16.22 0.42 c1c(C(C)C)c(N2CCN(C(=O)CCc3c(Cl)onc3C)CC2)n2cc(C(NCC)=O)ccc12
-39.06 -40.07 8.68 0.38 S(c1c2ccc(-c3n[nH]nn3)cc2c(OC)cn1)c1cc2c(c(Cl)c1)OCCO2
-30.11 -28.48 -28.48 0.00 N(CC)(CCOC(=O)c1ccc(Cl)c(Cl)c1)CC=C
-23.56 -24.55 14.09 0.30 c1(S(=O)(N(CC)CC)=O)ccc(C(=O)Nc2ccc(S(C)(=O)=O)cc2)cc1
-28.55 -28.66 15.51 0.35 c1(COc2ccc(CN3CCS(=O)(=O)CC3)cc2)c2ccccc2n[nH]1
-30.33 -32.81 18.71 0.40 C1CC(CC(=O)Nc2cc3c(nc[nH]c3=Nc3ccc(OCc4ccccc4)c(Cl)c3)cc2OC)CCN1Cc1cc(Cl)c(Cl)cc1
Aurora kinase Custom alerts
0.42681893706321716 0.0
0.3390008211135864 1.0
0.3794780969619751 0.0
0.3328739106655121 1.0
0.4186704456806183 1.0
0.380911260843277 1.0
0.43136125802993774 0.0
0.30190280079841614 1.0
0.34505319595336914 1.0
0.40252068638801575 1.0
###Markdown
5. Analyse the resultsIn order to analyze the run in a more intuitive way, we can use `tensorboard`:
```
# go to the root folder of the output
cd /REINVENT_RL_demo
# make sure you have activated the proper environment
conda activate reinvent.v3.0
# start tensorboard
tensorboard --logdir progress.log
```
Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are example plots - remember that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. The score for predicted Aurora Kinase activity. The average score over time. It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time.And last but not least, there is an "Images" tab available that lets you browse through the compounds generated in an easy way. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format).
###Code
!head -n 15 {output_dir}/results/memory.csv
###Output
,smiles,score,likelihood
65,C(CCCn1cc(C(C)(C)C)c2c(C(C)C)cc(C(C)C)cc2c1=O)C(=O)N=c1nc[nH][nH]1,0.3286117,-50.641468
70,C1C(N(CCC)CCC)Cc2cccc3[nH]c(=O)n(c32)C1,0.32649106,-18.146914
26,O1c2c(nc(OC)cc2)C(C(NCCCN(C)C)=O)(Cc2ccccc2)c2ccccc21,0.32437962,-35.405247
60,c1c(C(CNCCc2ccc(NS(=O)(c3ccc(-c4oc(Cc5c[nH]c(=N)s5)cc4)nc3)=O)cc2)O)c[nH]c(=N)c1,0.32314676,-38.32259
99,c1c2c(cc(Cl)c1Cl)C(CC(=O)c1cnn(C)c1)(O)C(=O)N2,0.31027606,-27.762121
11,c1c(O)c(C(Cc2ccc(Cl)cc2)=O)cc(O)c1Oc2c(O)cc(O)cc2CCC(O)c1cc(O)c(OC)cc1,0.30576745,-52.903526
32,c1(C(NC(Cc2ccccc2)C(C(NCCN2CCOCC2)=O)=O)=O)cc(C(=O)NS(Cc2ccccc2)(=O)=O)c(NCCC)s1,0.30178678,-43.933296
1,c1c(C(C)C)ccc(NC(c2cc3c(cc2)[nH]c2c(C(N)=O)ccc(O)c23)=O)c1,0.30052438,-31.108843
108,c1(C(C(F)(F)F)(F)F)cc(Cn2c3cccc(NC(c4n5ccc(OCCN6CCN(C)CC6)cc5nc4)=O)c3c(CC)n2)ccc1,0.29700187,-34.311478
118,c1ccc(C(COc2ccc3c(occ(Oc4ccccc4)c3=O)c2)(O)C(N2CCCCC2)C)cc1F,0.29602197,-45.389744
109,C1CN(CC(CNC(c2ccc3n(c(=O)cc(C)n3)c2)=O)O)CCC1Cc1ccccc1,0.29525602,-29.0487
19,c1cc(CC2C(OCC3CC3)CCN(c3ncncc3)C2)ccc1OC,0.29047668,-26.524956
109,c1cc(CN2Cc3c(cccc3)CC2)ccc1Cc1n[nH]c(=O)c2c1CCCC2,0.2882794,-24.313461
0,c1c2c(ccc1OCCN(C)CCc1ccc(NS(=O)(C)=O)cc1)CCC2,0.28584373,-26.92916
###Markdown
> **How to run this notebook (command-line)?**1. Install the `ReinventCommunity` environment:`conda env create -f environment.yml`2. Activate the environment:`conda activate ReinventCommunity`3. Execute `jupyter`:`jupyter notebook`4. Copy the link to a browser `REINVENT 2.0`: reinforcement learning exploitation demoThis demo illustrates how to set up a `REINVENT` run to optimize molecules that are active against _Aurora_ kinase. Here we use a predictive model as the main component to guide the generation of the molecules. We also include a `qed_score` component to stimulate the generation of more "drug-like" molecules. 1. Set up the paths_Please update the following code block such that it reflects your system's installation and execute it._
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
2. Setting up the configuration In the cells below we will build a nested dictionary object that will eventually be converted to a JSON file, which in turn will be consumed by `REINVENT`. You can find this file in your `output_dir` location. A) Declare the run type
###Code
# initialize the dictionary
configuration = {
"version": 2, # we are going to use REINVENT's newest release
"run_type": "reinforcement_learning" # other run types: "sampling", "validation",
# "transfer_learning",
# "scoring" and "create_model"
}
###Output
_____no_output_____
###Markdown
B) Sort out the logging detailsThis includes the `resultdir` path where the results will be produced.Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. If the `recipient` value differs from `"local"`, `REINVENT` will attempt to POST the data to the specified `recipient`.
###Code
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_frequency": 10, # log every x-th steps
"logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard
"resultdir": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries
"job_name": "Reinforcement learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint
}
###Output
_____no_output_____
###Markdown
Create `"parameters"` field
###Code
# add the "parameters" block
configuration["parameters"] = {}
###Output
_____no_output_____
###Markdown
C) Set Diversity FilterDuring each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored SMILES are written out to the `scaffold_memory.csv` file in the results folder. In this example we are not using any filter, setting it to `"NoFilter"`. This will lead to exploitation of the chemical space in the vicinity of the local optimum for the defined scoring function. The scoring function will likely reach a higher overall score sooner than in the exploration scenario.For exploratory behavior the diversity filter below should be set to any of the listed alternatives `"IdenticalTopologicalScaffold"`, `"IdenticalMurckoScaffold"` or `"ScaffoldSimilarity"`. This will boost the diversity of the generated solutions. The maximum value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse solutions rather than only optimizing the best ones found so far. The number of generated compounds should be higher in comparison to the exploitation scenario.
###Code
# add a "diversity_filter"
configuration["parameters"]["diversity_filter"] = {
"name": "NoFilter", # other options are: "IdenticalTopologicalScaffold",
# "IdenticalMurckoScaffold" and "ScaffoldSimilarity"
# -> use "NoFilter" to disable this feature
"nbmax": 25, # the bin size; penalization will start once this is exceeded
"minscore": 0.4, # the minimum total score to be considered for binning
"minsimilarity": 0.4 # the minimum similarity to be placed into the same bin
}
###Output
_____no_output_____
###Markdown
D) Set Inception* `smiles`: provide a list of SMILES to be incepted * `memory_size`: the number of SMILES allowed in the inception memory* `sample_size`: the number of SMILES that can be sampled from the inception memory at each reinforcement learning step
###Code
# prepare the inception (we do not use it in this example, so "smiles" is an empty list)
configuration["parameters"]["inception"] = {
"smiles": [], # fill in a list of SMILES here that can be used (or leave empty)
"memory_size": 100, # sets how many molecules are to be remembered
"sample_size": 10 # how many are to be sampled each epoch from the memory
}
###Output
_____no_output_____
###Markdown
E) Set the general Reinforcement Learning parameters* `n_steps` is the number of Reinforcement Learning steps to perform. Best start with 1000 steps and see if that's enough.* `agent` is the generative model that undergoes transformation during the Reinforcement Learning run.We recommend keeping the other parameters at their default values.
###Code
# set all "reinforcement learning"-specific run parameters
configuration["parameters"]["reinforcement_learning"] = {
"prior": os.path.join(reinvent_dir, "data/augmented.prior"), # path to the pre-trained model
"agent": os.path.join(reinvent_dir, "data/augmented.prior"), # path to the pre-trained model
"n_steps": 1000, # the number of epochs (steps) to be performed; often 1000
"sigma": 128, # used to calculate the "augmented likelihood", see publication
"learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch
"batch_size": 128, # specifies how many molecules are generated per epoch
"reset": 0, # if not '0', the reset the agent if threshold reached to get
# more diverse solutions
"reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold
"margin_threshold": 50 # specify the (positive) margin between agent and prior
}
###Output
_____no_output_____
###Markdown
F) Define the scoring functionWe will use a `custom_product` type. The component types included are:* `predictive_property` which is the target activity to _Aurora_ kinase represented by the predictive `regression` model. Note that we set the weight of this component to be the highest.* `qed_score` is the implementation of QED in RDKit. It biases the egenration of molecules towars more "drug-like" space. Depending on the study case can have beneficial or detrimental effect.* `custom_alerts` the `"smiles"` field also can work with SMILES or SMARTSNote: The model used in this example is a regression model
###Code
# prepare the scoring function definition and add at the end
scoring_function = {
"name": "custom_product", # this is our default one (alternative: "custom_sum")
"parallel": False, # sets whether components are to be executed
# in parallel; note, that python uses "False" / "True"
# but the JSON "false" / "true"
# the "parameters" list holds the individual components
"parameters": [
# add component: an activity model
{
"component_type": "predictive_property", # this is a scikit-learn model, returning
# activity values
"name": "Aurora kinase", # arbitrary name for the component
"weight": 6, # the weight ("importance") of the component (default: 1)
"model_path": os.path.join(reinvent_dir, "data/Aurora_model.pkl"), # absolute model path
"smiles": [], # list of SMILES (not required for this component)
"specific_parameters": {
"transformation_type": "sigmoid", # see description above
"high": 9, # parameter for sigmoid transformation
"low": 4, # parameter for sigmoid transformation
"k": 0.25, # parameter for sigmoid transformation
"scikit": "regression", # model can be "regression" or "classification"
"transformation": True, # enable the transformation
"descriptor_type": "ecfp_counts", # sets the input descriptor for this model
"size": 2048, # parameter of descriptor type
"radius": 3, # parameter of descriptor type
"use_counts": True, # parameter of descriptor type
"use_features": True # parameter of descriptor type
}
},
# add component: QED
{
"component_type": "qed_score", # this is the QED score as implemented in RDKit
"name": "QED", # arbitrary name for the component
"weight": 2, # the weight ("importance") of the component (default: 1)
"model_path": None,
"smiles": None
},
# add component: enforce to NOT match a given substructure
{
"component_type": "custom_alerts",
"name": "Custom alerts", # arbitrary name for the component
"weight": 1, # the weight of the component (default: 1)
"model_path": None, # not required; note, this is "null" in JSON
"smiles": [ # specify the substructures (as list) to penalize
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
],
"specific_parameters": None # not required; note, this is "null" in JSON
}]
}
configuration["parameters"]["scoring_function"] = scoring_function
###Output
_____no_output_____
###Markdown
NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take considerably higher number of steps 3. Write out the configuration We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place.
###Code
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "RL_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
4. Run `REINVENT`Now it is time to execute `REINVENT` locally. Note, that depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. The command-line execution looks like this:``` activate envionmentconda activate reinvent_shared.v2.1 execute REINVENTpython /input.py .json```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
# prepare the output to be parsed
list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL)
data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]]
data = ["\n".join(element.splitlines()[:-1]) for element in data]
###Output
_____no_output_____
###Markdown
Below you see the print-out of the first, one from the middle and the last epoch, respectively. Note, that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time.
###Code
for element in data:
print(element)
###Output
INFO
Step 0 Fraction valid SMILES: 99.2 Score: 0.2306 Time elapsed: 0 Time left: 0.0
Agent Prior Target Score SMILES
-19.51 -19.51 21.92 0.32 n1cnc(N2CCN(C)CC2)c2c(-c3ccccc3)c(-c3ccccc3)oc12
-53.61 -53.61 -53.61 0.00 c1c(-c2ccccc2)c(C)cc(CCC2COC3C(NC(C(C)NC)=O)(OC)COCC3(O)C2)c1
-32.90 -32.90 15.19 0.38 c1cc(C(Nc2cc(N=c3[nH]ccc(-c4ccnc(-c5cccnc5)c4)n3)ccc2)=O)ccc1NC(=O)C=C
-18.69 -18.69 28.08 0.37 OC1C(O)C(n2c3c(nc2)c(=Nc2ccccc2)[nH]cn3)OC1CO
-24.39 -24.39 16.43 0.32 O=C(c1c(C(=O)NCCC)nc[nH]1)N=c1c(C)ccc[nH]1
-34.69 -34.69 10.84 0.36 C1CC(CNC(=O)C(N)Cc2ccc(OC)cc2)CCN1C(=O)Cn1cccn1
-21.42 -21.42 -21.42 0.00 c1(=Cc2[nH]c(=O)[nH]c2O)c2nc(Nc3cccc(C#C)c3)cc(=NC3CC3)n2nc1
-23.29 -23.29 13.14 0.28 n1(-c2ccc([N+](=O)[O-])cc2)c(O)c(C2=c3ccccc3=NC2=O)c2ccccc12
-28.67 -28.67 -28.67 0.00 c1cccc2c1CC1(C2=O)OC(c2ccccc2)(c2ccccc2)OO1
-59.00 -59.00 -59.00 0.00 O(C1(OC(C)C(NC(C(NC(COC(CCC(=C)C2CCC(C=C)(C)C(C(=O)C)C2)=O)=O)=O)C)=O)CCCCC1)C
Aurora kinase Custom alerts
0.32369956374168396 1.0
0.3522017002105713 0.0
0.37566933035850525 1.0
0.3654446601867676 1.0
0.31891217827796936 1.0
0.355679452419281 1.0
0.39393407106399536 0.0
0.2846657335758209 1.0
0.2996329665184021 0.0
0.3963213264942169 0.0
INFO
Step 74 Fraction valid SMILES: 97.7 Score: 0.3038 Time elapsed: 26 Time left: 321.0
Agent Prior Target Score SMILES
-31.52 -32.78 -32.78 0.00 n1(C)ncc(-c2ccc3ncc(NCc4cccc(SC)c4)c3c2)c1
-19.84 -18.95 22.67 0.33 COc1c(OC)cc(C(N(CC)CC)=O)cc1OC
-17.59 -18.34 20.06 0.30 O(c1cc(C)c(N)c(C)c1)C
-46.62 -46.57 3.88 0.39 O1CCC2(CC1)C1(CCN(C(c3c4ncncc4ccc3)CF)CC1)NC(=O)C2
-30.05 -31.16 9.96 0.32 c1ccc(Nc2ccc(CN3CCN(C)CC3)cc2)c(-c2sc3c(n2)cncc3)c1
-35.30 -36.26 27.78 0.50 c1c(NC(=O)N=c2cc(C)[nH]c(N3CCN(c4cccc(OC)c4)CC3)n2)cc(Cl)c(Cl)c1
-31.53 -29.53 13.16 0.33 c1ccccc1CN(C(C)=O)CCC(Nc1ccc(OCC)cc1)=O
-37.70 -38.49 0.82 0.31 C(CN)CCC1C(=O)N(CCC2CCCC2)Cc2c(ccc(OCC(=NO)O)c2)C1
-31.83 -32.79 25.84 0.46 c1(-c2cc(C(=O)NCC3CCO3)c3cnn(C)c3n2)ccc(F)c(F)c1
-27.14 -25.95 -25.95 0.00 C(N1CCC(NC(NC(=O)c2c([N+]([O-])=O)cccc2)=S)CC1)c1ccccc1
Aurora kinase Custom alerts
0.0 0.0
0.32515692710876465 1.0
0.30000120401382446 1.0
0.3941750228404999 1.0
0.321297287940979 1.0
0.5003294944763184 1.0
0.3334886431694031 1.0
0.3071417510509491 1.0
0.45798826217651367 1.0
0.28031840920448303 0.0
INFO
Step 123 Fraction valid SMILES: 99.2 Score: 0.3181 Time elapsed: 44 Time left: 311.2
Agent Prior Target Score SMILES
-27.24 -25.90 -25.90 0.00 C(O)(Nc1cc(C)ccc1)c1cn(-c2ccc(OC(F)F)cc2)c(O)c1
-22.40 -22.08 21.31 0.34 c1c2c(ccc1Cl)C(=O)N(C)Cc1n-2cnc1Br
-38.70 -36.53 -36.53 0.00 C1C(O)C(O)C2OC(=O)C(=C)C2CN1S(=O)(=O)CCC
-29.12 -30.39 12.22 0.33 C1N(Cc2ccc(OCCCCN3c4ccccc4CCc4c3cc(Cl)cc4)cc2)CCCC1
-35.30 -37.37 16.22 0.42 c1c(C(C)C)c(N2CCN(C(=O)CCc3c(Cl)onc3C)CC2)n2cc(C(NCC)=O)ccc12
-39.06 -40.07 8.68 0.38 S(c1c2ccc(-c3n[nH]nn3)cc2c(OC)cn1)c1cc2c(c(Cl)c1)OCCO2
-30.11 -28.48 -28.48 0.00 N(CC)(CCOC(=O)c1ccc(Cl)c(Cl)c1)CC=C
-23.56 -24.55 14.09 0.30 c1(S(=O)(N(CC)CC)=O)ccc(C(=O)Nc2ccc(S(C)(=O)=O)cc2)cc1
-28.55 -28.66 15.51 0.35 c1(COc2ccc(CN3CCS(=O)(=O)CC3)cc2)c2ccccc2n[nH]1
-30.33 -32.81 18.71 0.40 C1CC(CC(=O)Nc2cc3c(nc[nH]c3=Nc3ccc(OCc4ccccc4)c(Cl)c3)cc2OC)CCN1Cc1cc(Cl)c(Cl)cc1
Aurora kinase Custom alerts
0.42681893706321716 0.0
0.3390008211135864 1.0
0.3794780969619751 0.0
0.3328739106655121 1.0
0.4186704456806183 1.0
0.380911260843277 1.0
0.43136125802993774 0.0
0.30190280079841614 1.0
0.34505319595336914 1.0
0.40252068638801575 1.0
###Markdown
5. Analyse the resultsIn order to analyze the run in a more intuitive way, we can use `tensorboard`:``` go to the root folder of the outputcd /REINVENT_RL_demo make sure, you have activated the proper environmentconda activate reinvent_shared.v2.1 start tensorboardtensorboard --logdir progress.log```Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are exmaple plots - remember, that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. The score for predicted Aurora Kinase activity.The average score over time.It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time.And last but not least, there is a "Images" tab available that lets you browse through the compounds generated in an easy way. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format).
###Code
!head -n 15 {output_dir}/results/memory.csv
###Output
,smiles,score,likelihood
65,C(CCCn1cc(C(C)(C)C)c2c(C(C)C)cc(C(C)C)cc2c1=O)C(=O)N=c1nc[nH][nH]1,0.3286117,-50.641468
70,C1C(N(CCC)CCC)Cc2cccc3[nH]c(=O)n(c32)C1,0.32649106,-18.146914
26,O1c2c(nc(OC)cc2)C(C(NCCCN(C)C)=O)(Cc2ccccc2)c2ccccc21,0.32437962,-35.405247
60,c1c(C(CNCCc2ccc(NS(=O)(c3ccc(-c4oc(Cc5c[nH]c(=N)s5)cc4)nc3)=O)cc2)O)c[nH]c(=N)c1,0.32314676,-38.32259
99,c1c2c(cc(Cl)c1Cl)C(CC(=O)c1cnn(C)c1)(O)C(=O)N2,0.31027606,-27.762121
11,c1c(O)c(C(Cc2ccc(Cl)cc2)=O)cc(O)c1Oc2c(O)cc(O)cc2CCC(O)c1cc(O)c(OC)cc1,0.30576745,-52.903526
32,c1(C(NC(Cc2ccccc2)C(C(NCCN2CCOCC2)=O)=O)=O)cc(C(=O)NS(Cc2ccccc2)(=O)=O)c(NCCC)s1,0.30178678,-43.933296
1,c1c(C(C)C)ccc(NC(c2cc3c(cc2)[nH]c2c(C(N)=O)ccc(O)c23)=O)c1,0.30052438,-31.108843
108,c1(C(C(F)(F)F)(F)F)cc(Cn2c3cccc(NC(c4n5ccc(OCCN6CCN(C)CC6)cc5nc4)=O)c3c(CC)n2)ccc1,0.29700187,-34.311478
118,c1ccc(C(COc2ccc3c(occ(Oc4ccccc4)c3=O)c2)(O)C(N2CCCCC2)C)cc1F,0.29602197,-45.389744
109,C1CN(CC(CNC(c2ccc3n(c(=O)cc(C)n3)c2)=O)O)CCC1Cc1ccccc1,0.29525602,-29.0487
19,c1cc(CC2C(OCC3CC3)CCN(c3ncncc3)C2)ccc1OC,0.29047668,-26.524956
109,c1cc(CN2Cc3c(cccc3)CC2)ccc1Cc1n[nH]c(=O)c2c1CCCC2,0.2882794,-24.313461
0,c1c2c(ccc1OCCN(C)CCc1ccc(NS(=O)(C)=O)cc1)CCC2,0.28584373,-26.92916
###Markdown
> **How to run this notebook (command-line)?**1. Install the `reinvent_shared.v2.1` environment:`conda env create -f reinvent_shared.yml`2. Activate the environment:`conda activate reinvent_shared.v2.1`3. Execute `jupyter`:`jupyter notebook`4. Copy the link to a browser `REINVENT 2.0`: reinforcement learning exploitation demoThis demo illustrates how to set up a `REINVENT` run to optimize molecules that are active against _Aurora_ kinase. Here we use a predictive model as the main component to guide the generation of the molecules. We also include a `qed_score` component to encourage the generation of more "drug-like" molecules. 1. Set up the paths_Please update the following code block such that it reflects your system's installation and execute it._
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_RL_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
2. Setting up the configuration In the cells below we will build a nested dictionary object that will eventually be converted to a `JSON` file, which in turn will be consumed by `REINVENT`. You can find this file in your `output_dir` location. A) Declare the run type
###Code
# initialize the dictionary
configuration = {
"version": 2, # we are going to use REINVENT's newest release
"run_type": "reinforcement_learning" # other run types: "sampling", "validation",
# "transfer_learning",
# "scoring" and "create_model"
}
###Output
_____no_output_____
###Markdown
B) Sort out the logging detailsThis includes the `resultdir` path where the results will be produced.Also: `REINVENT` can send custom log messages to a remote location. We have retained this capability in the code. If the `recipient` value differs from `"local"`, `REINVENT` will attempt to POST the data to the specified `recipient`.
###Code
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://0.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_frequency": 10, # log every x-th steps
"logging_path": os.path.join(output_dir, "progress.log"), # load this folder in tensorboard
"resultdir": os.path.join(output_dir, "results"), # will hold the compounds (SMILES) and summaries
"job_name": "Reinforcement learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to a specific REST endpoint
}
###Output
_____no_output_____
###Markdown
Create `"parameters"` field
###Code
# add the "parameters" block
configuration["parameters"] = {}
###Output
_____no_output_____
###Markdown
C) Set Diversity FilterDuring each step of Reinforcement Learning the compounds scored above the `minscore` threshold are kept in memory. The scored SMILES are written out to a file in the results folder, `scaffold_memory.csv`. In this example we are not using any filter, which is done by setting it to `"NoFilter"`. This will lead to exploitation of the chemical space in the vicinity of the local optimum for the defined scoring function. The scoring function will likely reach a higher overall score sooner than in the exploration scenario.For exploratory behavior the diversity filter below should be set to any of the listed alternatives `"IdenticalTopologicalScaffold"`, `"IdenticalMurckoScaffold"` or `"ScaffoldSimilarity"`. This will boost the diversity of generated solutions. The maximum value of the scoring function will be lower in exploration mode because the Agent is encouraged to search for diverse solutions rather than only optimizing the best solutions found so far. The number of distinct generated compounds should be higher in comparison to the exploitation scenario.
###Code
# add a "diversity_filter"
configuration["parameters"]["diversity_filter"] = {
"name": "NoFilter", # other options are: "IdenticalTopologicalScaffold",
# "IdenticalMurckoScaffold" and "ScaffoldSimilarity"
# -> use "NoFilter" to disable this feature
"nbmax": 25, # the bin size; penalization will start once this is exceeded
"minscore": 0.4, # the minimum total score to be considered for binning
"minsimilarity": 0.4 # the minimum similarity to be placed into the same bin
}
###Output
_____no_output_____
###Markdown
D) Set Inception* `smiles`: provide here a list of SMILES to be incepted* `memory_size`: the number of SMILES allowed in the inception memory* `sample_size`: the number of SMILES that can be sampled from the inception memory at each reinforcement learning step
###Code
# prepare the inception (we do not use it in this example, so "smiles" is an empty list)
configuration["parameters"]["inception"] = {
"smiles": [], # fill in a list of SMILES here that can be used (or leave empty)
"memory_size": 100, # sets how many molecules are to be remembered
"sample_size": 10 # how many are to be sampled each epoch from the memory
}
###Output
_____no_output_____
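###Markdown
Purely for illustration (this demo deliberately keeps the list empty): if you had known actives that you wanted the Agent to "remember", you could seed the inception memory with their SMILES. The structures below are arbitrary placeholders, not curated Aurora kinase actives, and the assignment is left commented out.
###Code
# OPTIONAL / illustrative only: seeding the inception memory with a few SMILES
# (the structures below are arbitrary placeholders, not real reference actives)
example_inception_smiles = [
    "O=C(Nc1ccc(F)cc1)c1ccncc1",   # placeholder aryl amide
    "Cc1ccc(Nc2ncccn2)cc1"         # placeholder aminopyrimidine
]
# configuration["parameters"]["inception"]["smiles"] = example_inception_smiles
###Output
_____no_output_____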
###Markdown
E) Set the general Reinforcement Learning parameters* `n_steps` is the number of Reinforcement Learning steps to perform. It is best to start with 1000 steps and see if that is enough.* `agent` is the generative model that undergoes transformation during the Reinforcement Learning run.We recommend keeping the other parameters at their default values.
###Code
# set all "reinforcement learning"-specific run parameters
configuration["parameters"]["reinforcement_learning"] = {
"prior": os.path.join(reinvent_dir, "data/augmented.prior"), # path to the pre-trained model
"agent": os.path.join(reinvent_dir, "data/augmented.prior"), # path to the pre-trained model
"n_steps": 1000, # the number of epochs (steps) to be performed; often 1000
"sigma": 128, # used to calculate the "augmented likelihood", see publication
"learning_rate": 0.0001, # sets how strongly the agent is influenced by each epoch
"batch_size": 128, # specifies how many molecules are generated per epoch
"reset": 0, # if not '0', the reset the agent if threshold reached to get
# more diverse solutions
"reset_score_cutoff": 0.5, # if resetting is enabled, this is the threshold
"margin_threshold": 50 # specify the (positive) margin between agent and prior
}
###Output
_____no_output_____
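###Markdown
For intuition on how `sigma` couples the score to the likelihoods: in the REINVENT publication the Agent is pulled towards an "augmented likelihood" built from the prior likelihood and the scaled score. The sketch below is a simplified illustration (an assumption, not the project's actual implementation); it is consistent with the table printed later, where the "Target" column is approximately Prior + sigma * Score.
###Code
# Illustrative sketch only (assumption: simplified form of the published formulation)
def augmented_likelihood(prior_likelihood, score, sigma=128):
    # the prior log-likelihood is shifted upwards in proportion to the score
    return prior_likelihood + sigma * score

def agent_update_target(agent_likelihood, prior_likelihood, score, sigma=128):
    # the Agent is pulled towards the augmented likelihood via a squared difference
    return (augmented_likelihood(prior_likelihood, score, sigma) - agent_likelihood) ** 2

# with values similar to the first row of the print-out further below (Prior=-19.51, Score~0.32)
print(augmented_likelihood(-19.51, 0.32))   # roughly the reported 'Target' value
###Output
_____no_output_____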
###Markdown
F) Define the scoring functionWe will use a `custom_product` type. The component types included are:* `predictive_property`, which is the target activity against _Aurora_ kinase represented by the predictive `regression` model. Note that we set the weight of this component to be the highest.* `qed_score`, the implementation of QED in RDKit. It biases the generation of molecules towards more "drug-like" chemical space; depending on the use case this can have a beneficial or detrimental effect.* `custom_alerts`, for which the `"smiles"` field can hold either SMILES or SMARTS patternsNote: The model used in this example is a regression model
###Code
# prepare the scoring function definition and add at the end
scoring_function = {
"name": "custom_product", # this is our default one (alternative: "custom_sum")
"parallel": False, # sets whether components are to be executed
# in parallel; note, that python uses "False" / "True"
# but the JSON "false" / "true"
# the "parameters" list holds the individual components
"parameters": [
# add component: an activity model
{
"component_type": "predictive_property", # this is a scikit-learn model, returning
# activity values
"name": "Aurora kinase", # arbitrary name for the component
"weight": 6, # the weight ("importance") of the component (default: 1)
"model_path": os.path.join(reinvent_dir, "data/Aurora_model.pkl"), # absolute model path
"smiles": [], # list of SMILES (not required for this component)
"specific_parameters": {
"transformation_type": "sigmoid", # see description above
"high": 9, # parameter for sigmoid transformation
"low": 4, # parameter for sigmoid transformation
"k": 0.25, # parameter for sigmoid transformation
"scikit": "regression", # model can be "regression" or "classification"
"transformation": True, # enable the transformation
"descriptor_type": "ecfp_counts", # sets the input descriptor for this model
"size": 2048, # parameter of descriptor type
"radius": 3, # parameter of descriptor type
"use_counts": True, # parameter of descriptor type
"use_features": True # parameter of descriptor type
}
},
# add component: QED
{
"component_type": "qed_score", # this is the QED score as implemented in RDKit
"name": "QED", # arbitrary name for the component
"weight": 2, # the weight ("importance") of the component (default: 1)
"model_path": None,
"smiles": None
},
# add component: enforce to NOT match a given substructure
{
"component_type": "custom_alerts",
"name": "Custom alerts", # arbitrary name for the component
"weight": 1, # the weight of the component (default: 1)
"model_path": None, # not required; note, this is "null" in JSON
"smiles": [ # specify the substructures (as list) to penalize
"[*;r8]",
"[*;r9]",
"[*;r10]",
"[*;r11]",
"[*;r12]",
"[*;r13]",
"[*;r14]",
"[*;r15]",
"[*;r16]",
"[*;r17]",
"[#8][#8]",
"[#6;+]",
"[#16][#16]",
"[#7;!n][S;!$(S(=O)=O)]",
"[#7;!n][#7;!n]",
"C#C",
"C(=[O,S])[O,S]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#7;!n]",
"[#7;!n][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#16;!s]",
"[#8;!o][C;!$(C(=[O,N])[N,O])][#8;!o]",
"[#16;!s][C;!$(C(=[O,N])[N,O])][#16;!s]"
],
"specific_parameters": None # not required; note, this is "null" in JSON
}]
}
configuration["parameters"]["scoring_function"] = scoring_function
###Output
_____no_output_____
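###Markdown
For intuition on the `high`/`low`/`k` parameters of the score transformation: the raw regression output (a pXC50-like value) is squashed into the [0, 1] range so that it can be combined with the other components. The sketch below assumes a generic logistic curve centred between `low` and `high`; REINVENT's exact parametrization may differ, so treat it purely as an illustration.
###Code
import math

# Illustrative only (assumption): a generic logistic curve centred between `low` and `high`;
# the steepness scaling is chosen for the illustration, not taken from REINVENT.
def sigmoid_transform(raw_value, low=4.0, high=9.0, k=0.25):
    midpoint = (low + high) / 2.0
    steepness = k * 10.0 / (high - low)
    return 1.0 / (1.0 + math.exp(-steepness * (raw_value - midpoint)))

for raw in (4.0, 6.5, 9.0):
    print(f"raw prediction {raw:>4} -> transformed score {sigmoid_transform(raw):.3f}")
###Output
_____no_output_____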
###Markdown
NOTE: Getting the selectivity score component to reach satisfactory levels is non-trivial and might take considerably higher number of steps 3. Write out the configuration We now have successfully filled the dictionary and will write it out as a `JSON` file in the output directory. Please have a look at the file before proceeding in order to see how the paths have been inserted where required and the `dict` -> `JSON` translations (e.g. `True` to `true`) have taken place.
###Code
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "RL_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
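###Markdown
As an optional sanity check (not part of the original demo), the file can be read back in to confirm that it is valid JSON and that the expected fields survived the round trip.
###Code
# optional: re-load the configuration to confirm the file is valid JSON
with open(configuration_JSON_path) as f:
    reloaded = json.load(f)
print(reloaded["run_type"])
print([component["name"] for component in reloaded["parameters"]["scoring_function"]["parameters"]])
###Output
_____no_output_____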
###Markdown
4. Run `REINVENT`Now it is time to execute `REINVENT` locally. Note that, depending on the number of epochs (steps) and the execution time of the scoring function components, this might take a while. The command-line execution looks like this:``` activate environmentconda activate reinvent_shared.v2.1 execute REINVENTpython <your_path>/input.py <configuration>.json```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
# prepare the output to be parsed
list_epochs = re.findall(r'INFO.*?local', captured_err_stream.stdout, re.DOTALL)
data = [epoch for idx, epoch in enumerate(list_epochs) if idx in [1, 75, 124]]
data = ["\n".join(element.splitlines()[:-1]) for element in data]
###Output
_____no_output_____
###Markdown
Below you see the print-out of the first epoch, one from the middle and the last epoch, respectively. Note that the fraction of valid `SMILES` is high right from the start (because we use a pre-trained prior). You can see the partial scores for each component for the first couple of compounds, but the most important information is the average score. You can clearly see how it increases over time.
###Code
for element in data:
print(element)
###Output
INFO
Step 0 Fraction valid SMILES: 99.2 Score: 0.2306 Time elapsed: 0 Time left: 0.0
Agent Prior Target Score SMILES
-19.51 -19.51 21.92 0.32 n1cnc(N2CCN(C)CC2)c2c(-c3ccccc3)c(-c3ccccc3)oc12
-53.61 -53.61 -53.61 0.00 c1c(-c2ccccc2)c(C)cc(CCC2COC3C(NC(C(C)NC)=O)(OC)COCC3(O)C2)c1
-32.90 -32.90 15.19 0.38 c1cc(C(Nc2cc(N=c3[nH]ccc(-c4ccnc(-c5cccnc5)c4)n3)ccc2)=O)ccc1NC(=O)C=C
-18.69 -18.69 28.08 0.37 OC1C(O)C(n2c3c(nc2)c(=Nc2ccccc2)[nH]cn3)OC1CO
-24.39 -24.39 16.43 0.32 O=C(c1c(C(=O)NCCC)nc[nH]1)N=c1c(C)ccc[nH]1
-34.69 -34.69 10.84 0.36 C1CC(CNC(=O)C(N)Cc2ccc(OC)cc2)CCN1C(=O)Cn1cccn1
-21.42 -21.42 -21.42 0.00 c1(=Cc2[nH]c(=O)[nH]c2O)c2nc(Nc3cccc(C#C)c3)cc(=NC3CC3)n2nc1
-23.29 -23.29 13.14 0.28 n1(-c2ccc([N+](=O)[O-])cc2)c(O)c(C2=c3ccccc3=NC2=O)c2ccccc12
-28.67 -28.67 -28.67 0.00 c1cccc2c1CC1(C2=O)OC(c2ccccc2)(c2ccccc2)OO1
-59.00 -59.00 -59.00 0.00 O(C1(OC(C)C(NC(C(NC(COC(CCC(=C)C2CCC(C=C)(C)C(C(=O)C)C2)=O)=O)=O)C)=O)CCCCC1)C
Aurora kinase Custom alerts
0.32369956374168396 1.0
0.3522017002105713 0.0
0.37566933035850525 1.0
0.3654446601867676 1.0
0.31891217827796936 1.0
0.355679452419281 1.0
0.39393407106399536 0.0
0.2846657335758209 1.0
0.2996329665184021 0.0
0.3963213264942169 0.0
INFO
Step 74 Fraction valid SMILES: 97.7 Score: 0.3038 Time elapsed: 26 Time left: 321.0
Agent Prior Target Score SMILES
-31.52 -32.78 -32.78 0.00 n1(C)ncc(-c2ccc3ncc(NCc4cccc(SC)c4)c3c2)c1
-19.84 -18.95 22.67 0.33 COc1c(OC)cc(C(N(CC)CC)=O)cc1OC
-17.59 -18.34 20.06 0.30 O(c1cc(C)c(N)c(C)c1)C
-46.62 -46.57 3.88 0.39 O1CCC2(CC1)C1(CCN(C(c3c4ncncc4ccc3)CF)CC1)NC(=O)C2
-30.05 -31.16 9.96 0.32 c1ccc(Nc2ccc(CN3CCN(C)CC3)cc2)c(-c2sc3c(n2)cncc3)c1
-35.30 -36.26 27.78 0.50 c1c(NC(=O)N=c2cc(C)[nH]c(N3CCN(c4cccc(OC)c4)CC3)n2)cc(Cl)c(Cl)c1
-31.53 -29.53 13.16 0.33 c1ccccc1CN(C(C)=O)CCC(Nc1ccc(OCC)cc1)=O
-37.70 -38.49 0.82 0.31 C(CN)CCC1C(=O)N(CCC2CCCC2)Cc2c(ccc(OCC(=NO)O)c2)C1
-31.83 -32.79 25.84 0.46 c1(-c2cc(C(=O)NCC3CCO3)c3cnn(C)c3n2)ccc(F)c(F)c1
-27.14 -25.95 -25.95 0.00 C(N1CCC(NC(NC(=O)c2c([N+]([O-])=O)cccc2)=S)CC1)c1ccccc1
Aurora kinase Custom alerts
0.0 0.0
0.32515692710876465 1.0
0.30000120401382446 1.0
0.3941750228404999 1.0
0.321297287940979 1.0
0.5003294944763184 1.0
0.3334886431694031 1.0
0.3071417510509491 1.0
0.45798826217651367 1.0
0.28031840920448303 0.0
INFO
Step 123 Fraction valid SMILES: 99.2 Score: 0.3181 Time elapsed: 44 Time left: 311.2
Agent Prior Target Score SMILES
-27.24 -25.90 -25.90 0.00 C(O)(Nc1cc(C)ccc1)c1cn(-c2ccc(OC(F)F)cc2)c(O)c1
-22.40 -22.08 21.31 0.34 c1c2c(ccc1Cl)C(=O)N(C)Cc1n-2cnc1Br
-38.70 -36.53 -36.53 0.00 C1C(O)C(O)C2OC(=O)C(=C)C2CN1S(=O)(=O)CCC
-29.12 -30.39 12.22 0.33 C1N(Cc2ccc(OCCCCN3c4ccccc4CCc4c3cc(Cl)cc4)cc2)CCCC1
-35.30 -37.37 16.22 0.42 c1c(C(C)C)c(N2CCN(C(=O)CCc3c(Cl)onc3C)CC2)n2cc(C(NCC)=O)ccc12
-39.06 -40.07 8.68 0.38 S(c1c2ccc(-c3n[nH]nn3)cc2c(OC)cn1)c1cc2c(c(Cl)c1)OCCO2
-30.11 -28.48 -28.48 0.00 N(CC)(CCOC(=O)c1ccc(Cl)c(Cl)c1)CC=C
-23.56 -24.55 14.09 0.30 c1(S(=O)(N(CC)CC)=O)ccc(C(=O)Nc2ccc(S(C)(=O)=O)cc2)cc1
-28.55 -28.66 15.51 0.35 c1(COc2ccc(CN3CCS(=O)(=O)CC3)cc2)c2ccccc2n[nH]1
-30.33 -32.81 18.71 0.40 C1CC(CC(=O)Nc2cc3c(nc[nH]c3=Nc3ccc(OCc4ccccc4)c(Cl)c3)cc2OC)CCN1Cc1cc(Cl)c(Cl)cc1
Aurora kinase Custom alerts
0.42681893706321716 0.0
0.3390008211135864 1.0
0.3794780969619751 0.0
0.3328739106655121 1.0
0.4186704456806183 1.0
0.380911260843277 1.0
0.43136125802993774 0.0
0.30190280079841614 1.0
0.34505319595336914 1.0
0.40252068638801575 1.0
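###Markdown
If you prefer numbers over scanning the log, the average score of every logged step can be pulled out of the captured text with a small regular expression. This assumes the `Score:` formatting shown in the print-out above stays unchanged.
###Code
# pull the reported average score out of every logged step (format as printed above)
step_scores = [float(s) for s in re.findall(r"Score:\s+([0-9.]+)", captured_err_stream.stdout)]
if step_scores:
    print(f"{len(step_scores)} logged steps; first score {step_scores[0]:.4f}, last score {step_scores[-1]:.4f}")
###Output
_____no_output_____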
###Markdown
5. Analyse the resultsIn order to analyze the run in a more intuitive way, we can use `tensorboard`:``` go to the root folder of the outputcd <your_path>/REINVENT_RL_demo make sure you have activated the proper environmentconda activate reinvent_shared.v2.1 start tensorboardtensorboard --logdir progress.log```Then copy the link provided to a browser window, e.g. "http://workstation.url.com:6006/". The following figures are example plots - remember that there is always some randomness involved. In `tensorboard` you can monitor the individual scoring function components. The score for predicted Aurora Kinase activity.The average score over time.It might also be informative to look at the results from the prior (dark blue), the agent (blue) and the augmented likelihood (purple) over time.And last but not least, there is an "Images" tab available that lets you browse through the generated compounds in an easy way. In the molecules, the substructure matches that were defined to be required are highlighted in red (if present). Also, the total scores are given per molecule. The results folder will hold four different files: the agent (pickled), the input JSON (just for reference purposes), the memory (highest scoring compounds in `CSV` format) and the scaffold memory (in `CSV` format).
###Code
!head -n 15 {output_dir}/results/memory.csv
###Output
,smiles,score,likelihood
65,C(CCCn1cc(C(C)(C)C)c2c(C(C)C)cc(C(C)C)cc2c1=O)C(=O)N=c1nc[nH][nH]1,0.3286117,-50.641468
70,C1C(N(CCC)CCC)Cc2cccc3[nH]c(=O)n(c32)C1,0.32649106,-18.146914
26,O1c2c(nc(OC)cc2)C(C(NCCCN(C)C)=O)(Cc2ccccc2)c2ccccc21,0.32437962,-35.405247
60,c1c(C(CNCCc2ccc(NS(=O)(c3ccc(-c4oc(Cc5c[nH]c(=N)s5)cc4)nc3)=O)cc2)O)c[nH]c(=N)c1,0.32314676,-38.32259
99,c1c2c(cc(Cl)c1Cl)C(CC(=O)c1cnn(C)c1)(O)C(=O)N2,0.31027606,-27.762121
11,c1c(O)c(C(Cc2ccc(Cl)cc2)=O)cc(O)c1Oc2c(O)cc(O)cc2CCC(O)c1cc(O)c(OC)cc1,0.30576745,-52.903526
32,c1(C(NC(Cc2ccccc2)C(C(NCCN2CCOCC2)=O)=O)=O)cc(C(=O)NS(Cc2ccccc2)(=O)=O)c(NCCC)s1,0.30178678,-43.933296
1,c1c(C(C)C)ccc(NC(c2cc3c(cc2)[nH]c2c(C(N)=O)ccc(O)c23)=O)c1,0.30052438,-31.108843
108,c1(C(C(F)(F)F)(F)F)cc(Cn2c3cccc(NC(c4n5ccc(OCCN6CCN(C)CC6)cc5nc4)=O)c3c(CC)n2)ccc1,0.29700187,-34.311478
118,c1ccc(C(COc2ccc3c(occ(Oc4ccccc4)c3=O)c2)(O)C(N2CCCCC2)C)cc1F,0.29602197,-45.389744
109,C1CN(CC(CNC(c2ccc3n(c(=O)cc(C)n3)c2)=O)O)CCC1Cc1ccccc1,0.29525602,-29.0487
19,c1cc(CC2C(OCC3CC3)CCN(c3ncncc3)C2)ccc1OC,0.29047668,-26.524956
109,c1cc(CN2Cc3c(cccc3)CC2)ccc1Cc1n[nH]c(=O)c2c1CCCC2,0.2882794,-24.313461
0,c1c2c(ccc1OCCN(C)CCc1ccc(NS(=O)(C)=O)cc1)CCC2,0.28584373,-26.92916
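###Markdown
As a follow-up, `memory.csv` can also be loaded with `pandas` for further analysis; the column names (`smiles`, `score`, `likelihood`) are the ones shown in the head above.
###Code
import pandas as pd

# load the memory of highest-scoring compounds written by the run
memory = pd.read_csv(os.path.join(output_dir, "results", "memory.csv"), index_col=0)
print(memory.shape)
print(memory["score"].describe())
print(memory.nlargest(5, "score")[["smiles", "score"]])
###Output
_____no_output_____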
|
.ipynb_checkpoints/0_visualizing_Data-checkpoint.ipynb | ###Markdown
Bar Charts
###Code
import matplotlib.pyplot as plt  # needed for all of the plots in this notebook

movies = ["Annie Hall", "Ben-Hur", "Casablanca", "Gandhi", "West Side Story"]
num_oscars = [5, 11, 3, 8, 10]
# bars are by default width 0.8, so we'll add 0.1 to the left coordinates
# so that each bar is centered
xs = [i + 0.1 for i, _ in enumerate(movies)]
# plot bars with left x-coordinates [xs], heights [num_oscars]
plt.bar(xs, num_oscars)
plt.ylabel("# of Academy Awards")
plt.title("My Favourite Movies")
# label x-axis with movie names at bar centers
plt.xticks([i + .1 for i, _ in enumerate(movies)], movies);
###Output
_____no_output_____
###Markdown
Histograms
###Code
from collections import Counter
grades = [83,95,91,87,70,0,85,82,100,67,73,77,0]
# bucket grade by decile, but place the 100 in the 90s
histogram = Counter(min(grade//10 * 10, 90) for grade in grades)
histogram
plt.bar([x + 5 for x in histogram.keys()],  # shift bars to the right by 5
        histogram.values(),                 # give the bars their correct heights
        10,                                 # give each bar a width of 10
        edgecolor=(0, 0, 0))                # black edges for the bars
plt.axis([-5, 105, 0, 5])
plt.xticks([i for i in range(0,110,10)]);
plt.xlabel("Decile")
plt.ylabel("# of Students")
plt.title("Distribution of Exam 1 Grades")
###Output
_____no_output_____
###Markdown
Line Charts
###Code
variance = [1, 2, 4, 8, 16, 32, 64, 128, 256]
bias_squared = [256, 128, 64, 32, 16, 8, 4, 2, 1]
total_error = [x + y for x,y in zip(variance, bias_squared)]
xs = [i for i, _ in enumerate(variance)]
# we can make multiple calls to plt.plot
# to show multiple series on the same chart
plt.figure(figsize=(10,6))
plt.plot(xs, variance, 'g-', label = 'variance') # green solid line
plt.plot(xs, bias_squared, 'r-', label = 'bias^2') # red solid line
plt.plot(xs, total_error, 'b:', label = 'total error') # blue dotted line
# because we've assigned labels to each series
# we can get a legend for free
# loc=9 means "top center"
plt.legend(loc=9)
plt.xlabel("model complexity")
plt.title("The Bias-Variance tradeoff")
###Output
_____no_output_____
###Markdown
Scatterplots
###Code
friends = [ 70, 65, 72, 63, 71, 64, 60, 64, 67]
minutes = [175, 170, 205, 120, 220, 130, 105, 145, 190]
labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i']
plt.scatter(friends, minutes)
# label each point
for label, friend_count, minute_count in zip(labels, friends, minutes):
plt.annotate(label,
xy=(friend_count,minute_count), # put the label with its point
xytext=(5, -5),
textcoords="offset points")
plt.title("Daily Minutes vs. Number of Friends");
plt.xlabel("# of Friends")
plt.ylabel("daily minutes spend on the website");
#plt.axis('equal')
###Output
_____no_output_____
###Markdown
A warning: when scattering comparable variables, you can get a misleading picture if the axes aren't on the same scale
###Code
test_1_grades = [ 99, 90, 85, 97, 80]
test_2_grades = [100, 85, 60, 90, 70]
plt.scatter(test_1_grades, test_2_grades)
#plt.title("Axes Aren't Comparable")
plt.xlabel("test 1 grade")
plt.ylabel("test 2 grade")
#plt.axis("equal") # turn this command on and off to see the difference
plt.show() # for ipython
%whos
###Output
Variable Type Data/Info
------------------------------------
Counter type <class 'collections.Counter'>
bias_squared list n=9
friend_count int 67
friends list n=9
gdp list n=7
grades list n=13
histogram Counter Counter({80: 4, 90: 3, 70: 3, 0: 2, 60: 1})
label str i
labels list n=9
minute_count int 190
minutes list n=9
movies list n=5
num_oscars list n=5
plt module <module 'matplotlib.pyplo<...>\\matplotlib\\pyplot.py'>
test_1_grades list n=5
test_2_grades list n=5
total_error list n=9
variance list n=9
xs list n=9
years list n=7
|
10 - Random Forests/random-forests.ipynb | ###Markdown
IntroductionDecision trees leave you with a difficult decision. A deep tree with lots of leaves will overfit because each prediction is coming from historical data from only the few houses at its leaf. But a shallow tree with few leaves will perform poorly because it fails to capture as many distinctions in the raw data.Even today's most sophisticated modeling techniques face this tension between underfitting and overfitting. But, many models have clever ideas that can lead to better performance. We'll look at the **random forest** as an example.The random forest uses many trees, and it makes a prediction by averaging the predictions of each component tree. It generally has much better predictive accuracy than a single decision tree and it works well with default parameters. If you keep modeling, you can learn more models with even better performance, but many of those are sensitive to getting the right parameters. ExampleYou've already seen the code to load the data a few times. At the end of data-loading, we have the following variables:- train_X- val_X- train_y- val_y
###Code
import pandas as pd
# Load data
melbourne_file_path = '../input/melbourne-housing-snapshot/melb_data.csv'
melbourne_data = pd.read_csv(melbourne_file_path)
# Filter rows with missing values
melbourne_data = melbourne_data.dropna(axis=0)
# Choose target and features
y = melbourne_data.Price
melbourne_features = ['Rooms', 'Bathroom', 'Landsize', 'BuildingArea',
'YearBuilt', 'Lattitude', 'Longtitude']
X = melbourne_data[melbourne_features]
from sklearn.model_selection import train_test_split
# split data into training and validation data, for both features and target
# The split is based on a random number generator. Supplying a numeric value to
# the random_state argument guarantees we get the same split every time we
# run this script.
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
We build a random forest model similarly to how we built a decision tree in scikit-learn - this time using the `RandomForestRegressor` class instead of `DecisionTreeRegressor`.
###Code
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
forest_model = RandomForestRegressor(random_state=1)
forest_model.fit(train_X, train_y)
melb_preds = forest_model.predict(val_X)
print(mean_absolute_error(val_y, melb_preds))
###Output
191669.7536453626
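###Markdown
The introduction notes that many models are sensitive to their parameters. As an optional, illustrative next step (not part of the original lesson), you can check how the validation MAE reacts to the forest's main knob, the number of trees; the values below are arbitrary choices.
###Code
# optional: compare validation MAE for a few forest sizes (illustrative values)
for n_trees in [10, 50, 100, 250]:
    tuned_model = RandomForestRegressor(n_estimators=n_trees, random_state=1)
    tuned_model.fit(train_X, train_y)
    tuned_mae = mean_absolute_error(val_y, tuned_model.predict(val_X))
    print(f"n_estimators={n_trees:>3}  validation MAE={tuned_mae:,.0f}")
###Output
_____no_output_____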
|
Models/Training utils/LSTM.ipynb | ###Markdown
Prepare data
###Code
import os

import numpy as np
import pandas as pd

path_train = '../Datasets/Videos/lstm/train/'
path_test = '../Datasets/Videos/lstm/test/'
X_train = []
X_test = []
y_train = []
y_test = []
for file in os.listdir(path_train):
if file.endswith(".csv"):
data = pd.read_csv(path_train + file).drop('Unnamed: 0', axis=1)
pos = 9
while pos < data.shape[0]:
X_train.append(data.drop('label', axis=1).values[pos-9: pos+1, :])
y_train.append(data['label'].iloc[pos])
pos += 1
for file in os.listdir(path_test):
if file.endswith(".csv"):
data = pd.read_csv(path_test + file).drop('Unnamed: 0', axis=1)
pos = 9
while pos < data.shape[0]:
X_test.append(data.drop('label', axis=1).values[pos-9: pos+1, :])
y_test.append(data['label'].iloc[pos])
pos += 1
X_train = np.array(X_train)
X_test = np.array(X_test)
def label_to_float(x):
return 0.0 if x == 'fire' else 1.0
y_train = np.array([label_to_float(x) for x in y_train])
y_test = np.array([label_to_float(x) for x in y_test])
scale = np.abs(X_train).max()
X_train /= scale
X_test /= scale
###Output
_____no_output_____
###Markdown
Train model
###Code
# Keras imports (assumption: the TensorFlow-bundled Keras; adjust if standalone Keras is used)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

n_timesteps = 10
n_features = 640
model = Sequential()
model.add(LSTM(100, input_shape=(n_timesteps, n_features)))
model.add(Dropout(0.5))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=30, batch_size=64, verbose=1)
model.save('Trained models/LSTM #1')
results = model.evaluate(X_test, y_test, batch_size=64)
print('test loss, test acc:', results)
from sklearn.metrics import classification_report
y_pred = model.predict(X_test, batch_size=64, verbose=1)
y_pred[y_pred <= 0.5] = 0
y_pred[y_pred > 0.5] = 1
print(classification_report(y_test, y_pred))
###Output
276/276 [==============================] - 0s 1ms/step
precision recall f1-score support
0.0 0.94 0.97 0.96 260
1.0 0.00 0.00 0.00 16
accuracy 0.92 276
macro avg 0.47 0.49 0.48 276
weighted avg 0.89 0.92 0.90 276
###Markdown
2 layers
###Code
n_timesteps = 10
n_features = 640
model2 = Sequential()
model2.add(LSTM(100, input_shape=(n_timesteps, n_features), return_sequences=True))
model2.add(Dropout(0.5))
model2.add(LSTM(200, return_sequences=False))
model2.add(Dropout(0.5))
model2.add(Dense(100, activation='relu'))
model2.add(Dense(1, activation='sigmoid'))
model2.summary()
model2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist2 = model2.fit(X_train, y_train, epochs=30, batch_size=64, verbose=1)
model2.save('Trained models/LSTM #2')
results2 = model2.evaluate(X_test, y_test, batch_size=64)
print('test loss, test acc:', results2)
from sklearn.metrics import classification_report
y_pred = model2.predict(X_test, batch_size=64, verbose=1)
y_pred[y_pred <= 0.5] = 0
y_pred[y_pred > 0.5] = 1
print(classification_report(y_test, y_pred))
###Output
276/276 [==============================] - 0s 661us/step
precision recall f1-score support
0.0 0.94 0.97 0.96 260
1.0 0.11 0.06 0.08 16
accuracy 0.92 276
macro avg 0.53 0.52 0.52 276
weighted avg 0.90 0.92 0.91 276
###Markdown
3 layers
###Code
n_timesteps = 10
n_features = 640
model3 = Sequential()
model3.add(LSTM(100, input_shape=(n_timesteps, n_features), return_sequences=True))
model3.add(Dropout(0.5))
model3.add(LSTM(200, return_sequences=True))
model3.add(Dropout(0.5))
model3.add(LSTM(200, return_sequences=False))
model3.add(Dropout(0.5))
model3.add(Dense(100, activation='relu'))
model3.add(Dense(1, activation='sigmoid'))
model3.summary()
model3.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist2 = model3.fit(X_train, y_train, epochs=30, batch_size=64, verbose=1)
model3.save('Trained models/LSTM #3')
results3 = model3.evaluate(X_test, y_test, batch_size=64)
print('test loss, test acc:', results3)
from sklearn.metrics import classification_report
y_pred = model3.predict(X_test, batch_size=64, verbose=1)
y_pred[y_pred <= 0.5] = 0
y_pred[y_pred > 0.5] = 1
print(classification_report(y_test, y_pred))
###Output
276/276 [==============================] - 1s 2ms/step
precision recall f1-score support
0.0 0.94 0.95 0.95 260
1.0 0.08 0.06 0.07 16
accuracy 0.90 276
macro avg 0.51 0.51 0.51 276
weighted avg 0.89 0.90 0.90 276
###Markdown
Mega
###Code
n_timesteps = 10
n_features = 640
mega = Sequential()
mega.add(LSTM(100, input_shape=(n_timesteps, n_features), return_sequences=True))
mega.add(Dropout(0.5))
mega.add(LSTM(200, return_sequences=True))
mega.add(Dropout(0.5))
mega.add(LSTM(300, return_sequences=True))
mega.add(Dropout(0.5))
mega.add(LSTM(100, return_sequences=False))
mega.add(Dropout(0.5))
mega.add(Dense(100, activation='relu'))
mega.add(Dense(1, activation='sigmoid'))
mega.summary()
mega.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
hist_mega = mega.fit(X_train, y_train, epochs=30, batch_size=64, verbose=1)
mega.save('Trained models/LSTM #mega')
from sklearn.metrics import classification_report
y_pred = mega.predict(X_test, batch_size=64, verbose=1)
y_pred[y_pred <= 0.5] = 0
y_pred[y_pred > 0.5] = 1
print(classification_report(y_test, y_pred))
results_mega = mega.evaluate(X_test, y_test, batch_size=64)
print('test loss, test acc:', results_mega)
###Output
276/276 [==============================] - 1s 3ms/step
test loss, test acc: [0.42382233384726703, 0.9202898740768433]
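###Markdown
A possible follow-up, not part of the original notebook: the classification reports above show that the minority class (label 1.0, only 16 test windows vs. 260) is almost never predicted. One common mitigation is to give that class a larger weight during training; the weight value below is an illustrative guess, not a tuned setting.
###Code
# Illustrative sketch: re-fit the single-layer model with a class weight on the minority class
weighted_model = Sequential()
weighted_model.add(LSTM(100, input_shape=(n_timesteps, n_features)))
weighted_model.add(Dropout(0.5))
weighted_model.add(Dense(100, activation='relu'))
weighted_model.add(Dense(1, activation='sigmoid'))
weighted_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
weighted_model.fit(X_train, y_train, epochs=30, batch_size=64, verbose=1,
                   class_weight={0: 1.0, 1: 10.0})   # up-weight the rare class (guessed value)
###Output
_____no_output_____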
|
03-Preprocessing_ConnectomeDB.ipynb | ###Markdown
Preprocess ConnectomeDB The script in this file can be used to extract the type of .nii.gz images desired from the files downloaded from the Connectome Database. ConnectomeDB files can be downloaded once registered in https://db.humanconnectome.org/
###Code
# Required libraries are imported
import os
import glob
import zipfile
# Path where the downloaded patient data are stored
#path = 'E:/Z-MRI/Test'
path = 'E:/Z-MRI/3T_MRI'
# Path where the files we want will be extracted
output_path = 'E:/Z-MRI/3T_T1w'
# Filename pattern of the files we want to extract
pattern = '3T_T1w_MPR1.nii.gz'
files = glob.glob(path + '/*.zip')
# Show first file
files[0]
# Show the number of files
print("The number of files is %d." % len(files))
# Reference:
# https://stackoverflow.com/questions/4917284/extract-files-from-zip-without-keeping-the-structure-using-python-zipfile
for file in files:
with zipfile.ZipFile(file) as z:
for member in z.infolist():
filename = os.path.basename(member.filename)
# skip directories
if not filename:
continue
# extract file
if (filename[-len(pattern):] == pattern):
print("Extracting " + filename)
# The full path is replaced by the filename only
member.filename = os.path.basename(member.filename)
z.extract(member, output_path)
###Output
Extracting 100206_3T_T1w_MPR1.nii.gz
Extracting 100307_3T_T1w_MPR1.nii.gz
Extracting 100408_3T_T1w_MPR1.nii.gz
Extracting 100610_3T_T1w_MPR1.nii.gz
Extracting 101006_3T_T1w_MPR1.nii.gz
Extracting 101107_3T_T1w_MPR1.nii.gz
Extracting 101309_3T_T1w_MPR1.nii.gz
Extracting 101410_3T_T1w_MPR1.nii.gz
Extracting 101915_3T_T1w_MPR1.nii.gz
Extracting 102008_3T_T1w_MPR1.nii.gz
Extracting 102109_3T_T1w_MPR1.nii.gz
Extracting 102311_3T_T1w_MPR1.nii.gz
Extracting 102513_3T_T1w_MPR1.nii.gz
Extracting 102614_3T_T1w_MPR1.nii.gz
Extracting 102715_3T_T1w_MPR1.nii.gz
Extracting 102816_3T_T1w_MPR1.nii.gz
Extracting 103010_3T_T1w_MPR1.nii.gz
Extracting 103111_3T_T1w_MPR1.nii.gz
Extracting 103212_3T_T1w_MPR1.nii.gz
Extracting 103414_3T_T1w_MPR1.nii.gz
Extracting 103515_3T_T1w_MPR1.nii.gz
Extracting 103818_3T_T1w_MPR1.nii.gz
Extracting 104012_3T_T1w_MPR1.nii.gz
Extracting 104416_3T_T1w_MPR1.nii.gz
Extracting 104820_3T_T1w_MPR1.nii.gz
Extracting 105014_3T_T1w_MPR1.nii.gz
Extracting 105115_3T_T1w_MPR1.nii.gz
Extracting 105216_3T_T1w_MPR1.nii.gz
Extracting 105620_3T_T1w_MPR1.nii.gz
Extracting 105923_3T_T1w_MPR1.nii.gz
Extracting 106016_3T_T1w_MPR1.nii.gz
Extracting 106319_3T_T1w_MPR1.nii.gz
Extracting 106521_3T_T1w_MPR1.nii.gz
Extracting 106824_3T_T1w_MPR1.nii.gz
Extracting 107018_3T_T1w_MPR1.nii.gz
Extracting 107220_3T_T1w_MPR1.nii.gz
Extracting 107321_3T_T1w_MPR1.nii.gz
Extracting 107422_3T_T1w_MPR1.nii.gz
Extracting 107725_3T_T1w_MPR1.nii.gz
Extracting 108020_3T_T1w_MPR1.nii.gz
Extracting 108121_3T_T1w_MPR1.nii.gz
Extracting 108222_3T_T1w_MPR1.nii.gz
Extracting 108323_3T_T1w_MPR1.nii.gz
Extracting 108525_3T_T1w_MPR1.nii.gz
Extracting 108828_3T_T1w_MPR1.nii.gz
Extracting 109123_3T_T1w_MPR1.nii.gz
Extracting 109325_3T_T1w_MPR1.nii.gz
Extracting 109830_3T_T1w_MPR1.nii.gz
Extracting 110007_3T_T1w_MPR1.nii.gz
Extracting 110411_3T_T1w_MPR1.nii.gz
Extracting 110613_3T_T1w_MPR1.nii.gz
Extracting 111009_3T_T1w_MPR1.nii.gz
Extracting 111211_3T_T1w_MPR1.nii.gz
Extracting 111312_3T_T1w_MPR1.nii.gz
Extracting 111413_3T_T1w_MPR1.nii.gz
Extracting 111514_3T_T1w_MPR1.nii.gz
Extracting 111716_3T_T1w_MPR1.nii.gz
Extracting 112112_3T_T1w_MPR1.nii.gz
Extracting 112314_3T_T1w_MPR1.nii.gz
Extracting 112516_3T_T1w_MPR1.nii.gz
Extracting 112819_3T_T1w_MPR1.nii.gz
Extracting 112920_3T_T1w_MPR1.nii.gz
Extracting 113215_3T_T1w_MPR1.nii.gz
Extracting 113316_3T_T1w_MPR1.nii.gz
Extracting 113417_3T_T1w_MPR1.nii.gz
Extracting 113619_3T_T1w_MPR1.nii.gz
Extracting 113821_3T_T1w_MPR1.nii.gz
Extracting 113922_3T_T1w_MPR1.nii.gz
Extracting 114116_3T_T1w_MPR1.nii.gz
Extracting 114217_3T_T1w_MPR1.nii.gz
Extracting 114318_3T_T1w_MPR1.nii.gz
Extracting 114419_3T_T1w_MPR1.nii.gz
Extracting 114621_3T_T1w_MPR1.nii.gz
Extracting 114823_3T_T1w_MPR1.nii.gz
Extracting 114924_3T_T1w_MPR1.nii.gz
Extracting 115017_3T_T1w_MPR1.nii.gz
Extracting 115219_3T_T1w_MPR1.nii.gz
Extracting 115320_3T_T1w_MPR1.nii.gz
Extracting 115724_3T_T1w_MPR1.nii.gz
Extracting 115825_3T_T1w_MPR1.nii.gz
Extracting 116120_3T_T1w_MPR1.nii.gz
Extracting 116221_3T_T1w_MPR1.nii.gz
Extracting 116423_3T_T1w_MPR1.nii.gz
Extracting 116524_3T_T1w_MPR1.nii.gz
Extracting 116726_3T_T1w_MPR1.nii.gz
Extracting 117021_3T_T1w_MPR1.nii.gz
Extracting 117122_3T_T1w_MPR1.nii.gz
Extracting 117324_3T_T1w_MPR1.nii.gz
Extracting 117728_3T_T1w_MPR1.nii.gz
Extracting 117930_3T_T1w_MPR1.nii.gz
Extracting 118023_3T_T1w_MPR1.nii.gz
Extracting 118124_3T_T1w_MPR1.nii.gz
Extracting 118225_3T_T1w_MPR1.nii.gz
Extracting 118528_3T_T1w_MPR1.nii.gz
Extracting 118730_3T_T1w_MPR1.nii.gz
Extracting 118831_3T_T1w_MPR1.nii.gz
Extracting 118932_3T_T1w_MPR1.nii.gz
Extracting 119025_3T_T1w_MPR1.nii.gz
Extracting 119126_3T_T1w_MPR1.nii.gz
Extracting 119732_3T_T1w_MPR1.nii.gz
Extracting 119833_3T_T1w_MPR1.nii.gz
Extracting 120010_3T_T1w_MPR1.nii.gz
Extracting 120111_3T_T1w_MPR1.nii.gz
Extracting 120212_3T_T1w_MPR1.nii.gz
Extracting 120414_3T_T1w_MPR1.nii.gz
Extracting 120515_3T_T1w_MPR1.nii.gz
Extracting 120717_3T_T1w_MPR1.nii.gz
Extracting 121315_3T_T1w_MPR1.nii.gz
Extracting 121416_3T_T1w_MPR1.nii.gz
Extracting 121618_3T_T1w_MPR1.nii.gz
Extracting 121719_3T_T1w_MPR1.nii.gz
Extracting 121820_3T_T1w_MPR1.nii.gz
Extracting 121921_3T_T1w_MPR1.nii.gz
Extracting 122317_3T_T1w_MPR1.nii.gz
Extracting 122418_3T_T1w_MPR1.nii.gz
Extracting 122620_3T_T1w_MPR1.nii.gz
Extracting 122822_3T_T1w_MPR1.nii.gz
Extracting 123117_3T_T1w_MPR1.nii.gz
Extracting 123420_3T_T1w_MPR1.nii.gz
Extracting 123521_3T_T1w_MPR1.nii.gz
Extracting 123723_3T_T1w_MPR1.nii.gz
Extracting 123824_3T_T1w_MPR1.nii.gz
Extracting 123925_3T_T1w_MPR1.nii.gz
Extracting 124220_3T_T1w_MPR1.nii.gz
Extracting 124422_3T_T1w_MPR1.nii.gz
Extracting 124624_3T_T1w_MPR1.nii.gz
Extracting 124826_3T_T1w_MPR1.nii.gz
Extracting 125222_3T_T1w_MPR1.nii.gz
Extracting 125424_3T_T1w_MPR1.nii.gz
Extracting 125525_3T_T1w_MPR1.nii.gz
Extracting 126325_3T_T1w_MPR1.nii.gz
Extracting 126426_3T_T1w_MPR1.nii.gz
Extracting 126628_3T_T1w_MPR1.nii.gz
Extracting 126931_3T_T1w_MPR1.nii.gz
Extracting 127226_3T_T1w_MPR1.nii.gz
Extracting 127327_3T_T1w_MPR1.nii.gz
Extracting 127630_3T_T1w_MPR1.nii.gz
Extracting 127731_3T_T1w_MPR1.nii.gz
Extracting 127832_3T_T1w_MPR1.nii.gz
Extracting 127933_3T_T1w_MPR1.nii.gz
Extracting 128026_3T_T1w_MPR1.nii.gz
Extracting 128127_3T_T1w_MPR1.nii.gz
Extracting 128329_3T_T1w_MPR1.nii.gz
Extracting 128632_3T_T1w_MPR1.nii.gz
Extracting 128935_3T_T1w_MPR1.nii.gz
Extracting 129028_3T_T1w_MPR1.nii.gz
Extracting 129129_3T_T1w_MPR1.nii.gz
Extracting 129331_3T_T1w_MPR1.nii.gz
Extracting 129432_3T_T1w_MPR1.nii.gz
Extracting 129533_3T_T1w_MPR1.nii.gz
Extracting 129634_3T_T1w_MPR1.nii.gz
Extracting 129937_3T_T1w_MPR1.nii.gz
Extracting 130013_3T_T1w_MPR1.nii.gz
Extracting 130114_3T_T1w_MPR1.nii.gz
Extracting 130316_3T_T1w_MPR1.nii.gz
Extracting 130417_3T_T1w_MPR1.nii.gz
Extracting 130518_3T_T1w_MPR1.nii.gz
Extracting 130619_3T_T1w_MPR1.nii.gz
Extracting 130720_3T_T1w_MPR1.nii.gz
Extracting 130821_3T_T1w_MPR1.nii.gz
Extracting 130922_3T_T1w_MPR1.nii.gz
Extracting 131217_3T_T1w_MPR1.nii.gz
Extracting 131419_3T_T1w_MPR1.nii.gz
Extracting 131621_3T_T1w_MPR1.nii.gz
Extracting 131722_3T_T1w_MPR1.nii.gz
Extracting 131823_3T_T1w_MPR1.nii.gz
Extracting 131924_3T_T1w_MPR1.nii.gz
Extracting 132017_3T_T1w_MPR1.nii.gz
Extracting 132118_3T_T1w_MPR1.nii.gz
Extracting 133019_3T_T1w_MPR1.nii.gz
Extracting 133625_3T_T1w_MPR1.nii.gz
Extracting 133827_3T_T1w_MPR1.nii.gz
Extracting 133928_3T_T1w_MPR1.nii.gz
Extracting 134021_3T_T1w_MPR1.nii.gz
Extracting 134223_3T_T1w_MPR1.nii.gz
Extracting 134324_3T_T1w_MPR1.nii.gz
Extracting 134425_3T_T1w_MPR1.nii.gz
Extracting 134627_3T_T1w_MPR1.nii.gz
Extracting 134728_3T_T1w_MPR1.nii.gz
Extracting 134829_3T_T1w_MPR1.nii.gz
Extracting 135124_3T_T1w_MPR1.nii.gz
Extracting 135225_3T_T1w_MPR1.nii.gz
Extracting 135528_3T_T1w_MPR1.nii.gz
Extracting 135629_3T_T1w_MPR1.nii.gz
Extracting 135730_3T_T1w_MPR1.nii.gz
Extracting 135932_3T_T1w_MPR1.nii.gz
Extracting 136126_3T_T1w_MPR1.nii.gz
Extracting 136227_3T_T1w_MPR1.nii.gz
Extracting 136631_3T_T1w_MPR1.nii.gz
Extracting 136732_3T_T1w_MPR1.nii.gz
Extracting 136833_3T_T1w_MPR1.nii.gz
Extracting 137027_3T_T1w_MPR1.nii.gz
Extracting 137128_3T_T1w_MPR1.nii.gz
Extracting 137229_3T_T1w_MPR1.nii.gz
Extracting 137431_3T_T1w_MPR1.nii.gz
Extracting 137532_3T_T1w_MPR1.nii.gz
Extracting 137633_3T_T1w_MPR1.nii.gz
Extracting 137936_3T_T1w_MPR1.nii.gz
Extracting 138130_3T_T1w_MPR1.nii.gz
Extracting 138231_3T_T1w_MPR1.nii.gz
Extracting 138332_3T_T1w_MPR1.nii.gz
Extracting 138534_3T_T1w_MPR1.nii.gz
Extracting 138837_3T_T1w_MPR1.nii.gz
Extracting 139233_3T_T1w_MPR1.nii.gz
Extracting 139435_3T_T1w_MPR1.nii.gz
Extracting 139637_3T_T1w_MPR1.nii.gz
Extracting 139839_3T_T1w_MPR1.nii.gz
Extracting 140117_3T_T1w_MPR1.nii.gz
Extracting 140319_3T_T1w_MPR1.nii.gz
Extracting 140420_3T_T1w_MPR1.nii.gz
Extracting 140824_3T_T1w_MPR1.nii.gz
Extracting 140925_3T_T1w_MPR1.nii.gz
Extracting 141119_3T_T1w_MPR1.nii.gz
Extracting 141422_3T_T1w_MPR1.nii.gz
Extracting 141826_3T_T1w_MPR1.nii.gz
Extracting 142424_3T_T1w_MPR1.nii.gz
Extracting 142828_3T_T1w_MPR1.nii.gz
Extracting 143224_3T_T1w_MPR1.nii.gz
Extracting 143325_3T_T1w_MPR1.nii.gz
Extracting 143426_3T_T1w_MPR1.nii.gz
Extracting 143527_3T_T1w_MPR1.nii.gz
Extracting 143830_3T_T1w_MPR1.nii.gz
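###Markdown
A quick optional check that the extraction produced what we expect: count the files matching the pattern that are now present in `output_path`.
###Code
# Optional check: how many files matching the pattern ended up in output_path?
extracted_files = glob.glob(os.path.join(output_path, '*' + pattern))
print("The number of extracted files is %d." % len(extracted_files))
###Output
_____no_output_____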
|
notebooks/Read_formation_tops.ipynb | ###Markdown
Read formation tops- Read tops to dictionaries- Read tops to `striplog`- Read tops to `pandas` Raw data
###Code
from striplog.utils import read_petrel
import numpy as np
fname = "../data/NAM/Formation_tops/Well_Tops.asc"
# Need striplog 0.8.8 to pass a single function.
nullify = lambda x: np.nan if x==-999 else x
data = read_petrel(fname, function=nullify)
data.keys()
###Output
_____no_output_____
###Markdown
To `pandas`
###Code
import pandas as pd
df = pd.DataFrame.from_dict(data)
df.head()
###Output
_____no_output_____
###Markdown
Read metadataWe need the CRS, among other things. Spoiler alert, it's this one: https://spatialreference.org/ref/epsg/28992/I read the metadata file, `../data/Formation_tops/Well_Tops.asc.crsmeta.xml`, in the [Read CRS metadata](./Read_CRS_metadata.ipynb) notebook. For now we'll use the EPSG code I have. To CSV
###Code
df.to_csv("../data/NAM/Formation_tops/Groningen__Formation_tops__EPSG_28992.csv", index=False)
###Output
_____no_output_____ |
ShopifyChallenge.ipynb | ###Markdown
Winter 2019 Data Science Intern Challenge Question 1 Analysis (*Scroll down to the end of Question 1 if you would prefer a summary of the answers found from the analysis below. Question 2 answers are there too.*)
###Code
#Load the data
library(tidyverse) #For dplyr and ggplot2
library(ggthemes) #Used with ggplot2
library(lubridate) #For dates
library(repr) #For sizing graphs
#Load the dataset
shopify <- read.csv("shopify.csv")
###Output
_____no_output_____
###Markdown
Notice that there's something curious happening when we inspect how many transactions are made for each transaction size:
###Code
items_count <- as.tibble(table(shopify$total_items))
colnames(items_count) <- c("total_items", "count_transactions")
items_count
###Output
_____no_output_____
###Markdown
All of the transaction sizes are 8 items or smaller, except for the 17 transactions of size 2000. It's likely that these excessively large transactions are driving up the AOV. Let's inspect this further.
###Code
manyItems <- shopify %>%
filter(total_items==2000)
#Convert created_at from factor to dttm so we can order by date
manyItems$created_at <- ymd_hms(manyItems$created_at)
#Order by date
manyItems <- manyItems[order(manyItems$created_at),]
manyItems
###Output
_____no_output_____
###Markdown
We see that all of the recorded transactions of size 2000 occurred from the same user_id (607) of the same shop_id (42), and that the order_amount is exactly the same in each case (704000). Moreover, we see that there are some days where there are multiple identical transactions, and all purchases are made at exactly 4 a.m., to the second. Either there was a mistake in the dataset with duplicate entries, or this customer is automating the process of buying shoes in bulk, which he or she will presumably sell at a higher price. There's also something fishy going on when we inspect the maximum order amount for the various transaction sizes:
###Code
shopify %>%
group_by(total_items) %>%
summarise(mean_order_amount = round(mean(order_amount), 2),
max_order_amount = max(order_amount)) %>%
mutate(fishy_observation = max_order_amount/total_items)
###Output
_____no_output_____
###Markdown
There are very large maximum order amounts for purchases of 1 item, 2 items, 3 items, 4 items and 6 items. Moreover, when each of these maximum order amounts is divided by the total items bought in its respective transaction, we get 25725. We would never expect an average pair of shoes to cost 25725, and therefore there is probably a specific recording mistake that is being repeated in the dataset. Now how do we handle these potential problems in our dataset? Let's take a look at a scatterplot which shows the order amount for each of our 5000 transactions.
###Code
#Sizing the graph output
options(repr.plot.width=8, repr.plot.height=3)
#Creating the graph
shopify %>%
ggplot(aes(x=order_id, y=order_amount)) +
geom_point(color="blue", alpha=0.2) +
labs(x='', y="Order Amount ($)", title="Most order amounts are of reasonable sizes", caption="Each point is one transaction") +
scale_y_continuous(breaks=200000*(0:3), labels=c('0', '200000', '400000', '600000')) +
theme_few() +
theme(axis.title.y = element_text(size=10))
###Output
_____no_output_____
###Markdown
Notice that most of the transactions are along the dark blue line, which corresponds to purchases that are a few hundred or a few thousand dollars - plausible order amounts when buying at most 8 pairs of shoes. We saw that our original choice of evaluation metric, AOV, is largely affected by the numerous extreme values in this dataset, both from the 2000-item purchases and from the fishy order amounts that were multiples of 25725. To protect our evaluation metric from the effects of these outliers, it would therefore be wise to instead use a robust evaluation metric, the median, which will be found among the points in the dark blue line.
###Code
median(shopify$order_amount)
###Output
_____no_output_____
###Markdown
Shopify Challenge 2021 Data Science Internship Jose Rincon
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Question 1: On Shopify, we have exactly 100 sneaker shops, and each of these shops sells only one model of shoe. We want to do some analysis of the average order value (AOV). When we look at orders data over a 30 day window, we naively calculate an AOV of 3145.13. Given that we know these shops are selling sneakers, a relatively affordable item, something seems wrong with our analysis. a. Think about what could be going wrong with our calculation. Think about a better way to evaluate this data. Solution a Read data using pandas
###Code
my_data = pd.read_excel("/home/jose/Documents/Professional Development/shopify_challenge/2019 Winter Data Science Intern Challenge Data Set.xlsx", engine = 'openpyxl')
###Output
_____no_output_____
###Markdown
Perform 30 day window average using the original computation
###Code
order_values = my_data['order_amount'].to_numpy()
store_ids = my_data['shop_id'].to_numpy()
average_order_value = np.mean(order_values)
print(average_order_value)
###Output
3145.128
###Markdown
Find any possible outliers in the dataset. We do this by finding the average and standard deviation of the number of items in an order. Orders that fall within three standard deviations could be deemed common for the stores. Orders with a very large number of shoes could be unusual for the stores. Those outliers could be ignored in our calculation.
###Code
total_items = my_data['total_items'].to_numpy()
###Output
_____no_output_____
###Markdown
Compute mean and standard deviation of total_items per order
###Code
mean = np.mean(total_items)
std = np.std(total_items)
median = np.median(total_items)
print(mean, std, median)
###Output
8.7872 116.3086871912842 2.0
###Markdown
Find outliers in store orders
###Code
# Use 3 standard deviations to find outliers
cut_off = 3 * std
# find lower boundary of our good data
lower = mean - cut_off
# find upper boundary of our good data
upper = mean + cut_off
# find the indices of outliers
indices_outliers = (total_items < lower) + (total_items > upper)
# find the number of items in outliers
outliers_total_items = total_items[indices_outliers]
print(outliers_total_items)
# find the store ids with the outliers
outliers_store_ids = store_ids[indices_outliers]
print(outliers_store_ids)
###Output
[2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000 2000
2000 2000 2000]
[42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42 42]
###Markdown
They are orders with 2000 items sold by store 42. Let's now remove the outliers from our data
###Code
# get new_total_items
new_total_items = total_items[~indices_outliers]
# get new_order_values
new_order_values = order_values[~indices_outliers]
# get max, min, mean, std of new_total_items
max_total_items = np.max(new_total_items)
min_total_items = np.min(new_total_items)
average_total_items = np.average(new_total_items)
std_total_items = np.std(new_total_items)
print('max_total_items', max_total_items)
print('min_total_items', min_total_items)
print('average_total_items', average_total_items)
print('std_total_items', std_total_items)
###Output
max_total_items 8
min_total_items 1
average_total_items 1.9939795304033714
std_total_items 0.9830817903534801
###Markdown
There are 17 outliers in our 5000-order data set. I am not sure why store 42 has those 2000-item orders made by its customer 607, but these types of orders are not usual for a retail shoe store. Moreover, there are also outliers in store 78, where shoes are sold for a unit price of 25725. These outliers are definitely affecting the Average Order Value (AOV). b. What metric would you report for this dataset? Solution b Given that this data set has a few very large outliers, the mean is skewed by these few samples. We could use the median as a better measure of central tendency. c. What is the value? Solution: The Median Order Value is
###Code
median_order_value = np.median(order_values)
print("The Median Order Value (MOV) is: ", median_order_value)
###Output
The Median Order Value (MOV) is: 284.0
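###Markdown
 As a complementary check (a sketch only, not part of the original challenge answer), we could also recompute the naive average after dropping both kinds of suspicious orders - the 2000-item bulk purchases and the orders priced at 25725 per item - to see how strongly they pull on the AOV. The masks below reuse the my_data DataFrame loaded earlier; the variable names are made up for this sketch and no new columns are assumed.
###Code
# Sketch: mean/median of order_amount with both suspicious order types removed.
bulk_orders = my_data['total_items'] >= 2000                                  # 2000-item orders from shop 42
pricey_orders = (my_data['order_amount'] / my_data['total_items']) == 25725   # 25725-per-item orders
filtered_orders = my_data[~(bulk_orders | pricey_orders)]
print("Mean without outliers:", filtered_orders['order_amount'].mean())
print("Median without outliers:", filtered_orders['order_amount'].median())
###Output
_____no_output_____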
###Markdown
Question 2: For this question you'll need to use SQL. Follow this link to access the data set required for the challenge. Please use queries to answer the following questions. Paste your queries along with your final numerical answers below. a. How many orders were shipped by Speedy Express in total?
###Code
'''
SELECT COUNT(tempOrders.ShipperID)
FROM Orders AS tempOrders
WHERE (SELECT ShipperID
FROM Shippers AS tempShippers
WHERE tempShippers.ShipperName = 'Speedy Express') = tempOrders.ShipperID'''
###Output
_____no_output_____
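###Markdown
 An equivalent way to express this count - shown here only as an alternative sketch against the same Orders and Shippers tables used above - is an explicit JOIN, which avoids the correlated subquery:
###Code
'''
SELECT COUNT(*) AS SpeedyExpressOrders
FROM Orders
JOIN Shippers ON Shippers.ShipperID = Orders.ShipperID
WHERE Shippers.ShipperName = 'Speedy Express';'''
###Output
_____no_output_____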
###Markdown
Solution a: The number of orders shipped by Speedy Express was 54 b. What is the last name of the employee with the most orders?
###Code
'''
SELECT Employees.LastName
FROM Employees
LEFT JOIN Orders
ON Orders.EmployeeID = Employees.EmployeeID
GROUP BY Orders.EmployeeID
ORDER BY COUNT(Orders.EmployeeID) DESC
LIMIT 1;'''
###Output
_____no_output_____
###Markdown
Solution b: The employee with the most orders is Peacock c. What product was ordered the most by customers in Germany?
###Code
'''SELECT
Products.ProductName, SUM(OrderDetails.Quantity) AS Total_orders, Customers.Country
FROM Products
JOIN OrderDetails ON OrderDetails.ProductID = Products.ProductID
JOIN Orders ON Orders.OrderID = OrderDetails.OrderID
JOIN Customers ON Customers.CustomerID = Orders.CustomerID
WHERE Customers.Country = "Germany"
GROUP BY Products.ProductName
ORDER BY Total_orders DESC
LIMIT 1;'''
###Output
_____no_output_____ |
1_ShallowToDeepNeuralNetwork/6_Deep+Neural+Network+-+Application+v8.ipynb | ###Markdown
Deep Neural Network for Image Classification: ApplicationWhen you finish this, you will have finished the last programming assignment of Week 4, and also the last programming assignment of this course! You will use the functions you'd implemented in the previous assignment to build a deep network, and apply it to cat vs non-cat classification. Hopefully, you will see an improvement in accuracy relative to your previous logistic regression implementation. **After this assignment you will be able to:**- Build and apply a deep neural network to supervised learning. Let's get started! 1 - Packages Let's first import all the packages that you will need during this assignment. - [numpy](https://www.numpy.org/) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- [h5py](http://www.h5py.org) is a common package to interact with a dataset that is stored on an H5 file.- [PIL](http://www.pythonware.com/products/pil/) and [scipy](https://www.scipy.org/) are used here to test your model with your own picture at the end.- dnn_app_utils provides the functions implemented in the "Building your Deep Neural Network: Step by Step" assignment to this notebook.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import time
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
from dnn_app_utils_v3 import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - DatasetYou will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cats vs non-cats images. Hopefully, your new model will perform even better!**Problem Statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (1) or non-cat (0) - a test set of m_test images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB).Let's get more familiar with the dataset. Load the data by running the cell below.
###Code
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
###Output
_____no_output_____
###Markdown
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
###Code
# Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.")
# Explore your dataset
m_train = train_x_orig.shape[0]
num_px = train_x_orig.shape[1]
m_test = test_x_orig.shape[0]
print ("Number of training examples: " + str(m_train))
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
###Output
Number of training examples: 209
Number of testing examples: 50
Each image is of size: (64, 64, 3)
train_x_orig shape: (209, 64, 64, 3)
train_y shape: (1, 209)
test_x_orig shape: (50, 64, 64, 3)
test_y shape: (1, 50)
###Markdown
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. Figure 1: Image to vector conversion.
###Code
# Reshape the training and test examples
train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions
test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T
# Standardize data to have feature values between 0 and 1.
train_x = train_x_flatten/255.
test_x = test_x_flatten/255.
print ("train_x's shape: " + str(train_x.shape))
print ("test_x's shape: " + str(test_x.shape))
###Output
train_x's shape: (12288, 209)
test_x's shape: (12288, 50)
###Markdown
$12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images.You will build two different models:- A 2-layer neural network- An L-layer deep neural networkYou will then compare the performance of these models, and also try out different values for $L$. Let's look at the two architectures. 3.1 - 2-layer neural network Figure 2: 2-layer neural network. The model can be summarized as: ***INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT***. Detailed Architecture of figure 2:- The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.- You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$.- You then repeat the same process.- You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat. 3.2 - L-layer deep neural networkIt is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation: Figure 3: L-layer neural network. The model can be summarized as: ***[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID***Detailed Architecture of figure 3:- The input is a (64,64,3) image which is flattened to a vector of size (12288,1).- The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit.- Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture.- Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat. 3.3 - General methodologyAs usual you will follow the Deep Learning methodology to build the model: 1. Initialize parameters / Define hyperparameters 2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 4. Use trained parameters to predict labelsLet's now implement those two models! 4 - Two-layer neural network**Question**: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: *LINEAR -> RELU -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:```pythondef initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cachedef compute_cost(AL, Y): ... return costdef linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, dbdef update_parameters(parameters, grads, learning_rate): ... return parameters```
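Before assembling the full model, here is a small stand-alone sketch of what a single LINEAR -> RELU step computes for one flattened image. It is purely illustrative: the `_demo` names below are made up for this sketch, and it is not a replacement for the `dnn_app_utils` helpers used in the graded cells.
###Code
# Illustrative sketch of one LINEAR -> RELU step (not part of the graded code).
import numpy as np
rng = np.random.RandomState(1)
x_demo = rng.rand(12288, 1)                   # one flattened, standardized image vector
W1_demo = rng.randn(7, 12288) * 0.01          # weight matrix of shape (n_h, n_x)
b1_demo = np.zeros((7, 1))                    # bias vector of shape (n_h, 1)
Z1_demo = np.dot(W1_demo, x_demo) + b1_demo   # linear unit: Z1 = W1 x + b1
A1_demo = np.maximum(0, Z1_demo)              # relu activation
print(A1_demo.shape)                          # -> (7, 1)
###Output
_____no_output_____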
###Code
### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
# GRADED FUNCTION: two_layer_model
def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
layers_dims -- dimensions of the layers (n_x, n_h, n_y)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- If set to True, this will print the cost every 100 iterations
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
# Initialize parameters dictionary, by calling one of the functions you'd previously implemented
### START CODE HERE ### (≈ 1 line of code)
parameters = initialize_parameters(n_x=n_x, n_h=n_h, n_y=n_y)
### END CODE HERE ###
# Get W1, b1, W2 and b2 from the dictionary parameters.
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
### START CODE HERE ### (≈ 2 lines of code)
A1, cache1 = linear_activation_forward(A_prev=X, activation="relu", b=b1, W=W1)
A2, cache2 = linear_activation_forward(A_prev=A1, activation="sigmoid", b=b2, W=W2)
### END CODE HERE ###
# Compute cost
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL=A2, Y=Y)
### END CODE HERE ###
# Initializing backward propagation
dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
# Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1".
### START CODE HERE ### (≈ 2 lines of code)
dA1, dW2, db2 = linear_activation_backward(dA=dA2, cache=cache2, activation="sigmoid")
dA0, dW1, db1 = linear_activation_backward(dA=dA1, cache=cache1, activation="relu")
### END CODE HERE ###
# Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
grads['dW1'] = dW1
grads['db1'] = db1
grads['dW2'] = dW2
grads['db2'] = db2
# Update parameters.
### START CODE HERE ### (approx. 1 line of code)
parameters = update_parameters(parameters=parameters, grads=grads, learning_rate=learning_rate)
### END CODE HERE ###
# Retrieve W1, b1, W2, b2 from parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
###Code
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
###Output
Cost after iteration 0: 0.6930497356599888
Cost after iteration 100: 0.6464320953428849
Cost after iteration 200: 0.6325140647912677
Cost after iteration 300: 0.6015024920354665
Cost after iteration 400: 0.5601966311605747
Cost after iteration 500: 0.5158304772764729
Cost after iteration 600: 0.47549013139433255
Cost after iteration 700: 0.43391631512257495
Cost after iteration 800: 0.400797753620389
Cost after iteration 900: 0.3580705011323798
Cost after iteration 1000: 0.3394281538366411
Cost after iteration 1100: 0.3052753636196264
Cost after iteration 1200: 0.2749137728213018
Cost after iteration 1300: 0.24681768210614854
Cost after iteration 1400: 0.19850735037466094
Cost after iteration 1500: 0.17448318112556666
Cost after iteration 1600: 0.17080762978096128
Cost after iteration 1700: 0.11306524562164724
Cost after iteration 1800: 0.09629426845937152
Cost after iteration 1900: 0.08342617959726856
Cost after iteration 2000: 0.07439078704319078
Cost after iteration 2100: 0.06630748132267927
Cost after iteration 2200: 0.05919329501038164
Cost after iteration 2300: 0.05336140348560553
Cost after iteration 2400: 0.048554785628770115
###Markdown
**Expected Output**: **Cost after iteration 0** 0.6930497356599888 **Cost after iteration 100** 0.6464320953428849 **...** ... **Cost after iteration 2400** 0.048554785628770226 Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this.Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
###Code
predictions_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 1.0
###Markdown
**Expected Output**: **Accuracy** 1.0
###Code
predictions_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.72
###Markdown
**Expected Output**: **Accuracy** 0.72 **Note**: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model. 5 - L-layer Neural Network**Question**: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: *[LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID*. The functions you may need and their inputs are:```pythondef initialize_parameters_deep(layers_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, cachesdef compute_cost(AL, Y): ... return costdef L_model_backward(AL, Y, caches): ... return gradsdef update_parameters(parameters, grads, learning_rate): ... return parameters```
###Code
### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
# GRADED FUNCTION: L_layer_model
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009
"""
Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.
Arguments:
X -- data, numpy array of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
layers_dims -- list containing the input size and each layer size, of length (number of layers + 1).
learning_rate -- learning rate of the gradient descent update rule
num_iterations -- number of iterations of the optimization loop
print_cost -- if True, it prints the cost every 100 steps
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(1)
costs = [] # keep track of cost
# Parameters initialization. (≈ 1 line of code)
### START CODE HERE ###
parameters = initialize_parameters_deep(layer_dims=layers_dims)
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
### START CODE HERE ### (≈ 1 line of code)
AL, caches = L_model_forward(X=X, parameters=parameters)
### END CODE HERE ###
# Compute cost.
### START CODE HERE ### (≈ 1 line of code)
cost = compute_cost(AL=AL, Y=Y)
### END CODE HERE ###
# Backward propagation.
### START CODE HERE ### (≈ 1 line of code)
grads = L_model_backward(AL=AL, Y=Y, caches=caches)
### END CODE HERE ###
# Update parameters.
### START CODE HERE ### (≈ 1 line of code)
parameters = update_parameters(grads=grads, parameters=parameters, learning_rate=learning_rate)
### END CODE HERE ###
# Print the cost every 100 training example
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
if print_cost and i % 100 == 0:
costs.append(cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
You will now train the model as a 4-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below; if not, click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
###Code
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
###Output
Cost after iteration 0: 0.771749
Cost after iteration 100: 0.672053
Cost after iteration 200: 0.648263
Cost after iteration 300: 0.611507
Cost after iteration 400: 0.567047
Cost after iteration 500: 0.540138
Cost after iteration 600: 0.527930
Cost after iteration 700: 0.465477
Cost after iteration 800: 0.369126
Cost after iteration 900: 0.391747
Cost after iteration 1000: 0.315187
Cost after iteration 1100: 0.272700
Cost after iteration 1200: 0.237419
Cost after iteration 1300: 0.199601
Cost after iteration 1400: 0.189263
Cost after iteration 1500: 0.161189
Cost after iteration 1600: 0.148214
Cost after iteration 1700: 0.137775
Cost after iteration 1800: 0.129740
Cost after iteration 1900: 0.121225
Cost after iteration 2000: 0.113821
Cost after iteration 2100: 0.107839
Cost after iteration 2200: 0.102855
Cost after iteration 2300: 0.100897
Cost after iteration 2400: 0.092878
###Markdown
**Expected Output**: **Cost after iteration 0** 0.771749 **Cost after iteration 100** 0.672053 **...** ... **Cost after iteration 2400** 0.092878
###Code
pred_train = predict(train_x, train_y, parameters)
###Output
Accuracy: 0.985645933014
###Markdown
**Train Accuracy** 0.985645933014
###Code
pred_test = predict(test_x, test_y, parameters)
###Output
Accuracy: 0.8
###Markdown
**Expected Output**: **Test Accuracy** 0.8 Congrats! It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is good performance for this task. Nice job! Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). 6) Results AnalysisFirst, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
###Code
print_mislabeled_images(classes, test_x, test_y, pred_test)
###Output
_____no_output_____
###Markdown
**A few types of images the model tends to do poorly on include:** - Cat body in an unusual position- Cat appears against a background of a similar color- Unusual cat color and species- Camera Angle- Brightness of the picture- Scale variation (cat is very large or small in image) 7) Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
###Code
## START CODE HERE ##
my_image = "my_image.jpg" # change this to the name of your image file
my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat)
## END CODE HERE ##
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1))
my_image = my_image/255.
my_predicted_image = predict(my_image, my_label_y, parameters)
plt.imshow(image)
print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
###Output
_____no_output_____ |
master/convteo.ipynb | ###Markdown
Demonstration of Convolution TheoremIllustrate the discrete convolution theorem. F denotes the Fourier transform operator, and F{f} and F{g} are the Fourier transforms of "f" and "g", so we have:$$ F\left \{ f * g \right \} = F\left \{ f \right \} \cdot F\left \{ g \right \} $$$$ F\left \{ f \cdot g \right \} = F\left \{ f \right \} * F\left \{ g \right \} $$ Importing
###Code
%matplotlib inline
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ea979path = os.path.abspath('../../')
if ea979path not in sys.path:
sys.path.append(ea979path)
import ea979.src as ia
from numpy.fft import fft2
from numpy.fft import ifft2
###Output
_____no_output_____
###Markdown
Numeric sample
###Code
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
###Output
[[ 6.4 6.4 -3.2 -3.2 -3.2 -3.2]
[ 8. 8. -4. -4. -4. -4. ]
[ 9.6 9.6 -4.8 -4.8 -4.8 -4.8]
[ 8. 8. -4. -4. -4. -4. ]]
###Markdown
Note that f and h are treated as periodic images whose period is (H,W), the shape of f. In the following code, F and H need to have the same shape
###Code
#Deixar o h (3,3) com o mesmo shape de f (4,6)
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
G = F * H
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
###Output
Result gg:
[[ 6.-0.j 6.-0.j -3.-0.j -3.-0.j -3.+0.j -3.-0.j]
[ 8.-0.j 8.-0.j -4.-0.j -4.-0.j -4.+0.j -4.-0.j]
[ 10.-0.j 10.-0.j -5.-0.j -5.-0.j -5.+0.j -5.-0.j]
[ 8.-0.j 8.-0.j -4.-0.j -4.-0.j -4.+0.j -4.-0.j]]
###Markdown
gg and g need to be equal:
###Code
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
###Output
The discrete convolution theorem worked? True
###Markdown
Using an image to illustrate the discrete convolution theoremSee the original image (keyb.tif) and h
###Code
f = mpimg.imread('../data/keyb.tif')
plt.imshow(f,cmap='gray');
plt.title('Original')
plt.colorbar()
plt.show()
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
###Output
[[-1 0 1]
[-2 0 2]
[-1 0 1]]
###Markdown
Convolution in frequency domain:
###Code
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
x,y = f.shape
plt.figure(1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of f')
plt.colorbar()
plt.figure(2)
plt.imshow(np.log(np.abs(ia.ptrans(H,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of h')
plt.colorbar()
G = F * H
plt.figure(3)
plt.imshow(np.log(np.abs(ia.ptrans(G,(x//2,y//2))+1)),cmap='gray')
plt.title('F * H')
plt.colorbar()
gg = ifft2(G)
plt.figure(4)
plt.imshow(gg.real.astype(np.float),cmap='gray');
plt.title('Convolution in frequency domain')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Convolution in space domain
###Code
g = ia.pconv(f,h)
plt.imshow(g.real.astype(np.float),cmap='gray');
plt.title('Convolution in space domain')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
The convolutions in the frequency domain and the space domain need to be equal
###Code
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
###Output
The discrete convolution theorem worked? True
###Markdown
Demonstration of Convolution TheoremIllustrate the discrete convolution theorem. F denotes the Fourier transform operator, and F{f} and F{g} are the Fourier transforms of "f" and "g", so we have:$$ F\left \{ f * g \right \} = F\left \{ f \right \} \cdot F\left \{ g \right \} $$$$ F\left \{ f \cdot g \right \} = F\left \{ f \right \} * F\left \{ g \right \} $$ Importing
###Code
%matplotlib inline
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import sys,os
ia898path = os.path.abspath('/etc/jupyterhub/ia898_1s2017/')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
from numpy.fft import fft2
from numpy.fft import ifft2
###Output
_____no_output_____
###Markdown
Numeric sample
###Code
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
###Output
[[ 6.4 6.4 -3.2 -3.2 -3.2 -3.2]
[ 8. 8. -4. -4. -4. -4. ]
[ 9.6 9.6 -4.8 -4.8 -4.8 -4.8]
[ 8. 8. -4. -4. -4. -4. ]]
###Markdown
Note that f and h are treated as periodic images whose period is (H,W), the shape of f. In the following code, F and H need to have the same shape
###Code
#Deixar o h (3,3) com o mesmo shape de f (4,6)
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
G = F * H
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
###Output
Result gg:
[[ 6.-0.j 6.-0.j -3.-0.j -3.-0.j -3.+0.j -3.-0.j]
[ 8.-0.j 8.-0.j -4.-0.j -4.-0.j -4.+0.j -4.-0.j]
[ 10.-0.j 10.-0.j -5.-0.j -5.-0.j -5.+0.j -5.-0.j]
[ 8.-0.j 8.-0.j -4.-0.j -4.-0.j -4.+0.j -4.-0.j]]
###Markdown
gg and g need to be equal:
###Code
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
###Output
The discrete convolution theorem worked? True
###Markdown
Using an image to illustrate the discrete convolution theoremSee the original image (keyb.tif) and h
###Code
f = mpimg.imread('/home/lotufo/ia898/data/keyb.tif')
plt.imshow(f,cmap='gray');
plt.title('Original')
plt.colorbar()
plt.show()
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
###Output
[[-1 0 1]
[-2 0 2]
[-1 0 1]]
###Markdown
Convolution in frequency domain:
###Code
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
F = fft2(f)
H = fft2(aux)
x,y = f.shape
plt.figure(1)
plt.imshow(np.log(np.abs(ia.ptrans(F,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of f')
plt.colorbar()
plt.figure(2)
plt.imshow(np.log(np.abs(ia.ptrans(H,(x//2,y//2))+1)),cmap='gray')
plt.title('DFT of h')
plt.colorbar()
G = F * H
plt.figure(3)
plt.imshow(np.log(np.abs(ia.ptrans(G,(x//2,y//2))+1)),cmap='gray')
plt.title('F * H')
plt.colorbar()
gg = ifft2(G)
plt.figure(4)
plt.imshow(gg.real.astype(np.float),cmap='gray');
plt.title('Convolution in frequency domain')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
Convolution in space domain
###Code
g = ia.pconv(f,h)
plt.imshow(g.real.astype(np.float),cmap='gray');
plt.title('Convolution in space domain')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
The convolutions in the frequency domain and the space domain need to be equal
###Code
print('The discrete convolution theorem worked?', np.allclose(gg.real,g))
###Output
The discrete convolution theorem worked? True
|
code/basics_and_cnn/9 mnist mlp with best model save.ipynb | ###Markdown
Multi Layer Perceptron * split train validation and test sets* design model* save best model* test best model libraries
###Code
import torch
import numpy as np
from torchvision import datasets # to load mnist dataset
import torchvision.transforms as transforms # dataset transformations such as totensor
from torch.utils.data.sampler import SubsetRandomSampler # random sampler
###Output
_____no_output_____
###Markdown
load, transform and split data sets
###Code
num_workers = 0
batch_size = 64
validation_size = 0.3
# data transformations. In this instance, test and train will have the same transformation which is not the case most often
transform = transforms.ToTensor()
# train and test sets
train_set = datasets.MNIST(root='../data',train=True,download=True, transform=transform)
test_set = datasets.MNIST(root='../data',train=False,download=False, transform=transform)
num_train = int(np.floor(len(train_set)*(1-validation_size)))
num_valid = int(np.floor(len(train_set)*validation_size))
print(num_valid,num_train)
ids = np.arange(len(train_set))
np.random.shuffle(ids)
# define samplers:
train_ids, validation_ids = ids[:num_train], ids[num_train:]
print(len(train_ids),len(validation_ids))
train_sampler = SubsetRandomSampler(train_ids)
valid_sampler = SubsetRandomSampler(validation_ids)
# data loader
train_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
sampler=train_sampler, num_workers=num_workers)
valid_loader = torch.utils.data.DataLoader(train_set, batch_size=batch_size,
sampler=valid_sampler, num_workers=num_workers)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=batch_size,
num_workers=num_workers)
###Output
18000 42000
42000 18000
###Markdown
plot samples
###Code
import matplotlib.pyplot as plt
%matplotlib inline
dataiter = iter(train_loader)
images, labels = dataiter.next() # get the batch
images = images.numpy() # convert to numpy
fig = plt.figure(figsize = (25,4))
for i in np.arange(20):
ax = fig.add_subplot(2, 20/2, i+1, xticks=[], yticks=[])
ax.imshow(np.squeeze(images[i]),cmap='gray')
ax.set_title(labels[i].item())
###Output
_____no_output_____
###Markdown
Network
###Code
import torch.nn as nn
import torch.nn.functional as F
class Network(nn.Module):
def __init__(self,batch_size,flat_image_size):
super(Network,self).__init__()
self.batch_size = batch_size
self.flat_image_size = flat_image_size
#input layer
self.fc1 = nn.Linear(28*28,128)
# hidden layers
self.fc2 = nn.Linear(128,64)
self.fc3 = nn.Linear(64,32)
self.classifier = nn.Linear(32,10)
# dropout
self.dropout = nn.Dropout(p=0.2)
def forward(self,x):
#reshape the image
x = x.view(-1,self.flat_image_size)
# network
x = F.relu(self.fc1(x))
#second layer
x = F.relu(self.fc2(x))
x = self.dropout(x)
#third layer
x = F.relu(self.fc3(x))
x = self.dropout(x)
# output layer
x = F.log_softmax(self.classifier(x), dim=1)
return x
model = Network(64, 28*28)
print(model)
###Output
Network(
(fc1): Linear(in_features=784, out_features=128, bias=True)
(fc2): Linear(in_features=128, out_features=64, bias=True)
(fc3): Linear(in_features=64, out_features=32, bias=True)
(classifier): Linear(in_features=32, out_features=10, bias=True)
(dropout): Dropout(p=0.2, inplace=False)
)
###Markdown
loss and optimiser
###Code
from torch import optim
criterion = nn.NLLLoss() # Loss function: negative log likelihood loss
optimizer = optim.Adam(model.parameters(), lr=0.01) # Adam optimizer with learning rate 0.01
def accuracy(y_hat_tensor,label_tensor):
'''
args:
y_hat_tensor tensor: direct output of the model.
label_tensor tensor: actual labels of the given items
returns:
accuracy float
accurate float: number of accurately labeled items
total_samples float : number of samples investigated
'''
y_hat_tensor = torch.exp(y_hat_tensor)
values, pred_labels = y_hat_tensor.max(1) # works like numpy argmax plus returns the values of the cells.
accurate = sum(1 for a, b in zip(pred_labels.numpy(), label_tensor.numpy()) if a == b)
total_samples = len(label_tensor)
accuracy = accurate/total_samples
return accuracy,accurate,total_samples
epochs = 10
epoch = 0
valid_loss_min = np.Inf
train_losses = []
valid_losses = []
for e in range(epochs):
running_loss = 0
total_accurate = 0
total_samples = 0
for images, labels in train_loader:
# Training pass
#print(images.shape)
output = model(images) # directly passes the images into forward method
loss = criterion(output, labels)
optimizer.zero_grad() # clear gradients
loss.backward() # compute gradients
optimizer.step() # update weights
batch_train_accuracy,accurate,total_sample = accuracy(output,labels)
running_loss += loss.item()
total_accurate += accurate
total_samples += total_sample
#print(total_accurate)
else:
with torch.no_grad():
model.eval()
valid_loss = 0
total_samples_test = 0
total_accurate_test = 0
for images, labels in valid_loader:
output = model(images)
valid_loss += criterion(output, labels)
batch_test_accuracy,accurate_test,total_sample_test = accuracy(output,labels)
total_accurate_test += accurate_test
total_samples_test += total_sample_test
model.train()
train_losses.append(running_loss/len(train_loader))
valid_losses.append(valid_loss/len(valid_loader))
print('''---------- epoch : {} -----------'''.format(epoch+1))
print(''' Training Accuracy = {} - Training Loss = {}'''.format(total_accurate/total_samples,running_loss/len(train_loader)))
print(''' Test Accuracy = {} - Test Loss = {}'''.format(total_accurate_test/total_samples_test,valid_loss/len(valid_loader)))
epoch += 1
print(valid_loss/len(valid_loader))
print(valid_loss_min)
if valid_loss/len(valid_loader)<valid_loss_min:
valid_loss_min = valid_loss/len(valid_loader)
print('validation loss decreased! Saving model..')
torch.save(model.state_dict(), '../models/model_9.pt')
###Output
---------- epoch : 1 -----------
Training Accuracy = 0.8858571428571429 - Training Loss = 0.4003699090915819
Test Accuracy = 0.9433888888888889 - Test Loss = 0.2006707787513733
tensor(0.2007)
inf
validation loss decreased! Saving model..
---------- epoch : 2 -----------
Training Accuracy = 0.9413571428571429 - Training Loss = 0.23330283287630518
Test Accuracy = 0.9547222222222222 - Test Loss = 0.1671222746372223
tensor(0.1671)
tensor(0.2007)
validation loss decreased! Saving model..
---------- epoch : 3 -----------
Training Accuracy = 0.9457857142857143 - Training Loss = 0.2083977587064629
Test Accuracy = 0.9576666666666667 - Test Loss = 0.1852959543466568
tensor(0.1853)
tensor(0.1671)
---------- epoch : 4 -----------
Training Accuracy = 0.9511904761904761 - Training Loss = 0.19451083942023042
Test Accuracy = 0.9594444444444444 - Test Loss = 0.17316097021102905
tensor(0.1732)
tensor(0.1671)
---------- epoch : 5 -----------
Training Accuracy = 0.9571190476190476 - Training Loss = 0.17043421232237946
Test Accuracy = 0.9540555555555555 - Test Loss = 0.20067720115184784
tensor(0.2007)
tensor(0.1671)
---------- epoch : 6 -----------
Training Accuracy = 0.9557619047619048 - Training Loss = 0.17273564156020113
Test Accuracy = 0.9566666666666667 - Test Loss = 0.19843901693820953
tensor(0.1984)
tensor(0.1671)
---------- epoch : 7 -----------
Training Accuracy = 0.9575476190476191 - Training Loss = 0.1677878784303535
Test Accuracy = 0.9573333333333334 - Test Loss = 0.2368217408657074
tensor(0.2368)
tensor(0.1671)
---------- epoch : 8 -----------
Training Accuracy = 0.9625476190476191 - Training Loss = 0.15237614573521718
Test Accuracy = 0.9652777777777778 - Test Loss = 0.1935972422361374
tensor(0.1936)
tensor(0.1671)
---------- epoch : 9 -----------
Training Accuracy = 0.9627857142857142 - Training Loss = 0.15491290535822372
Test Accuracy = 0.9646666666666667 - Test Loss = 0.20754936337471008
tensor(0.2075)
tensor(0.1671)
---------- epoch : 10 -----------
Training Accuracy = 0.963047619047619 - Training Loss = 0.14900076282501243
Test Accuracy = 0.9531111111111111 - Test Loss = 0.23505236208438873
tensor(0.2351)
tensor(0.1671)
###Markdown
load the best model
###Code
model.load_state_dict(torch.load('../models/model_9.pt'))
###Output
_____no_output_____
###Markdown
Test
###Code
with torch.no_grad():
model.eval()
test_loss = 0
total_samples_test = 0
total_accurate_test = 0
for images, labels in test_loader:
output = model(images)
test_loss += criterion(output, labels)
batch_test_accuracy,accurate_test,total_sample_test = accuracy(output,labels)
total_accurate_test += accurate_test
total_samples_test += total_sample_test
loss = test_loss/len(test_loader)
print(loss)
print(total_accurate_test/total_samples_test)
###Output
0.9565
|
ch02_basics/Concept09_queue.ipynb | ###Markdown
Ch `01`: Concept `09` Using Queues If you have a lot of training data, you probably don't want to load it all into memory at once. The QueueRunner in TensorFlow is a tool to efficiently employ a queue data-structure in a multi-threaded way.
###Code
import tensorflow as tf
import numpy as np
###Output
_____no_output_____
###Markdown
We will be running multiple threads, so let's figure out the number of CPUs:
###Code
import multiprocessing
NUM_THREADS = multiprocessing.cpu_count()
###Output
_____no_output_____
###Markdown
Generate some fake data to work with:
###Code
xs = np.random.randn(100, 3)
ys = np.random.randint(0, 2, size=100)
###Output
_____no_output_____
###Markdown
Here's a couple concrete examples of our data:
###Code
xs_and_ys = zip(xs, ys)
for _ in range(5):
x, y = next(xs_and_ys)
print('Input {} ---> Output {}'.format(x, y))
###Output
Input [ 1.46034759 0.71462742 0.73288402] ---> Output 0
Input [ 1.1537654 -0.09128405 0.08036941] ---> Output 1
Input [-0.61164559 -0.19188485 0.06064167] ---> Output 0
Input [ 0.1007337 0.34815357 0.24346031] ---> Output 0
Input [-1.25581117 1.44738085 1.15035257] ---> Output 0
###Markdown
Define a queue:
###Code
queue = tf.FIFOQueue(capacity=1000, dtypes=[tf.float32, tf.int32])
###Output
_____no_output_____
###Markdown
Set up the enqueue and dequeue ops:
###Code
enqueue_op = queue.enqueue_many([xs, ys])
x_op, y_op = queue.dequeue()
###Output
_____no_output_____
###Markdown
Define a QueueRunner:
###Code
qr = tf.train.QueueRunner(queue, [enqueue_op] * 4)
###Output
_____no_output_____
###Markdown
Now that all variables and ops have been defined, let's get started with a session:
###Code
sess = tf.InteractiveSession()
###Output
_____no_output_____
###Markdown
Create threads for the QueueRunner:
###Code
coord = tf.train.Coordinator()
enqueue_threads = qr.create_threads(sess, coord=coord, start=True)
###Output
_____no_output_____
###Markdown
Test out dequeueing:
###Code
for _ in range(100):
if coord.should_stop():
break
x, y = sess.run([x_op, y_op])
print(x, y)
coord.request_stop()
coord.join(enqueue_threads)
###Output
[ 1.46034753 0.71462744 0.73288405] 0
[ 1.15376544 -0.09128405 0.08036941] 1
[-0.61164558 -0.19188486 0.06064167] 0
[ 0.1007337 0.34815356 0.24346031] 0
[-1.25581121 1.4473809 1.1503526 ] 0
[ 0.60369009 -0.87942719 -1.37121975] 1
[ 1.30641925 1.55316997 1.01789773] 0
[ 0.0575242 0.59463078 0.47600508] 1
[-1.22782397 -0.86792755 1.37459588] 1
[-0.27896652 0.51645088 1.36873603] 0
[-0.34542757 0.79360306 0.32000065] 0
[-0.46792462 -0.31817994 0.91739392] 0
[ 0.24787657 0.83848852 1.16125166] 0
[-0.46220389 -0.09412029 -0.9981451 ] 1
[ 0.06739734 -1.08405316 -0.3582162 ] 1
[-1.2644819 -0.27479929 1.15882337] 1
[-0.68015367 -0.10199564 1.4274267 ] 0
[-0.48884565 -0.39484504 0.1496018 ] 1
[ 1.48414564 -0.43943462 -0.12646018] 0
[ 0.49450573 0.42091215 -0.17693481] 0
[ 0.02265234 0.99832052 0.26808155] 1
[-0.94086462 1.67000341 0.92434174] 1
[-0.50961769 -0.39044595 -0.5737586 ] 0
[-0.95702702 0.61196166 -0.86487901] 1
[-0.6125344 -0.30916786 -1.06602347] 1
[-1.91383719 0.26860073 0.50380921] 1
[-0.14638679 0.11614402 1.36613548] 1
[-0.56817967 1.4221288 0.99365205] 0
[-0.04597072 0.43875724 -0.4809106 ] 0
[-0.2000681 -0.2384561 0.06599616] 0
[ 0.5862993 0.85386461 0.82285357] 1
[ 1.64371336 -0.46838599 0.22755136] 0
[ 0.21683638 -0.96399426 1.78278649] 1
[ 0.03778305 2.49208736 0.07467758] 0
[-1.48958826 -0.11699235 0.98281074] 1
[-0.27623582 -0.41658697 -0.89554274] 0
[-1.64742625 1.83507264 -0.76936585] 0
[-1.5386405 0.14272654 0.17047048] 1
[ 0.63654041 1.75451732 -1.14198494] 0
[-0.57061732 0.11121389 1.39394116] 1
[ 1.94736981 -0.36588097 0.54801333] 1
[-0.56976408 -1.36990237 -0.9922803 ] 1
[-2.47653961 1.19603479 -0.3038739 ] 0
[-0.76740891 -0.49611184 0.47167206] 0
[ 1.62004089 0.13268068 0.28845155] 0
[-0.91749012 -0.30151108 -0.08271972] 0
[-0.21053326 -0.16114895 -0.52424961] 1
[ 0.19968066 0.2387522 2.0314014 ] 0
[-0.29072183 0.53720349 -0.38972732] 0
[-0.85891634 -0.26684314 -1.91741192] 1
[-2.07077003 1.97488022 -0.92741841] 0
[ 2.37270904 2.19385314 -0.29643178] 0
[-0.18054648 -0.1651988 1.70858753] 1
[-0.27851281 -0.13095042 0.30613536] 1
[-0.13653868 -0.14431253 1.3018136 ] 1
[-1.79938364 0.26698261 -0.3283855 ] 0
[-0.43491617 -0.8737886 -0.48871836] 1
[-0.27275884 0.08004636 -0.34334385] 0
[-0.06538768 -0.47280514 -1.82918119] 0
[ 1.72329473 0.6359638 1.53474641] 0
[ 0.88200653 0.87051851 0.17676826] 1
[-2.22127795 -0.39812142 0.69118947] 0
[-0.90146214 0.23153968 -1.07890677] 0
[-0.66513097 -0.74897975 -1.9886812 ] 0
[ 0.95217085 -0.1361241 -0.81558466] 1
[ 0.97319698 0.10349847 1.78010297] 0
[ 0.54321396 1.10134006 -1.03641176] 1
[ 0.46445891 0.56387979 0.10383373] 0
[ 0.22231635 -1.20880091 0.20125042] 1
[ 0.56338882 -0.76195502 -0.33035895] 0
[ 0.13885871 0.62347603 0.32560909] 0
[-0.63413048 0.19185983 1.65251637] 1
[ 0.81965917 -0.14427175 -0.9943186 ] 0
[ 1.98786604 -1.38118052 -0.34296793] 0
[-0.49028778 -0.30242845 0.81718981] 0
[ 0.48434621 -1.3200016 -0.32307461] 0
[-0.91041267 -0.34315997 0.71205115] 0
[ 0.61457998 -0.85814965 0.6939835 ] 0
[-0.40195578 -1.11846507 -0.19713871] 1
[-0.47889531 -0.75685191 1.68955612] 1
[ 1.51117146 -2.23529124 1.13895822] 0
[-0.00831293 -0.50950557 0.08648733] 1
[-0.47011089 1.04781067 -0.05893843] 1
[-0.34855339 -0.5695411 -0.12196264] 1
[-0.47251806 -0.49479187 0.27609721] 0
[-2.04546118 -0.16185458 1.42348552] 0
[-0.67136103 -0.16650072 0.3609505 ] 0
[ 1.22566068 1.18665588 -1.87292075] 0
[-0.80474126 -0.1114784 0.00531922] 1
[ 0.62691861 -3.26328206 -0.39003551] 0
[-0.77470082 -1.23692167 -1.55790484] 0
[-0.49005547 -0.19645052 -0.21566501] 1
[-0.44095206 -0.13273652 -0.59810853] 0
[-0.9750855 -0.46043435 0.06064714] 1
[-0.181191 -0.12452056 0.23064452] 1
[-0.34818363 -1.13179028 1.20628965] 0
[-1.58196092 -1.3506341 -2.05767131] 1
[-1.66225421 -0.43541616 1.55258 ] 0
[-0.12949325 -0.15456693 0.04389611] 0
[ 0.24592777 0.11407969 -0.31221709] 1
|
hw4/t08902205.ipynb | ###Markdown
Graph Plotting
###Code
df_plot["Date"] = df_plot['Date'].astype('datetime64[ns]')
df_plot["Date"] = df_plot["Date"].map(mdates.date2num)
df_plot.head()
###Output
_____no_output_____
###Markdown
Candlestick chart with 2 moving average lines
###Code
f, ax = plt.subplots()
# f.hold(True)
f.set_size_inches((12.8,9.6))
candlestick_ohlc(ax, df_plot.values, width=5, colorup='g', colordown='r')
ma_10_plot = MA(df_plot_ma_10["Close"], timeperiod=10, matype=0)
ma_30_plot = MA(df_plot_ma_30["Close"], timeperiod=30, matype=0)
ma_10_plot = ma_10_plot.dropna(axis=0)
ma_30_plot = ma_30_plot.dropna(axis=0)
ax.xaxis_date()
ax.plot(df_plot["Date"],ma_10_plot, label="MA-10")
ax.plot(df_plot["Date"],ma_30_plot, label="MA-30")
ax.legend()
plt.show()
###Output
C:\Users\TaiT_\Anaconda3\lib\site-packages\pandas\plotting\_matplotlib\converter.py:103: FutureWarning: Using an implicitly registered datetime converter for a matplotlib plotting method. The converter was registered by pandas on import. Future versions of pandas will require you to explicitly register matplotlib converters.
To register the converters:
>>> from pandas.plotting import register_matplotlib_converters
>>> register_matplotlib_converters()
warnings.warn(msg, FutureWarning)
###Markdown
K/D Line
###Code
k_plot, d_plot = STOCH(df_plot_kd["High"], df_plot_kd["Low"], df_plot_kd["Close"],fastk_period=5, slowk_period=3, slowk_matype=0, slowd_period=3, slowd_matype=0)
k_plot = k_plot.dropna(axis=0)
d_plot = d_plot.dropna(axis=0)
plt.figure(figsize=[19.2,3.6])
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y/%m/%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=30))
plt.plot(df_plot["Date"],k_plot, label="K")
plt.plot(df_plot["Date"],d_plot, label="D")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Volume bar chart
###Code
dates = df_plot["Date"]
dates = np.asarray(dates)
volume = df_plot["Volume"]
volume = np.asarray(volume)
pos = df_plot['Open']-df_plot['Close']>0
neg = df_plot['Open']-df_plot['Close']<0
plt.figure(figsize=(24,7.2))
plt.bar(dates[pos],volume[pos],color='green',width=0.7)
plt.bar(dates[neg],volume[neg],color='red',width=0.7)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y/%m/%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=30))
plt.show()
###Output
_____no_output_____
###Markdown
Data preprocess Add technical analysis
###Code
ma_10_train = MA(df_train["Close"], timeperiod=10, matype=0)
ma_30_train = MA(df_train["Close"], timeperiod=30, matype=0)
k_train, d_train = STOCH(df_train["High"], df_train["Low"], df_train["Close"],fastk_period=5, slowk_period=3, slowk_matype=0, slowd_period=3, slowd_matype=0)
df_train["MA10"] = ma_10_train
df_train["MA30"] = ma_30_train
df_train["K"] = k_train
df_train["D"] = d_train
ma_10_validation = MA(df_validation["Close"], timeperiod=10, matype=0)
ma_30_validation = MA(df_validation["Close"], timeperiod=30, matype=0)
k_validation, d_validation = STOCH(df_validation["High"], df_validation["Low"], df_validation["Close"],fastk_period=5, slowk_period=3, slowk_matype=0, slowd_period=3, slowd_matype=0)
df_validation["MA10"] = ma_10_validation
df_validation["MA30"] = ma_30_validation
df_validation["K"] = k_validation
df_validation["D"] = d_validation
df_validation.shape
###Output
_____no_output_____
###Markdown
Drop dates and NA
###Code
df_train = df_train.dropna(axis=0)
df_train = df_train.drop(["Date"], axis=1)
df_validation = df_validation.dropna(axis=0)
df_validation["Date"] = df_validation["Date"].astype('datetime64[ns]')
df_validation["Date"] = df_validation["Date"].map(mdates.date2num)
validation_date = df_validation["Date"].copy()
df_validation = df_validation.drop(["Date"], axis=1)
df_validation.shape
###Output
_____no_output_____
###Markdown
Normalize data
###Code
def normalizeDataframe(data_frame):
normalize_df = data_frame.copy()
for column in normalize_df.columns:
min_value = min(normalize_df[column])
max_value = max(normalize_df[column])
normalize_df[column] = (normalize_df[column] - min_value) / (max_value - min_value)
return normalize_df
df_train = normalizeDataframe(df_train)
df_validation = normalizeDataframe(df_validation)
df_validation.shape
###Output
_____no_output_____
###Markdown
Prepare X_train, X_validation, y_train, y_validation for RNN
###Code
data_train = df_train.values
data_validation = df_validation.values
X_train = []
y_train = []
X_validation = []
y_validation = []
for i in range(30,data_train.shape[0]):
X_train.append(data_train[i-30:i])
y_train.append(data_train[i, 0])
for i in range(30, data_validation.shape[0]):
X_validation.append(data_validation[i-30:i])
y_validation.append(data_validation[i,0])
X_train, y_train = np.array(X_train), np.array(y_train)
X_validation, y_validation = np.array(X_validation), np.array(y_validation)
X_train.shape
# y_train.shape
###Output
_____no_output_____
###Markdown
Building Models
###Code
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, LSTM, Dropout, SimpleRNN, GRU
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping
early_stopping = EarlyStopping(monitor="val_loss", mode="min", patience=8)
def plotModelLoss(history):
plt.figure(figsize=[9.6,7.2])
plt.plot(history["loss"])
plt.plot(history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("Epoch")
plt.legend(["Train", "Validation"], loc="upper left")
plt.show()
def plotPrediction(model,name="Prediction by RNN"):
y_pred = model.predict(X_validation)
plt.figure(figsize=[19.2,14.4])
plt.plot(validation_date[30:], y_validation)
plt.plot(validation_date[30:], y_pred)
plt.gca().xaxis.set_major_formatter(mdates.DateFormatter('%Y/%m/%d'))
plt.gca().xaxis.set_major_locator(mdates.DayLocator(interval=30))
plt.gcf().autofmt_xdate()
plt.title(name)
plt.legend(["real", "predict"], loc="upper left")
plt.show()
###Output
_____no_output_____
###Markdown
Vanilla RNN
###Code
regressor_RNN = Sequential()
regressor_RNN.add(SimpleRNN(units = 32, activation = 'tanh', input_shape = (X_train.shape[1], X_train.shape[2])))
regressor_RNN.add(Dense(units = 1))
regressor_RNN.summary()
checkpoint_RNN = ModelCheckpoint(filepath="best_params_RNN.hdf5", monitor="val_loss",verbose=1,save_best_only=True)
regressor_RNN.compile(optimizer='adam', loss = 'mean_squared_error')
# regressor_RNN.load_weights("best_params_RNN.hdf5")
RNN_history = regressor_RNN.fit(X_train, y_train, epochs=256, batch_size=64, validation_data = (X_validation, y_validation),callbacks=[checkpoint_RNN, early_stopping])
plotModelLoss(RNN_history.history)
plotPrediction(regressor_RNN)
###Output
_____no_output_____
###Markdown
LSTM
###Code
regressor_LSTM = Sequential()
regressor_LSTM.add(LSTM(units = 32, activation = 'tanh', input_shape = (X_train.shape[1], X_train.shape[2])))
regressor_LSTM.add(Dense(units = 1))
regressor_LSTM.summary()
checkpoint_LSTM = ModelCheckpoint(filepath="best_params_LSTM.hdf5", monitor="val_loss",verbose=1,save_best_only=True)
regressor_LSTM.compile(optimizer='adam', loss = 'mean_squared_error')
LSTM_history = regressor_LSTM.fit(X_train, y_train, epochs=256, batch_size=64, validation_data = (X_validation, y_validation),callbacks=[checkpoint_LSTM, early_stopping])
plotModelLoss(LSTM_history.history)
plotPrediction(regressor_LSTM, name="Prediction by LSTM")
###Output
_____no_output_____
###Markdown
GRU
###Code
regressor_GRU = Sequential()
regressor_GRU.add(GRU(units = 32, activation = 'tanh', input_shape = (X_train.shape[1], X_train.shape[2])))
regressor_GRU.add(Dense(units = 1))
regressor_GRU.summary()
checkpoint_GRU = ModelCheckpoint(filepath="best_params_GRU.hdf5", monitor="val_loss",verbose=1,save_best_only=True)
regressor_GRU.compile(optimizer='adam', loss = 'mean_squared_error')
GRU_history = regressor_GRU.fit(X_train, y_train, epochs=256, batch_size=64, validation_data = (X_validation, y_validation),callbacks=[checkpoint_GRU, early_stopping])
plotModelLoss(GRU_history.history)
plotPrediction(regressor_GRU, name="Prediction by GRU")
###Output
_____no_output_____ |
_notebooks/2020-07-18-creating_meshes.ipynb | ###Markdown
Plotting surface in matplotlib > Simple notebook looking at meshes in matplotlib- toc:true- badges: true- comments: true- author: Pushkar G. Ghanekar- categories: [python, data-visualization, machine-learning] This is adapted from the following Tutorial: [Link](https://pundit.pratt.duke.edu/wiki/Python:Plotting_Surfaces)
###Code
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
fig = plt.figure(1, clear=True)
ax = fig.add_subplot(1,1,1, projection='3d')
x = np.array([[1, 3], [2, 4]]) #Array format: [[a,b],[c,d]] -- a b are in row; c d are in row
y = np.array([[5, 6], [7, 8]])
z = np.array([[9, 12], [10, 11]])
ax.plot_surface(x, y, z)
ax.set(xlabel='x', ylabel='y', zlabel='z')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
MeshgridA mesh is needed to create a surface because looking at the x and y vectors by themselves only gives the diagonal of the matrix formed by combining all of the possible x values with all of the possible y values. For given x and y vectors, every entry in the x vector can be paired with every entry in the y vector, so it is important to generate arrays that capture all of these possible pairings. Using `meshgrid`, if the `x` vector has M entries and the `y` vector has N entries, the resulting matrices are NxM: the $n^{th}$ row pairs the $n^{th}$ entry of `y` with all of the entries of `x`. Finally, the output is given as the `x` coordinates of that grid and the `y` coordinates of that grid. Example: * $X$ : $\begin{bmatrix} x_{1} & x_{2} & x_{3} \end{bmatrix}$* $Y$ : $\begin{bmatrix} y_{1} & y_{2} \end{bmatrix}$Then the resulting mesh would be: $$ X-Y-Mesh = \begin{bmatrix} x_{1}y_{1} & x_{2}y_{1} & x_{3}y_{1} \\ x_{1}y_{2} & x_{2}y_{2} & x_{3}y_{2} \end{bmatrix}$$$$ X-path = \begin{bmatrix} x_{1} & x_{2} & x_{3} \\ x_{1} & x_{2} & x_{3} \end{bmatrix}$$$$ Y-path = \begin{bmatrix} y_{1} & y_{1} & y_{1} \\ y_{2} & y_{2} & y_{2} \end{bmatrix}$$
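A tiny numerical sketch of these matrices (an illustration only; the `_demo` arrays below are made up for this example and are not used by the plots that follow, which also pass `sparse=True` to keep broadcastable 1-D grids instead of the full matrices):
###Code
# Sketch of the X-path / Y-path matrices, assuming x = [x1, x2, x3] -> [1, 2, 3]
# and y = [y1, y2] -> [4, 5].
import numpy as np
x_demo = np.array([1, 2, 3])
y_demo = np.array([4, 5])
X_demo, Y_demo = np.meshgrid(x_demo, y_demo)
# X_demo -> [[1, 2, 3],
#            [1, 2, 3]]
# Y_demo -> [[4, 4, 4],
#            [5, 5, 5]]
print(X_demo)
print(Y_demo)
###Output
_____no_output_____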
###Code
#Setting the bounds of the x and y axis
x_axis_range = np.arange(-2,2.1,1)
y_axis_range = np.arange(-4,4.1,1)
#Make the meshgrid for the x and y
(x,y) = np.meshgrid(x_axis_range, y_axis_range, sparse=True)
z = x + y
fig = plt.figure(1, clear=True)
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(x, y, z)
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Plotting this 2D function: $$ z = e^{-\sqrt {x^2 + y^2}}\cos(4x)\cos(4y) $$ using the surface
###Code
import matplotlib.cm as cm
x_axis_bound = np.linspace(-1.8,1.8,100)
y_axis_bound = np.linspace(-1.8,1.8,100)
(x,y) = np.meshgrid(x_axis_bound, y_axis_bound, sparse=True)
def f(x,y):
return np.exp(-np.sqrt( x**2 + y**2 )) * np.cos(4*x) * np.cos(4*y)
Z = f(x,y)
fig = plt.figure(1, clear=True)
ax = fig.add_subplot(1,1,1, projection='3d')
ax.plot_surface(x, y, Z, cmap=cm.hot)
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
###Output
_____no_output_____ |
NHDPlus21_Into_SB_For_BIS/Reg15_NHDPlusV21_IntoSB_BIS.ipynb | ###Markdown
This python code builds ScienceBase items that house and describe specific versions of data files from the NHDPlusV2.1 that are being used in the Biogeographic Information System. Data were extracted from ftp://ftp.horizon-systems.com/NHDplus/NHDPlusV21/ and stored within ScienceBase as attachments. Although reorganized, the files stored in the ScienceBase Items were not altered. In future iterations of this code we would like to avoid using local disk space and operations that may be dependent on a local operating system.
###Code
import pysb
import urllib
import os
import getpass
import time
import subprocess
from zipfile import ZipFile
import zipfile
#Downloads files of interest. The next few steps should eventually be done entirely in memory, but we have not yet found a complete workflow that avoids writing to disk.
import urllib.request as ur
ur.urlretrieve('ftp://ftp.horizon-systems.com/NHDplus/NHDPlusV21/Data/NHDPlusCO/NHDPlus15/NHDPlusV21_CO_15_NHDPLusAttributes_08.7z', 'NHDPlusV21_CO_15_NHDPLusAttributes_08.7z')
ur.urlretrieve('ftp://ftp.horizon-systems.com/NHDplus/NHDPlusV21/Data/NHDPlusCO/NHDPlus15/NHDPlusV21_CO_15_NHDSnapshot_04.7z', 'NHDPlusV21_CO_15_NHDSnapshot_04.7z')
ur.urlretrieve('ftp://ftp.horizon-systems.com/NHDplus/NHDPlusV21/Data/NHDPlusCO/NHDPlus15/NHDPlusV21_CO_15_NHDPlusCatchment_01.7z', 'NHDPlusV21_CO_15_NHDPlusCatchment_01.7z')
#This code isn't currently doing anything in the SB item creation, but eventually something like this could be used to track the "last update" of the NHD file being harvested.
import urllib.request
with urllib.request.urlopen('ftp://ftp.horizon-systems.com/NHDplus/NHDPlusV21/Data/NHDPlusCO/NHDPlus15/') as response:
html = response.read()
print (html)
#Unzips the 7z files. Note: this call to 7z.exe likely only works on Windows with 7-Zip installed at this path.
subprocess.call(r'"C:\Program Files\7-Zip\7z.exe" x ' + 'NHDPlusV21_CO_15_NHDPLusAttributes_08.7z' )
subprocess.call(r'"C:\Program Files\7-Zip\7z.exe" x ' + 'NHDPlusV21_CO_15_NHDSnapshot_04.7z' )
subprocess.call(r'"C:\Program Files\7-Zip\7z.exe" x ' + 'NHDPlusV21_CO_15_NHDPlusCatchment_01.7z' )
#Selects only the files we are using and zips them into 3 directories (using .zip). The three folders include Hydrography, NHDPlusAttributes, and Catchment
dataTypes = ['Hydrography', 'NHDPlusAttributes', 'Catchment']
for fileType in dataTypes:
z = ZipFile((fileType + '.zip'), 'w')
if fileType == 'Hydrography':
ZipFileList = ['NHDWaterbody.dbf','NHDWaterbody.prj','NHDWaterbody.shp','NHDWaterbody.shx','NHDFlowline.dbf','NHDFlowline.prj','NHDFlowline.shp','NHDFlowline.shx' ]
for file in ZipFileList:
procFile = ('NHDPlusCO/NHDPlus15/NHDSnapshot/Hydrography/' + file)
z.write(procFile, file)
elif fileType == 'NHDPlusAttributes':
ZipFileList = ['elevslope.dbf','PlusFlow.dbf','PlusFlowLineVAA.dbf']
for file in ZipFileList:
procFile = ('NHDPlusCO/NHDPlus15/NHDPlusAttributes/' + file)
z.write(procFile, file)
elif fileType == 'Catchment':
target_dir = r'NHDPlusCO\NHDPlus15\NHDPlusCatchment'
CatZip = ZipFile('Catchment.zip', 'w', zipfile.ZIP_DEFLATED)
rootlen = len(target_dir) + 1
for base, dirs, files in os.walk(target_dir):
for file in files:
fn = os.path.join(base, file)
CatZip.write(fn, fn[rootlen:])
#Create ScienceBase Item
loginu=input("Username: ") #asks user for username
sb = pysb.SbSession()
sb.loginc(str(loginu))
time.sleep(2)
ret = sb.upload_files_and_create_item(sb.get_my_items_id(), ['Catchment.zip', 'Hydrography.zip', 'NHDPlusAttributes.zip'])
SbItem = ret['id']
print (SbItem)
#Variables to populate the metadata in the SB Item
#Acquisition Date
import datetime
dNow = datetime.datetime.now()
AcqDate = dNow.strftime("%Y-%m-%d")
#AcqDate = dNow.isoformat()
UpdateItem = {'id': SbItem,
'title': 'NHDPlusV2.1 Processing Region 15; Files Used in the Biogeographic Information System',
'body': 'A subset of files from within processing region 15 of the NHDPlus Version 2.1. Although reorganized, the files within the attachments are unaltered from the NHDPlus Version 2.1 as they were acquired (see acquisition date listed within this metadata). This item links to python code used to generate the item.',
'purpose': 'This item is intended to preserve specific versions of files being used in the Biogeographic Information System.',
'dates': [{'type': 'Acquisition', 'dateString': AcqDate, 'label': 'Acquisition'}],
'webLinks': [{"type":"sourceCode","typeLabel":"Source Code","uri":"https://github.com/dwief-usgs/BCB_Ipython_Notebooks/blob/master/NHDPlus21_Into_SB_For_BIS/Reg15_NHDPlusV21_IntoSB_BIS.ipynb","rel":"related","title":"Python Code Used to Develop and Populate This SB Item","hidden":False},{"type":"webLink","typeLabel":"Web Link","uri":"http://www.horizon-systems.com/NHDPlus/NHDPlusV2_home.php","rel":"related","title":"Additional Information About the NHDPlusV2","hidden":False}],
'contacts': [{"name":"Horizon Systems","type":"Data Owner","contactType":"organization","onlineResource":"http://www.horizon-systems.com","organization":{},"primaryLocation":{"streetAddress":{},"mailAddress":{}}},{"name":"Daniel J Wieferich","oldPartyId":66431,"type":"Contact","contactType":"person","email":"[email protected]","active":True,"jobTitle":"Physical Scientist","firstName":"Daniel","middleName":"J","lastName":"Wieferich","organization":{"displayText":"Biogeographic Characterization"},"primaryLocation":{"name":"CN=Daniel J Wieferich,OU=CSS,OU=Users,OU=OITS,OU=DI,DC=gs,DC=doi,DC=net - Primary Location","building":"DFC Bldg 810","buildingCode":"KBT","officePhone":"3032024594","faxPhone":"3032024710","streetAddress":{"line1":"W 6th Ave Kipling St","city":"Lakewood","state":"CO","zip":"80225"},"mailAddress":{}},"orcId":"0000-0003-1554-7992"}],
'tags': [{"type":"Theme","scheme":"BIS","name":"NHDPlusV2.1"},{"type":"Theme","scheme":"BIS","name":"Reg15"}]
}
updateItem = sb.updateSbItem(UpdateItem)
#Remove unneeded local copies of files
import shutil
import os
os.remove('Catchment.zip')
os.remove('Hydrography.zip')
os.remove('NHDPlusAttributes.zip')
os.remove('NHDPlusV21_CO_15_NHDPLusAttributes_08.7z')
os.remove('NHDPlusV21_CO_15_NHDPlusCatchment_01.7z')
os.remove('NHDPlusV21_CO_15_NHDSnapshot_04.7z')
shutil.rmtree('NHDPlusCO')
###Output
_____no_output_____ |
examples/contactMatrix/ex04-non-normal-transients.ipynb | ###Markdown
Non normal network and transient response Introduction: Where eigen-analysis breaks downConsider an evolution equation of the form $$\dot{\boldsymbol{u}}=\boldsymbol{J}\boldsymbol{u}$$ where $$\boldsymbol{J}=\begin{pmatrix}-1 & 500\\0 & -2\end{pmatrix}$$The eigenvalues are clearly $-1,-2$. However, for some initial conditions this system will still grow massively in amplitude
###Code
import numpy as np
import scipy.linalg as spl
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from pyross.contactMatrix import characterise_transient
A = [[-1,500],[0,-2]]
x0 = [1,1]
tf = 10
def linear_system(t, x, A): return A@x
ivp = solve_ivp(linear_system, (0,tf), x0, args=[A], t_eval=np.arange(0,tf,.1))
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
plt.xlabel("time")
plt.ylabel("$|u|/|u_0|$")
###Output
_____no_output_____
###Markdown
Here we see a massive amplification of the initial conditions, although the eigenvalues would suggest exponential decay. What is happening? The answer is that any non-normal matrix $\boldsymbol{J}$ (such that $\boldsymbol{J}\boldsymbol{J}^T \neq \boldsymbol{J}^T\boldsymbol{J}$) will give a transient response as the system relaxes back down to the (non-orthogonal) eigendirection.Such transients can be classified in terms of the spectral abscissa (eigenvalue(s) with maximal real component) $\alpha (\boldsymbol{J})$ which determines the long term behaviour, the numerical abscissa (eigenvalues of $\frac{1}{2}(\boldsymbol{J}+\boldsymbol{J}^T)$) $\omega (\boldsymbol{J})$, the Kreiss constant $\mathcal{K}(\boldsymbol{J})$ which gives a lower bound to the transient behaviour (the upper bound is given by $eN\mathcal{K}(\boldsymbol{J})$ where $N$ is the matrix dimensionality), and the time over which the transient occurs $\tau=\log(\mathcal{K})/a$ where $a$ is the real part of the maximal pseudoeigenvalue.These quantities can be found using the `characterise_transient` function:
###Code
mcA = characterise_transient(A)
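# For orientation, the first two quantities can be cross-checked directly
# (a sketch, assuming the definitions above): the spectral abscissa is the
# largest real part of eig(J), and the numerical abscissa is the largest
# eigenvalue of the symmetric part (J + J^T)/2.
A_mat = np.array(A)
alpha_check = np.max(np.real(spl.eigvals(A_mat)))
omega_check = np.max(np.real(spl.eigvals((A_mat + A_mat.T) / 2)))
print("spectral abscissa ~", alpha_check, ", numerical abscissa ~", omega_check)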
t=ivp.t
f, ax = plt.subplots()
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|u|/|u_0|$')
ax.set_title(r'$\dot{u}=J\cdot u$')
ax.set_ylim((-.1,np.max(spl.norm(ivp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA[3])]
ax.plot(t_trunc,np.exp(mcA[1]*t_trunc),"--",color="orange")
ax.plot(t, np.exp(mcA[0]*t),"--",color="darkgreen")
plt.axhline(y=mcA[2],linestyle="dotted",color="black")
if 3*mcA[3]<t[-1]:
plt.axvline(x=mcA[3],linestyle="dotted",color="black")
ax.set_xlim((-.1, np.min([3*mcA[3],t[-1]])))
plt.annotate(r'Long time behaviour $\alpha (J)$',[1,1], [.2,2])
plt.annotate(r'Initial growth rate $\omega (J)$',[.01,90])
plt.annotate(r'Transient duration',[3.4,20], [3.3,20])
plt.annotate(r'Kreiss constant',[3.4,26], [5.3,90])
###Output
/home/ab/python/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:12: RuntimeWarning: overflow encountered in exp
if sys.path[0] == '':
/home/ab/python/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/home/ab/python/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/home/ab/python/anaconda3/lib/python3.6/site-packages/matplotlib/transforms.py:918: ComplexWarning: Casting complex values to real discards the imaginary part
self._points[:, 0] = interval
###Markdown
Exponential growth Suppose the system we are interested in grows exponentially in time. Then there is no meaning to a lower bound for a transient process, since the system will always saturate this bound at a large enough time.
###Code
A2 = np.array([[3,2],[9,4]])
mcA2 =characterise_transient(A2)
print("Kreiss constant = ", mcA2[2])
x0 = [1,1]
tf = 1
ivp_exp = solve_ivp(linear_system, (0,tf), x0, args=[A2], t_eval=np.arange(0,tf,.1))
mc = characterise_transient(A2)
t=ivp_exp.t
f, ax = plt.subplots()
plt.plot(ivp_exp.t,spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|x|/|x_0|$')
ax.set_title(r'$\dot{x}=A\cdot x$')
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.set_xlim((-.1, np.min([3*mcA2[3],t[-1]])))
plt.yscale('log')
plt.autoscale(enable=True, axis='y', tight=True)
plt.plot(t,np.exp(mcA2[0]*t),color="orange")
plt.legend(["Evolution with $A$","evolution with $\lambda_{Max}$"])
###Output
Kreiss constant = (2002900100+0j)
###Markdown
The Kreiss constant $K_0 \approx 2\times 10^{9}$ doesn't give us any useful information. Is there any way to get a good estimate for the transient properties of this system? The answer is, in fact, yes. Consider the ratio of maximum transient growth to maximum regular growth $$\frac{e^{\boldsymbol{J}t}}{e^{\lambda_{\text{max}}t}}$$ This is the solution to the associated kinematical system $$\dot{u}=\left(\boldsymbol{J}-\lambda_{\text{max}}I\right)u = \Gamma u$$ If we now characterise the transients of $\Gamma$ we get:
###Code
Gamma = A2 - np.max(spl.eigvals(A2))*np.identity(len(A2))
mcA2 = characterise_transient(Gamma)
print("Kreiss constant = ", mcA2[2])
x0 = [1.7,1]
tf = 1
ivp_exp2 = solve_ivp(linear_system, (0,tf), x0, args=[Gamma], t_eval=np.arange(0,tf,.01))
f, ax = plt.subplots()
plt.plot(ivp_exp2.t,spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|u|/|u_0|$')
# ax.set_title(r'$\dot{u}=\Gamma\cdot u$')
ax.plot(t, np.exp(mcA2[0]*t),"--",color="darkgreen")
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.plot(t_trunc,np.exp(mcA2[1]*t_trunc),"--",color="orange")
plt.axhline(y=mcA2[2],linestyle="dotted",color="black")
plt.ylim([.98,1.4])
plt.annotate(r'Long time behaviour $\alpha (\Gamma)$', [.2,1.01])
plt.annotate(r'Initial growth rate $\omega (\Gamma)$',[.0,1.05], rotation=68)
plt.annotate(r'Kreiss constant $\mathcal{K} (\Gamma)$', [.4,1.3])
###Output
Kreiss constant = (1.292701+0j)
###Markdown
Non normal network and transient response Introduction: Where eigen-analysis breaks downConsider an evolution equation of the form $$\dot{\boldsymbol{x}}=\boldsymbol{A}\boldsymbol{x}$$ where $$\boldsymbol{A}=\begin{pmatrix}-1 & 500\\0 & -2\end{pmatrix}$$The eigenvalues are clearly $-1,-2$. However, for some initial conditions this system will still grow massively in amplitude
###Code
%%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../../')
%run setup.py install
os.chdir(owd)
import numpy as np
import scipy.linalg as spl
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from pyross.contactMatrix import characterise_transient
A = [[-1,500],[0,-2]]
x0 = [1,1]
tf = 10
def linear_system(t, x, A): return A@x
ivp = solve_ivp(linear_system, (0,tf), x0, args=[A], t_eval=np.arange(0,tf,.1))
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
plt.xlabel("time")
plt.ylabel("$|x|/|x_0|$")
###Output
_____no_output_____
###Markdown
Here we see a massive amplification of the initial conditions, although the eigenvalues would suggest exponential decay. What is happening? The answer is that any non-normal matrix $\boldsymbol{A}$ (such that $\boldsymbol{A}\boldsymbol{A}^T \neq \boldsymbol{A}^T\boldsymbol{A}$) will give a transient response as the system relaxes back down to the (non-orthogonal) eigendirection.Such transients can be classified in terms of the spectral abscissa (eigenvalue(s) with maximal real component) $\alpha (\boldsymbol{A})$ which determines the long term behaviour, the numerical abscissa (eigenvalues of $\frac{1}{2}(\boldsymbol{A}+\boldsymbol{A}^T)$) $\omega (\boldsymbol{A})$, the Kreiss constant $\mathcal{K}(\boldsymbol{A})$ which gives a lower bound to the transient behaviour (the upper bound is given by $eN\mathcal{K}(\boldsymbol{A})$ where $N$ is the matrix dimensionality), and the time over which the transient occurs $\tau=\log(\mathcal{K})/a$ where $a$ is the real part of the maximal pseudoeigenvalue.These quantities can be found using the `characterise_transient` function:
###Code
mcA = characterise_transient(A)
t=ivp.t
f, ax = plt.subplots()
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|x|/|x_0|$')
ax.set_title(r'$\dot{x}=A\cdot x$')
ax.set_ylim((-.1,np.max(spl.norm(ivp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA[3])]
ax.plot(t_trunc,np.exp(mcA[1]*t_trunc),"--",color="orange")
ax.plot(t, np.exp(mcA[0]*t),"--",color="darkgreen")
plt.axhline(y=mcA[2],linestyle="dotted",color="black")
if 3*mcA[3]<t[-1]:
plt.axvline(x=mcA[3],linestyle="dotted",color="black")
ax.set_xlim((-.1, np.min([3*mcA[3],t[-1]])))
plt.annotate(r'Long time behaviour $\alpha (A)$',[1,1], [.2,2])
plt.annotate(r'Initial growth rate $\omega (A)$',[.01,90])
plt.annotate(r'Transient duration',[3.4,20], [3.3,20])
plt.annotate(r'Kreiss constant',[3.4,26], [5.3,90])
###Output
/Users/rsingh/software/anaconda/lib/python3.7/site-packages/ipykernel_launcher.py:12: RuntimeWarning: overflow encountered in exp
if sys.path[0] == '':
/Users/rsingh/software/anaconda/lib/python3.7/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/Users/rsingh/software/anaconda/lib/python3.7/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/Users/rsingh/software/anaconda/lib/python3.7/site-packages/matplotlib/transforms.py:2817: ComplexWarning: Casting complex values to real discards the imaginary part
vmin, vmax = map(float, [vmin, vmax])
###Markdown
Exponential growth Suppose the system we are interested in grows exponentially in time. Then there is no meaning to a lower bound for a transient process, since the system will always saturate this bound at a large enough time.
###Code
A2 = np.array([[3,2],[9,4]])
mcA2 =characterise_transient(A2)
print("Kreiss constant = ", mcA2[2])
x0 = [1,1]
tf = 1
ivp_exp = solve_ivp(linear_system, (0,tf), x0, args=[A2], t_eval=np.arange(0,tf,.1))
mc = characterise_transient(A2)
t=ivp_exp.t
f, ax = plt.subplots()
plt.plot(ivp_exp.t,spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|x|/|x_0|$')
ax.set_title(r'$\dot{x}=A\cdot x$')
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.set_xlim((-.1, np.min([3*mcA2[3],t[-1]])))
plt.yscale('log')
plt.autoscale(enable=True, axis='y', tight=True)
plt.plot(t,np.exp(mcA2[0]*t),color="orange")
plt.legend(["Evolution with $A$","evolution with $\lambda_{Max}$"])
###Output
Kreiss constant = (2002900200+0j)
###Markdown
The Kreiss constant $K_0 \approx 2\times 10^{9}$ doesn't give us any useful information. Is there any way to get a good estimate for the transient properties of this system? The answer is, in fact, yes. Consider the ratio of maximum transient growth to maximum regular growth $$\frac{e^{\boldsymbol{A}t}}{e^{\lambda_{\text{max}}t}}$$ This is the solution to the associated kinematical system $$\dot{x}=\left(\boldsymbol{A}-\lambda_{\text{max}}I\right)x = \Gamma x$$ If we now characterise the transients of $\Gamma$ we get:
###Code
Gamma = A2 - np.max(spl.eigvals(A2))*np.identity(len(A2))
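# Sanity check (a quick sketch): subtracting lambda_max shifts the spectrum so
# that the largest eigenvalue of Gamma sits at ~0, isolating the transient part
# of the dynamics from the exponential growth of A2.
print("eig(Gamma) =", spl.eigvals(Gamma))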
mcA2 = characterise_transient(Gamma)
print("Kreiss constant = ", mcA2[2])
x0 = [2,1]
tf = 10
ivp_exp2 = solve_ivp(linear_system, (0,tf), x0, args=[Gamma], t_eval=np.arange(0,tf,.1))
f, ax = plt.subplots()
plt.plot(ivp_exp2.t,spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|x|/|x_0|$')
ax.set_title(r'$\dot{x}=\Gamma\cdot x$')
ax.plot(t, np.exp(mcA2[0]*t),"--",color="darkgreen")
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.plot(t_trunc,np.exp(mcA2[1]*t_trunc),"--",color="orange")
plt.axhline(y=mcA2[2],linestyle="dotted",color="black")
plt.ylim([.98,1.4])
plt.annotate(r'Long time behaviour $\alpha (\Gamma)$', [.2,1.01])
plt.annotate(r'Initial growth rate $\omega (\Gamma)$',[.4,1.35])
plt.annotate(r'Kreiss constant $\mathcal{K} (\Gamma)$', [3.53,1.3])
###Output
Kreiss constant = (1.292701+0j)
###Markdown
Non normal network and transient response Introduction: Where eigen-analysis breaks downConsider an evolution equation of the form $$\dot{\boldsymbol{u}}=\boldsymbol{J}\boldsymbol{u}$$ where $$\boldsymbol{J}=\begin{pmatrix}-1 & 500\\0 & -2\end{pmatrix}$$The eigenvalues are clearly $-1,-2$. However, for some initial conditions this system will still grow massively in amplitude
###Code
%%capture
## compile PyRoss for this notebook
import os
owd = os.getcwd()
os.chdir('../../')
%run setup.py install
os.chdir(owd)
import numpy as np
import scipy.linalg as spl
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt
from pyross.contactMatrix import characterise_transient
A = [[-1,500],[0,-2]]
x0 = [1,1]
tf = 10
def linear_system(t, x, A): return A@x
ivp = solve_ivp(linear_system, (0,tf), x0, args=[A], t_eval=np.arange(0,tf,.1))
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
plt.xlabel("time")
plt.ylabel("$|u|/|u_0|$")
###Output
_____no_output_____
###Markdown
Here we see a massive amplification of the initial conditions, although the eigenvalues would suggest exponential decay. What is happening? The answer is that any non-normal matrix $\boldsymbol{J}$ (such that $\boldsymbol{J}\boldsymbol{J}^T \neq \boldsymbol{J}^T\boldsymbol{J}$) will give a transient response as the system relaxes back down to the (non-orthogonal) eigendirection.Such transients can be classified in terms of the spectral abscissa (eigenvalue(s) with maximal real component) $\alpha (\boldsymbol{J})$ which determines the long term behaviour, the numerical abscissa (eigenvalues of $\frac{1}{2}(\boldsymbol{J}+\boldsymbol{J}^T)$) $\omega (\boldsymbol{J})$, the Kreiss constant $\mathcal{K}(\boldsymbol{J})$ which gives a lower bound to the transient behaviour (the upper bound is given by $eN\mathcal{K}(\boldsymbol{J})$ where $N$ is the matrix dimensionality), and the time over which the transient occurs $\tau=\log(\mathcal{K})/a$ where $a$ is the real part of the maximal pseudoeigenvalue.These quantities can be found using the `characterise_transient` function:
###Code
mcA = characterise_transient(A)
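# As noted above, the Kreiss constant is a lower bound on the transient peak,
# while e*N*K(J) (N = matrix dimension) gives the corresponding upper bound.
print("K =", mcA[2], ", upper bound e*N*K =", np.e * len(A) * mcA[2])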
t=ivp.t
f, ax = plt.subplots()
plt.plot(ivp.t,spl.norm(ivp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|u|/|u_0|$')
ax.set_title(r'$\dot{u}=J\cdot u$')
ax.set_ylim((-.1,np.max(spl.norm(ivp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA[3])]
ax.plot(t_trunc,np.exp(mcA[1]*t_trunc),"--",color="orange")
ax.plot(t, np.exp(mcA[0]*t),"--",color="darkgreen")
plt.axhline(y=mcA[2],linestyle="dotted",color="black")
if 3*mcA[3]<t[-1]:
plt.axvline(x=mcA[3],linestyle="dotted",color="black")
ax.set_xlim((-.1, np.min([3*mcA[3],t[-1]])))
plt.annotate(r'Long time behaviour $\alpha (J)$',[1,1], [.2,2])
plt.annotate(r'Initial growth rate $\omega (J)$',[.01,90])
plt.annotate(r'Transient duration',[3.4,20], [3.3,20])
plt.annotate(r'Kreiss constant',[3.4,26], [5.3,90])
###Output
/home/ab/python/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:12: RuntimeWarning: overflow encountered in exp
if sys.path[0] == '':
/home/ab/python/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/home/ab/python/anaconda3/lib/python3.6/site-packages/numpy/core/_asarray.py:85: ComplexWarning: Casting complex values to real discards the imaginary part
return array(a, dtype, copy=False, order=order)
/home/ab/python/anaconda3/lib/python3.6/site-packages/matplotlib/transforms.py:918: ComplexWarning: Casting complex values to real discards the imaginary part
self._points[:, 0] = interval
###Markdown
Exponential growth Suppose the system we are interested in grows exponentially in time. Then there is no meaning to a lower bound for a transient process, since the system will always saturate this bound at a large enough time.
###Code
A2 = np.array([[3,2],[9,4]])
mcA2 =characterise_transient(A2)
print("Kreiss constant = ", mcA2[2])
x0 = [1,1]
tf = 1
ivp_exp = solve_ivp(linear_system, (0,tf), x0, args=[A2], t_eval=np.arange(0,tf,.1))
mc = characterise_transient(A2)
t=ivp_exp.t
f, ax = plt.subplots()
plt.plot(ivp_exp.t,spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|x|/|x_0|$')
ax.set_title(r'$\dot{x}=A\cdot x$')
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.set_xlim((-.1, np.min([3*mcA2[3],t[-1]])))
plt.yscale('log')
plt.autoscale(enable=True, axis='y', tight=True)
plt.plot(t,np.exp(mcA2[0]*t),color="orange")
plt.legend(["Evolution with $A$","evolution with $\lambda_{Max}$"])
###Output
Kreiss constant = (2002900100+0j)
###Markdown
The Kreiss constant $K_0 \approx 2\times 10^{9}$ doesn't give us any useful information. Is there any way to get a good estimate for the transient properties of this system? The answer is, in fact, yes. Consider the ratio of maximum transient growth to maximum regular growth $$\frac{e^{\boldsymbol{J}t}}{e^{\lambda_{\text{max}}t}}$$ This is the solution to the associated kinematical system $$\dot{u}=\left(\boldsymbol{J}-\lambda_{\text{max}}I\right)u = \Gamma u$$ If we now characterise the transients of $\Gamma$ we get:
###Code
Gamma = A2 - np.max(spl.eigvals(A2))*np.identity(len(A2))
mcA2 = characterise_transient(Gamma)
print("Kreiss constant = ", mcA2[2])
x0 = [1.7,1]
tf = 1
ivp_exp2 = solve_ivp(linear_system, (0,tf), x0, args=[Gamma], t_eval=np.arange(0,tf,.01))
f, ax = plt.subplots()
plt.plot(ivp_exp2.t,spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))
ax.set_xlabel("time")
ax.set_ylabel(r'$|u|/|u_0|$')
# ax.set_title(r'$\dot{u}=\Gamma\cdot u$')
ax.plot(t, np.exp(mcA2[0]*t),"--",color="darkgreen")
ax.set_ylim((-.1,np.max(spl.norm(ivp_exp2.y.T, axis=1)/spl.norm(x0))*1.1))
t_trunc = t[np.where(t<mcA2[3])]
ax.plot(t_trunc,np.exp(mcA2[1]*t_trunc),"--",color="orange")
plt.axhline(y=mcA2[2],linestyle="dotted",color="black")
plt.ylim([.98,1.4])
plt.annotate(r'Long time behaviour $\alpha (\Gamma)$', [.2,1.01])
plt.annotate(r'Initial growth rate $\omega (\Gamma)$',[.0,1.05], rotation=68)
plt.annotate(r'Kreiss constant $\mathcal{K} (\Gamma)$', [.4,1.3])
###Output
Kreiss constant = (1.292701+0j)
|
content/notebooks/kl-divergence.ipynb | ###Markdown
Motivating ExampleTo give a simple concrete example, let's suppose that we are given two normal distributions $N(\mu_1,1)$ and $N(\mu_2, 1)$ where a normal distribution is defined as $f(x\ |\ \mu,\ \sigma^2)=\frac{1}{\sqrt{2\sigma^2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$. Our goal is to find a new approximating normal distribution $N(\mu_Q, \sigma_Q)$ that best fits the sum of the original normal distributions.Here we run into a problem: **how do we define the quality of fit of the original distribution to the new distribution?** For example, would it be better to smooth out the approximating normal distribution across the two original modes or to fully cover one mode while leaving the other one uncovered? Visually, this corresponds to preferring option A or option B in the following plots:
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sb
import scipy
from scipy import stats
x = np.linspace(-10, 10, num=300)
norm_1 = stats.norm.pdf(x, loc=3) / 2
norm_2 = stats.norm.pdf(x, loc=-3) / 2
two_norms = norm_1 + norm_2
approx_norm_middle = stats.norm.pdf(x, loc=0, scale=4)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_middle, label='Q=N(0, 4)')
plt.title('Option A')
plt.legend(loc=2)
approx_norm_side = stats.norm.pdf(x, loc=3, scale=2)
plt.subplot(1, 2, 2)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_side, label='Q=N(3, 2)')
plt.title(f'Option B')
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
To give one answer to this question let's first label the original distribution (average of two normals) $P$ and the distribution we are using to approximate it $Q$. The view that the KL divergence takes is of asking "if I gave someone only $Q$, how much *additional* information would they need to know everything about $P$?"The usefulness of this formulation becomes obvious if you consider trying to approximate a very complex distribution $P$ with a simpler distribution $Q$. **You want to know how bad your new approximation $Q$ is!** To do so we first need to visit the concepts of entropy and cross entropy though. EntropyWe can formalize this notion of counting the information contained in a distribution by computing its [entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)). An intuitive way to understand what entropy means is by viewing it as the number of bits needed to encode some piece of information. For example, if I toss a coin three times I can have complete information of the events that occurred using only three bits (a 1 or 0 for each heads/tails).We are interested in the entropy of probability distributions which is defined as:$$H(X)=-\sum_{i=1}^n P(x_i)\log P(x_i)$$That is all well and good, but what does it mean? Let's start with a simple concrete example. Suppose we have a simple probability distribution over the likelihood of a coin flip resulting in heads or tails $[p, 1-p]$. Plugging this into the formula for entropy $H(x)$ we get $H(X)=-(p\log p +(1-p)\log (1-p))$Setting $p=.5$ results in $H(x)=.69$, and setting $p=.9$ results in $H(x)=.32$. We can also observe that as $p\rightarrow 1$, $H(X)\rightarrow 0$. This shows that if $p$ is very close to $1$ (where almost all the coin tosses will be heads), then the entropy is low. If $p$ is close to $.5$ then the entropy is at its peak.Conceptually this makes sense since there is more information in a sequence of coin tosses where the results are mixed rather than one where they are all the same. You can see this by considering the case where the distribution generates heads with likelihood $.99$ and tails with likelihood $.01$. A naive way to convey this information would be to report a $1$ for each heads and a $0$ for each tails. One way to represent this more efficiently would be to encode every two heads as a $1$, one heads as $01$, and tails as $00$ (note that there is no $0$ symbol otherwise you would not be able to tell whether $01$ meant a tails then a heads or one heads). This means that for every pair of heads we can represent it in half as many bits, but what about the other cases? We only need to represent a single heads when a tails occurs for which the overall cost of this combination is $4$ bits for 2 numbers. Take an example of encoding 99 heads and 1 tails: it would use $98/2=49$ bits to represent nearly all the heads and $4$ bits for the remaining heads and tails for a grand total of $53$ bits. This is much less than $100$ bits and it's all possible because the entropy is low!Now let's formalize the intuition from that example and return to the normal definition of entropy to explain why the entropy is defined that way.$$H(X)=-\sum_{i=1}^n P(x_i)\log P(x_i)=\sum_{i=1}^n P(x_i)\log \frac{1}{P(x_i)}$$To assist us let's define an information function $I$ in terms of an event $i$ and probability of that event $p_i$. How much information is acquired due to the observation of event $i$? Consider the properties of the information function $I$:1. 
When $p$ goes down then $I(p)$ goes up. This is sensible because under the coin toss example making a particular event more likely caused the entropy to go down and vice versa.2. $I(p)\ge 0$: Information cannot be negative, also sensible.3. $I(1)=0$: Events that always occur contain no information. This makes sense since as we took the limit of $p\rightarrow 1$, $H(X)\rightarrow 0$.4. $I(p_1p_2)=I(p_1)+I(p_2)$: Information due to independent events is additive. To see why property (4) is crucial and true consider two individual events. If the first event could result in one of $n$ equally likely outcomes and the second event in $m$ equally likely outcomes then there are $mn$ possible outcomes of both events combined. From information theory we know that $\log_2(n)$ bits and $\log_2(m)$ bits are required to encode events $n$ and $m$ respectively. From the property of logarithms we know that $\log_2(n)+\log_2(m)=\log_2(mn)$ so logarithmic functions preserve (4)! If we recall that the events are equally likely with some probability $p$ then we can realize that $1/p$ is the number of possible outcomes so it corresponds to choosing $I(p)=\log(1/p)$ (this generalizes with some more math). If we sample $N$ points then we observe each outcome $i$ on average $n_i=Np_i$. Thus the total amount of information received is:$$\sum_i n_i I(p_i)=\sum_i N p_i\log\frac{1}{p_i}$$Finally note that if we want the average amount of information per event that is simply $\sum_i p_i\log\frac{1}{p_i}$ which is exactly the expression for entropy $H(X)$ Cross EntropyWe have now seen that entropy gives us a way to quantify the information content of a given probability distribution, but what about the information content of one distribution relative to another? The [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) which is defined similarly to the regular entropy is used to calculate this. It quantifies the amount of information required to encode information coming from a probability distribution $P$ by using a different/wrong distribution $Q$. In particular, we want to know the average number of bits needed to encode some outcomes $x_i$ from $X$ with the probability distribution $q(x_i)=2^{-l_i}$ where $l_i$ is the length of the code for $x_i$ in bits. To arrive at the definition for cross entropy we will take the expectation of this length over the probability distribution $p$.$$\begin{align*}H(p,q)&=E_p[l_i]=E_p\big[\log\frac{1}{q(x_i)}\big]\\H(p,q)&=\sum_{x_i}p(x_i)\log\frac{1}{q(x_i)}\\H(p,q)&=-\sum_x p(x)\log q(x)\end{align*}$$With the definition of the cross entropy we can now move onto combining it with the entropy to arrive at the KL divergence. KL DivergenceNow armed with the definitions for entropy and cross entropy we are ready to return to defining the KL divergence. Recall that $H(P, Q)$ represents the amount of information needed to encode $P$ with $Q$. Also recall that $H(P)$ is the amount of information necessary to encode $P$. Knowing these makes defining the KL divergence trivial as simply the amount of information needed to encode $P$ with $Q$ minus the amount of information to encode $P$ with itself:$$\begin{align*}D_{KL}(P||Q)&=H(P,Q)-H(P)\\&=-\sum_x p(x)\log q(x)+\sum_x p(x)\log p(x)\\&=\sum_x\bigg[p(x)[\log p(x)-\log q(x)]\bigg]\\&=\sum_x p(x)\log\frac{p(x)}{q(x)}\end{align*}$$With the origin and derivation of the KL divergence clear now, lets get some intuition on how the KL divergence behaves then returning to the original example involving two normal distributions. Observe that:1. 
When $p(x)$ is large, but $q(x)$ is small, the divergence gets very big. This corresponds to not covering $P$ well with $Q$2. When $p(x)$ is small, but $q(x)$ is large, the divergence is also large, but not as large as in (1). This corresponds to putting $Q$ where $P$ is not.I have again plotted the normal distributions from the beginning of this post, but am now including the raw value of the KL divergence as well as its value at each point.
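As a quick numerical sanity check of the quantities above (a small sketch; the distributions `p` and `q` below are arbitrary placeholders, and `scipy.stats.entropy` with two arguments computes exactly the KL sum used in this post):
```python
import numpy as np
from scipy import stats

# Coin-flip entropies quoted in the entropy discussion above
print(stats.entropy([0.5, 0.5]))   # ~0.693
print(stats.entropy([0.9, 0.1]))   # ~0.325

# KL divergence: the explicit sum vs. scipy's two-argument entropy
p = np.array([0.2, 0.5, 0.3])
q = np.array([0.1, 0.4, 0.5])
print(np.sum(p * np.log(p / q)), stats.entropy(p, q))  # same value
```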
###Code
approx_norm_middle = stats.norm.pdf(x, loc=0, scale=4)
middle_kl = stats.entropy(two_norms, approx_norm_middle)
middle_pointwise_kl = scipy.special.kl_div(two_norms, approx_norm_middle)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_middle, label='Q=N(0, 4)')
plt.plot(x, middle_pointwise_kl, label='KL Divergence', linestyle='-')
plt.title(f'Approximate two Guassians with one in the center, KL={middle_kl:.4f}')
plt.legend()
plt.subplot(1, 2, 2)
approx_norm_side = stats.norm.pdf(x, loc=3, scale=2)
side_kl = stats.entropy(two_norms, approx_norm_side)
side_pointwise_kl = scipy.special.kl_div(two_norms, approx_norm_side)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_side, label='Q=N(3, 2)')
plt.plot(x, side_pointwise_kl, label='KL Divergence', linestyle='-')
plt.title(f'Approximate two Guassians by covering one more than the other, KL={side_kl:.4f}')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
By looking at this it's now easy to see how properties (1) and (2) play out in practice. The KL divergence is much happier with the solution on the left since $P$ is always at least partially covered. It is comparatively unhappy with the right solution since it leaves the left normal mode uncovered. Thus, in general the KL divergence of $P$ approximated with $Q$ prefers to average out modes.One increasingly common use case for the KL divergence in machine learning is in [Variational Inference](https://en.wikipedia.org/wiki/Variational_Bayesian_methods). For a number of reasons, the optimized quantity there is the KL divergence of $Q$ approximated by $P$, written as $D_{KL}(Q||P)$. The KL divergence is **not** symmetric, so the behavior can be, and in general is, different. I have drawn the same normal distributions again, this time using this alternative direction of the KL divergence.
###Code
approx_norm_middle = stats.norm.pdf(x, loc=0, scale=4)
middle_kl = stats.entropy(approx_norm_middle, two_norms)
middle_pointwise_kl = scipy.special.kl_div(approx_norm_middle, two_norms)
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_middle, label='Q=N(0, 4)')
plt.plot(x, middle_pointwise_kl, label='KL Divergence', linestyle='-')
plt.title(f'Approximate two Guassians with one in the center, KL={middle_kl:.4f}')
plt.legend()
plt.subplot(1, 2, 2)
approx_norm_side = stats.norm.pdf(x, loc=3, scale=2)
side_kl = stats.entropy(approx_norm_side, two_norms)
side_pointwise_kl = scipy.special.kl_div(approx_norm_side, two_norms)
plt.plot(x, two_norms, label='P=(N(-3, 1) + N(3, 1))/2')
plt.plot(x, approx_norm_side, label='Q=N(3, 2)')
plt.plot(x, side_pointwise_kl, label='KL Divergence', linestyle='-')
plt.title(f'Approximate two Guassians by covering one more than the other, KL={side_kl:.4f}')
plt.legend()
plt.show()
###Output
_____no_output_____ |
notebooks/TRIANGLES_eval_models.ipynb | ###Markdown
TRIANGLES
###Code
data_dir = '/scratch/ssd/data/graph_attention_pool/'
checkpoints_dir = '../checkpoints'
device = 'cuda'
with open('%s/random_graphs_triangles_test.pkl' % data_dir, 'rb') as f:
data = pickle.load(f)
print(data.keys())
targets = torch.from_numpy(data['graph_labels']).long()
Node_degrees = [np.sum(A, 1).astype(np.int32) for A in data['Adj_matrices']]
feature_dim = data['Max_degree'] + 1
node_features = []
for i in range(len(data['Adj_matrices'])):
N = data['Adj_matrices'][i].shape[0]
D_onehot = np.zeros((N, feature_dim ))
D_onehot[np.arange(N), Node_degrees[i]] = 1
node_features.append(D_onehot)
def acc(pred):
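    # Exact-match accuracy (%) of the rounded predictions against the graph labels seen so far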
n = len(pred)
return torch.mean((torch.stack(pred).view(n) == targets[:len(pred)].view(n)).float()).item() * 100
def test(model, index, show_img=False):
N_nodes = data['Adj_matrices'][index].shape[0]
mask = torch.ones(1, N_nodes, dtype=torch.uint8)
x = torch.from_numpy(node_features[index]).unsqueeze(0).float()
A = torch.from_numpy(data['Adj_matrices'][index].astype(np.float32)).float().unsqueeze(0)
y, other_outputs = model(data_to_device([x, A, mask, -1, {'N_nodes': torch.zeros(1, 1) + N_nodes}],
device))
y = y.round().long().data.cpu()[0][0]
alpha = other_outputs['alpha'][0].data.cpu() if 'alpha' in other_outputs else []
return y, alpha
# This function returns predictions for the entire clean and noise test sets
def get_predictions(model_path):
state = torch.load(model_path)
args = state['args']
model = ChebyGIN(in_features=14,
out_features=1,
filters=args.filters,
K=args.filter_scale,
n_hidden=args.n_hidden,
aggregation=args.aggregation,
dropout=args.dropout,
readout=args.readout,
pool=args.pool,
pool_arch=args.pool_arch)
model.load_state_dict(state['state_dict'])
model = model.eval().to(device)
# print(model)
# Get predictions
pred, alpha = [], []
for index in range(len(data['Adj_matrices'])):
y = test(model, index, index == 0)
pred.append(y[0])
alpha.append(y[1])
if len(pred) % 1000 == 0:
print('{}/{}, acc on the combined test set={:.2f}%'.format(len(pred), len(data['Adj_matrices']), acc(pred)))
return pred, alpha
###Output
_____no_output_____
###Markdown
Weakly-supervised attention model
###Code
pred, alpha = get_predictions('%s/checkpoint_triangles_230187_epoch100_seed0000111.pth.tar' % checkpoints_dir)
###Output
ChebyGINLayer torch.Size([64, 98]) tensor([0.5568, 0.5545, 0.5580, 0.5656, 0.5318, 0.5698, 0.5655, 0.5937, 0.6087,
0.5437], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([32, 128]) tensor([0.5730, 0.5968, 0.5778, 0.5940, 0.5981, 0.5787, 0.5619, 0.5798, 0.5741,
0.5833], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([32, 64]) tensor([0.5703, 0.5380, 0.5825, 0.5836, 0.5649, 0.5537, 0.6568, 0.6129, 0.6161,
0.5258], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([1, 64]) tensor([0.5634], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([64, 448]) tensor([0.5923, 0.5840, 0.5608, 0.5615, 0.5799, 0.5668, 0.5924, 0.5840, 0.5709,
0.5637], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([32, 128]) tensor([0.5606, 0.5821, 0.5540, 0.5596, 0.6033, 0.6147, 0.5738, 0.5865, 0.5981,
0.5800], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([32, 64]) tensor([0.5938, 0.6073, 0.5995, 0.5230, 0.6091, 0.6070, 0.5901, 0.5752, 0.5594,
0.5499], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([1, 64]) tensor([0.6102], grad_fn=<SliceBackward>)
ChebyGINLayer torch.Size([64, 448]) tensor([0.5877, 0.5797, 0.5591, 0.5688, 0.5758, 0.5645, 0.5483, 0.5846, 0.5883,
0.5961], grad_fn=<SliceBackward>)
1000/10000, acc on the combined test set=83.00%
2000/10000, acc on the combined test set=76.30%
3000/10000, acc on the combined test set=72.23%
4000/10000, acc on the combined test set=68.73%
5000/10000, acc on the combined test set=66.82%
6000/10000, acc on the combined test set=59.90%
7000/10000, acc on the combined test set=55.04%
8000/10000, acc on the combined test set=51.60%
9000/10000, acc on the combined test set=49.02%
10000/10000, acc on the combined test set=46.69%
|
Term Project_Neural Network_numeric.ipynb | ###Markdown
New Section
###Code
packageList <- c("dplyr", "keras","jpeg", "ggplot2", "rio")
for(package in packageList){
if(!require(package,character.only = TRUE)){
install.packages(package);require(package,character.only = TRUE);}
}
df <- read.csv("NHANES for ML.csv", row.names=1)
# table(df$HbA1c)
colnames(df)
table(df$diagnosed.diabetes)
table(df$diagnosed.kidney.disease)
table(df$diagnosed.diabetes, df$diagnosed.kidney.disease, dnn = list("diabetes", "kidney.disease"))
table(df$KIQ022)
table(df$DIQ010)
# table(df$URDACT)
table(df$PA_level)
dim(df)
# create selection dataframe for columns to avoid NA
selection <- sapply(df, function(xx) {c("Missing.numbers" = sum(is.na(xx)),
"Missing.percentage" = sum(is.na(xx))/nrow(df),
"Is.numeric" = is.numeric(xx),
"Median.values" = ifelse( is.numeric(xx), median(xx, na.rm = TRUE), 999999999) ) }) %>%
t %>% as.data.frame() %>% add_rownames
hist(selection$Missing.percentage, breaks = 200)
select.names <- subset(selection, Missing.percentage < 0.1 & Is.numeric == 1)$rowname # set 10% as the cutting line to select columns
select.names
character.names <- subset(selection, Is.numeric == 0)$rowname
character.names
df1 <- df[, c(character.names, select.names)]
# delete rows with NA vaules
df2 <- df1
for (col in select.names) {df2 <- subset(df2, !is.na(df2[[col]]))}
sum(is.na(df2))
# Step 1.Set up the data
# 1/3 is test and the rest are training
n <- nrow(df2)
set.seed (13)
ntest <- trunc(n / 3)
testid <- sample (1:n, ntest)
# Step 2.Create x and y
x <- model.matrix(HbA1c ~ . - 1, data = df2) %>% scale () # long time running
dim(x)
x_train <- array(x[-testid , ], dim = c(dim(x[-testid , ])[1], dim(x[-testid , ])[2]))
x_test <- array(x[testid , ], dim = c(dim(x[testid , ])[1], dim(x[testid , ])[2]))
y <- df2$diagnosed.diabetes
g_train <- y[-testid]
g_test <- y[testid]
y_train <- to_categorical(g_train, length(unique(y)))
y_test <- to_categorical(g_test , length(unique(y)))
#Step 3.Linear regression
lfit <- lm(HbA1c ~ ., data = df2[-testid , ])
lpred <- predict(lfit , df2[testid , ])
with(df2[testid , ], mean(abs(lpred - HbA1c))) # method 2: mean absolute error of the linear model on the test set
modnn <- keras_model_sequential () %>%
layer_dense(units = round(max(x)), activation = "relu",
input_shape = ncol(x)) %>%
layer_dropout(rate = 0.4) %>%
layer_dense(units = round(max(x))/2, activation = "relu" ) %>%
layer_dropout(rate = 0.3) %>%
layer_dense(units = 1)
modnn %>% compile(loss = "mse",
optimizer = optimizer_rmsprop (),
metrics = list("mean_absolute_error")
)
history <- modnn %>% fit(
x_train, y_train, epochs = 150, batch_size = (max(x_train)/2),
validation_data = list(x_test , y_test)
)
plot(history , smooth = FALSE)
npred <- predict(modnn , x[testid , ])
mean(abs(y[testid] - npred))
npred <- predict(modnn , x[-testid , ])
mean(abs(y[-testid] - npred))
###Output
_____no_output_____ |
Python/ResultsAnalysis/DifferentFeature_Compare.ipynb | ###Markdown
--- ISOLET---
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn-dark')
names=['SPEC','NDFS','LS','AEFS', 'CAE', 'MCFS','PFA','AgnoS-S','$|W_1|$','$W_1^2$']
selected_features = [10,25,40,55,70,85]
fig, (ax1, ax2) = plt.subplots(1,2,figsize=(24,8))
ax1.grid(True,linestyle='--',linewidth = 2.5,zorder=-1,axis="y")
ax1.set_xticks(selected_features)
ax1.set_xlabel('Number of selected features', fontsize = 28)
ax1.set_yticks(np.arange(0,1.5,0.28))
ax1.set_ylabel('Linear reconstruction error', fontsize = 28)
ax1.tick_params(labelsize=28)
ax1.set_ylim([-0.05, 1.1])
ax2.grid(True,linestyle='--',linewidth = 2.5,zorder=-1,axis="y")
ax2.set_xticks(selected_features)
ax2.set_xlabel('Number of selected features', fontsize =28)
ax2.set_yticks(np.arange(0,1.0,0.25))
ax2.set_ylabel('Classification accuracy', fontsize = 28)
ax2.tick_params(labelsize=28)
ax2.set_ylim([0, 1.0])
error_1 = [0.076348571948117,0.07706732006898086,0.12167384686141766,0.12004096940824423,0.12998828857527522,0.14228227798254703]
error_2 = [0.07336641316191968,0.11500690614233806,0.11279193384337662,0.13976455731989382,0.13838623209285103,0.17183374820576663]
error_3 = [1.0841869160233764, 1.0539002561490405, 1.0801097755695894, 0.7765303388410707, 0.7323556604028868, 0.7307772395890515]
error_4 = [0.7198136265549714, 0.6083768522214379, 0.5319769727484903, 0.4820256201916196, 0.4429506596423366, 0.4421797105264942]
error_5 = [0.6277713655799244, 0.5195723499119222, 0.4166404659164897, 0.3760960837075645, 0.35409210312886563, 0.3171214981498588]
error_6 = [0.8013841008085286, 0.645445836312186, 0.6161071906135848, 0.5854821723427042, 0.6111251940631501, 0.5722731006860357]
error_7 = [0.71003590252778, 0.5582584445668546, 0.5037666581433617, 0.43557315642500916, 0.4032654591565344, 0.3665151564517582]
error_8=[0.031505184855725205,0.03327306713221566,0.043004340719269105,0.03670922624834148,0.03477488310043814,0.04338000007119776]
error_9 =[0.029837051517744965, 0.02179263467379705, 0.018453868049122874, 0.016304111055073983, 0.014139283627847185, 0.012825284672845084]
error_10 =[0.030390729947764403,0.023238077453928467, 0.019489918583178882, 0.016584674451470344, 0.014147786811767656, 0.012661939958993758]
ax1.plot(selected_features, error_1, marker='o', mec='orange',c='orange', mfc='w',ms=12)
ax1.plot(selected_features, error_2, marker='+', c='blue',ms=12)
ax1.plot(selected_features, error_3, marker='2', c='darkcyan',ms=12)
ax1.plot(selected_features, error_4, marker='v', c='fuchsia',ms=12)
ax1.plot(selected_features, error_5, marker='s', c='chocolate',ms=12)
ax1.plot(selected_features, error_6, marker='X', c='aqua',ms=12)
ax1.plot(selected_features, error_7, marker='d', c='brown',ms=12)
ax1.plot(selected_features, error_8, marker='>', c='dodgerblue',ms=12)
ax1.plot(selected_features, error_9, marker='*', c='red',ms=18,lw=3)
ax1.plot(selected_features, error_10, marker='<', c='green',ms=12,lw=3)
accuracy_1=[0.046183450930083386,0.04490057729313662,0.05388069275176395,0.05195638229634381,0.04682488774855677, 0.04746632456703015]
accuracy_2=[0.13598460551635663, 0.12379730596536241,0.08915971776779986,0.09557408595253368,0.0782552918537524,0.07633098139833226]
accuracy_3=[0.1468890314304041, 0.1892238614496472, 0.2610647851186658, 0.43361128928800513, 0.477228992944195, 0.5253367543296985]
accuracy_4=[0.28223220012828737, 0.4079538165490699, 0.5355997434252726, 0.5368826170622194, 0.6080821039127646, 0.6266837716484926]
accuracy_5= [0.354073123797306, 0.5586914688903143, 0.6792815907633099, 0.704939063502245, 0.7196921103271328, 0.7190506735086594]
accuracy_6=[0.2482360487491982, 0.4939063502245029, 0.5439384220654265, 0.5388069275176395, 0.5785760102629891, 0.6818473380372033]
accuracy_7=[0.23733162283515075, 0.46247594611930726, 0.5618986529826812, 0.6484926234765875, 0.6933932007697242, 0.7190506735086594]
accuracy_8=[0.5202052597819115,0.4413085311096857,0.07376523412443874,0.2783835792174471,0.31430404105195636,0.0885182809493265]
accuracy_9=[0.43681847338037205, 0.7357280307889673, 0.8197562540089801, 0.8576010262989096, 0.8896728672225785, 0.8826170622193714]
accuracy_10=[0.48877485567671586,0.699807568954458, 0.8242463117382938, 0.8300192431045542, 0.8883899935856319, 0.8832584990378448]
ax2.plot(selected_features, accuracy_1, marker='o', mec='orange',c='orange', mfc='w',ms=12)
ax2.plot(selected_features, accuracy_2, marker='+', c='blue',ms=12)
ax2.plot(selected_features, accuracy_3, marker='2', c='darkcyan',ms=12)
ax2.plot(selected_features, accuracy_4, marker='v', c='fuchsia',ms=12)
ax2.plot(selected_features, accuracy_5, marker='s', c='chocolate',ms=12)
ax2.plot(selected_features, accuracy_6, marker='X', c='aqua',ms=12)
ax2.plot(selected_features, accuracy_7, marker='d', c='brown',ms=12)
ax2.plot(selected_features, accuracy_8, marker='>', c='dodgerblue',ms=12)
ax2.plot(selected_features, accuracy_9, marker='*', c='red',ms=18,lw=3)
ax2.plot(selected_features, accuracy_10, marker='<', c='green',ms=12,lw=3)
plt.subplots_adjust(right=1,wspace =0.15,hspace =0)
fig.legend(labels=names,fontsize=28, loc='upper center', bbox_to_anchor=(0.535,1.0),ncol=10,handletextpad=0.1,columnspacing=1.4, fancybox=True,framealpha=0.1,shadow=True)
plt.show()
###Output
_____no_output_____ |
stimuli/tdw_to_png.ipynb | ###Markdown
Convert stimuli generated in TDW to png then upload to s31. The first part of this notebook converts hdf5 files generated in tdw into png files with the appropriate labels in the format: study_condition_stability_numBlocks_index.png (e.g. curiotower_varyhorizontal_unstable_8_0001.png)2. The second part has some helpful analysis and can be used to visualize towers in the hdf5 format
###Code
#!pip install h5py
import warnings
import os
import h5py
import numpy as np
from PIL import Image
import io
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
1. Function to convert hdf5 into png with new nameTakes a stimulus condition (varyHorizontal,varyScale, or varyNumber) and converts the first frame to a png with information on stability and num blocks
###Code
#condition = 'varyScale'
#condition = 'varyHorizontal'
condition = 'varyNumber'
TDW_DIR = "../../tdw_physics/controllers/D:/{}/".format(condition)
PNG_DIR = "./tdw_png/"
idx = 0
for file in os.listdir(TDW_DIR):
stability = "unstable"
if file.endswith('.hdf5'):
f = h5py.File(os.path.join(TDW_DIR, file))
frames = f['frames']
frame = frames["%04d" % (0)]
#Get block count
numBlocks = str(len(frame['objects']['positions']))
#Get index count
index = str(idx).zfill(4)
#Get stability
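        # A tower is labeled stable when the top (last-placed) block's y-position
        # drops by less than 0.2 between the first and last frame; otherwise it fell.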
if (((frames["%04d" % (0)]['objects']['positions'][-1][1])-
(frames["%04d" % (len(frames)-1)]['objects']['positions'][-1][1]))<0.2):
stability = 'stable'
img = frame['images']
_img = Image.open(io.BytesIO(img["_img"][:]))
new_filename = 'curiotower_' + condition + "_" + stability + "_" + numBlocks + "_" + index + ".png"
_img.save(PNG_DIR+new_filename)
#RENAME HDF5 WITH SAME INDEX
os.rename(TDW_DIR+file, TDW_DIR+'curiotower_' + condition + "_" + stability + "_" + numBlocks + "_" + index + ".hdf5")
idx+=1
###Output
_____no_output_____
###Markdown
2. Some helpful functions to analyze and visualize hdf5 files Visualize first, middle, and last frame of hdf5 file
###Code
# #condition = 'varyScale'
# #condition = 'varyHorizontal'
# condition = 'varyNumber'
# TDW_DIR = "../../tdw_physics/controllers/D:/{}/".format(condition)
TDW_DIR = "../../tdw_physics/controllers/D:/stability/"
FILE = "0003.hdf5"
f = h5py.File(os.path.join(TDW_DIR, FILE))
# print the data structure
print("top keys", [k for k in f.keys()])
frames = f['frames']
n_frames = len([k for k in frames.keys()])
print("num frames: {}".format(n_frames))
#view_frame = np.minimum(view_frame, n_frames - 1)
for view_frame in [0, 40, len(frames)-1]:
frame = frames["%04d" % (view_frame)]
img = frame['images']
_img = Image.open(io.BytesIO(img["_img"][:]))
display(_img)
###Output
top keys []
###Markdown
Function to classify stable, precarious, and unstable towers- we define "unstable" as any tower that falls over (large delta in y-axis height from first to last frame)- "precarious" towers are those that remain standing, but have an x-axis delta greater than the scale_factor/3- "stable" towers are those that remain standing with x-axis delta smaller than scale_factor/3 Calculate tower height- is this on some meaningful absolute scale?- how to account for viewing angle, etc...
###Code
TDW_DIR = "../../tdw_physics/controllers/D:/stability/"
#loop through generated hdf5 files
tower_heights = []
for file in os.listdir(TDW_DIR):
if file.endswith('.hdf5'):
f = h5py.File(os.path.join(TDW_DIR, file))
frames = f['frames']
#get height of tallest block (last placed) in first frame
tower_heights.append(frames["%04d" % (0)]['objects']['positions'][-1][1])
print(tower_heights)
#To calculate precarious, get max difference in x-axis
def get_x_diff(num_objects = 1):
min_x = 100
max_x = -100
for i in range(num_objects):
min_x = min(frames["%04d" % (0)]['objects']['positions'][i][0], min_x)
max_x = max(frames["%04d" % (0)]['objects']['positions'][i][0], max_x)
return(max_x - min_x)
#Stable if it does not fall over
#Precarious if max x-axis jitter is >1/4 scale
#unstable if falls over
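# A compact sketch of that rule (thresholds are illustrative: the markdown above
# mentions scale_factor/3 for "precarious", while the analysis cell further down uses /2):
def classify_tower(y_drop, x_spread, scale_factor=0.23):
    if y_drop >= 0.2:                   # top block dropped -> the tower fell
        return "unstable"
    if x_spread > scale_factor / 3:     # standing, but with large x-axis jitter
        return "precarious"
    return "stable"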
###Output
_____no_output_____
###Markdown
Hdf5 hierarchy
###Code
# static/ # Data that doesn't change per frame.
# ....object_ids
# ....mass
# ....static_friction
# ....dynamic_friction
# ....bounciness
# frames/ # Per-frame data.
# ....0000/ # The frame number.
# ........images/ # Each image pass.
# ............_img
# ............_id
# ............_depth
# ............_normals
# ............_flow
# ........objects/ # Per-object data.
# ............positions
# ............forwards
# ............rotations
# ............velocities
# ............angular_velocities
# ........collisions/ # Collisions between two objects.
# ............object_ids
# ............relative_velocities
# ............contacts
# ........env_collisions/ # Collisions between one object and the environment.
# ............object_ids
# ............contacts
# ........camera_matrices/
# ............projection_matrix
# ............camera_matrix
# ....0001/
# ........ (etc.)
###Output
_____no_output_____
###Markdown
Inspect Elements of hdf5
###Code
FILE = "0001.hdf5"
f = h5py.File(os.path.join(TDW_DIR, FILE))
# print the data structure
print("top keys", [k for k in f.keys()])
frames = f['frames']
view_frame =1
frame = frames["%04d" % (view_frame)]
frame.keys()
obj = frame['objects']
for key in obj.keys():
print(obj[key])
###Output
<HDF5 dataset "angular_velocities": shape (8, 3), type "<f4">
<HDF5 dataset "forwards": shape (8, 3), type "<f4">
<HDF5 dataset "positions": shape (8, 3), type "<f4">
<HDF5 dataset "rotations": shape (8, 4), type "<f4">
<HDF5 dataset "velocities": shape (8, 3), type "<f4">
###Markdown
Calculate stability
###Code
FILE = "0005.hdf5"
f = h5py.File(os.path.join(TDW_DIR, FILE))
# print the data structure
print("top keys", [k for k in f.keys()])
frames = f['frames']
view_frame =0
frame = frames["%04d" % (view_frame)]
obj = frame['objects']
print("Num objects:", len(obj['positions']))
for pos in obj['positions']:
print(pos)
get_x_diff(len(obj['positions']))
#To calculate precarious, get max difference in x-axis
def get_x_diff(num_objects = len(obj['positions'])):
min_x = 100
max_x = -100
for i in range(num_objects):
min_x = min(frames["%04d" % (0)]['objects']['positions'][i][0], min_x)
max_x = max(frames["%04d" % (0)]['objects']['positions'][i][0], max_x)
return(max_x - min_x)
TDW_DIR = "./controllers/D:/stability/"
STABLE_DIR = "./controllers/D:/stable/"
scale_factor = 0.23
stable_towers = {}
precarious_towers = {}
unstable_towers = {}
#loop through generated hdf5 files
for file in os.listdir(TDW_DIR):
if file.endswith('.hdf5'):
f = h5py.File(os.path.join(TDW_DIR, file))
frames = f['frames']
frame = frames["%04d" % (0)]
obj = frame['objects']
#check if top block has moved down by more than one block length
if (((frames["%04d" % (0)]['objects']['positions'][-1][1])-
(frames["%04d" % (len(frames)-1)]['objects']['positions'][-1][1]))<0.2):
if(get_x_diff(len(obj['positions'])) > scale_factor/2):
precarious_towers[file] = get_x_diff(len(obj['positions']))
else:
stable_towers[file] = get_x_diff(len(obj['positions']))
else:
unstable_towers[file] = get_x_diff(len(obj['positions']))
#os.rename("./controllers/D:/stability/{}".format(file), "./controllers/D:/stable/{}".format(file))
print("Stable:", len(stable_towers), "| Precarious:", len(precarious_towers), "| Unstable:", len(unstable_towers))
print(stable_towers)
###Output
Stable: 13 | Precarious: 0 | Unstable: 7
{'0000.hdf5': 0.05313666, '0017.hdf5': 0.08293782, '0001.hdf5': 0.03446407, '0006.hdf5': 0.080784045, '0007.hdf5': 0.06164322, '0012.hdf5': 0.007347323, '0004.hdf5': 0.00967929, '0008.hdf5': 0.019052664, '0009.hdf5': 0.075513095, '0005.hdf5': 0.034038514, '0013.hdf5': 0.10254289, '0002.hdf5': 0.00050380453, '0019.hdf5': 0.035791665}
|
Algorithm Problems/pgrms_3_124country.ipynb | ###Markdown
Programmers level 3 problem - Numbers of the 124 country - https://programmers.co.kr/learn/courses/30/lessons/12899 Problem There is a country called 124. The 124 country does not use the decimal system; instead, it expresses numbers with the following rules of its own.- Only natural numbers exist in the 124 country.- The 124 country uses only 1, 2 and 4 to express every number.- For example, numbers are converted into 124-country numbers as follows:
```
decimal   124 country
 1        1
 2        2
 3        4
 4        11
 5        12
 6        14
 7        21
 8        22
 9        24
10        41
```
Given a natural number n as a parameter, complete the solution function so that it returns n converted into the number used in the 124 country.- Constraint: n is a natural number less than or equal to 500,000,000.- Example input/output:
```
n  result
1  1
2  2
3  4
4  11
```
Approach - Approach it as a base-3 conversion (think of the digits 1, 2, 4 instead of 0, 1, 2) - Divide by 3, keep the quotient in n, and append the remainder a to the answer string - If the remainder is 0, replace it with the digit 4 and decrease the quotient by 1 - Show the result in reverse order Solution
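A short worked trace (added for illustration): for n = 6 the first remainder is 6 % 3 = 0, so the digit becomes 4 and the quotient is reduced to 6 // 3 - 1 = 1; the next remainder is 1 % 3 = 1 with quotient 0; reading the collected digits in reverse order gives `14`, which matches the table above.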
###Code
def solution(n):
answer = ''
while n>0:
a = n % 3
n //= 3
if a == 0:
n -= 1
a = 4
answer += str(a)
return(answer[::-1])
    # this turned out faster than prepending with answer = str(a) + answer and returning answer as-is
solution(15)
###Output
_____no_output_____ |
pyspark-advanced/jupyter-repartition/Repartitioning - Full.ipynb | ###Markdown
Repartitioning DataFramesPartitions are a central concept in Apache Spark. They are used for distributing and parallelizing work onto different executors, which run on multiple servers. Determining PartitionsBasically Spark uses two different strategies for splitting up data into multiple partitions:1. When Spark loads data, the records are put into partitions along natural borders. For example every HDFS block (and thereby every file) is represented by a different partition. Therefore the number of partitions of a DataFrame read from disk is solely determined by the number of HDFS blocks2. Certain operations like `JOIN`s and aggregations require that records with the same key are physically in the same partition. This is achieved by a shuffle phase. The number of partitions is specified by the global Spark configuration variable `spark.sql.shuffle.partitions` which has a default value of 200. Repartitioning DataSince partitions have a huge influence on the execution, Spark also allows you to explicitly change the partitioning schema of a DataFrame. This makes sense only in a very limited (but still important) set of cases, which we will discuss in this notebook. Weather ExampleSurprise, surprise, we will again use the weather example and see what explicit repartitioning gives us.
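The shuffle partition count mentioned above can be inspected and changed at runtime; a minimal sketch (it assumes the `spark` session created in the next cell):
```python
# read the current setting (defaults to "200")
print(spark.conf.get("spark.sql.shuffle.partitions"))
# it could be overridden like this, but the rest of this notebook assumes the default of 200:
# spark.conf.set("spark.sql.shuffle.partitions", 100)
```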
###Code
from pyspark.sql import SparkSession
if not 'spark' in locals():
spark = (
SparkSession.builder.master("local[*]")
.config("spark.driver.memory", "24G")
.getOrCreate()
)
spark
###Output
_____no_output_____
###Markdown
Disable Automatic Broadcast JOINsIn order to see the shuffle operations, we need to prevent Spark from executing `JOIN` operations as broadcast joins. Again this can be turned off by setting the Spark configuration variable `spark.sql.autoBroadcastJoinThreshold` to -1.
###Code
spark.conf.set("spark.sql.adaptive.enabled", False)
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
###Output
_____no_output_____
###Markdown
1 Load DataFirst we load the weather data, which consists of the measurement data and some station metadata.
###Code
storageLocation = "s3://dimajix-training/data/weather"
###Output
_____no_output_____
###Markdown
1.1 Load MeasurementsMeasurements are stored in multiple directories (one per year). But we will limit ourselves to a single year in the analysis to improve readability of execution plans.
###Code
from functools import reduce
import pyspark.sql.functions as f
# Read in all years, store them in an Python array
raw_weather_per_year = [
spark.read.text(storageLocation + "/" + str(i)).withColumn("year", f.lit(i))
for i in range(2003, 2015)
]
# Union all years together
raw_weather = reduce(lambda l, r: l.union(r), raw_weather_per_year)
###Output
_____no_output_____
###Markdown
Use a single year to keep execution plans small
###Code
raw_weather = spark.read.text(storageLocation + "/2003").withColumn("year", f.lit(2003))
###Output
_____no_output_____
###Markdown
Extract MeasurementsMeasurements were stored in a proprietary text based format, with some values at fixed positions. We need to extract these values with a simple `SELECT` statement.
###Code
weather = raw_weather.select(
f.col("year"),
f.substring(f.col("value"), 5, 6).alias("usaf"),
f.substring(f.col("value"), 11, 5).alias("wban"),
f.substring(f.col("value"), 16, 8).alias("date"),
f.substring(f.col("value"), 24, 4).alias("time"),
f.substring(f.col("value"), 42, 5).alias("report_type"),
f.substring(f.col("value"), 61, 3).alias("wind_direction"),
f.substring(f.col("value"), 64, 1).alias("wind_direction_qual"),
f.substring(f.col("value"), 65, 1).alias("wind_observation"),
(f.substring(f.col("value"), 66, 4).cast("float") / f.lit(10.0)).alias("wind_speed"),
f.substring(f.col("value"), 70, 1).alias("wind_speed_qual"),
(f.substring(f.col("value"), 88, 5).cast("float") / f.lit(10.0)).alias(
"air_temperature"
),
f.substring(f.col("value"), 93, 1).alias("air_temperature_qual"),
)
###Output
_____no_output_____
###Markdown
1.2 Load Station MetadataWe also need to load the weather station metadata containing information about the geo location, country, etc. of individual weather stations.
###Code
stations = spark.read.option("header", True).csv(storageLocation + "/isd-history")
###Output
_____no_output_____
###Markdown
2 PartitionsSince partitions are a concept at the RDD level, and a DataFrame does not expose them directly, we need to access the DataFrame's underlying RDD in order to inspect the number of partitions.
###Code
weather.rdd.getNumPartitions()
###Output
_____no_output_____
###Markdown
2.1 Repartitioning DataYou can repartition any DataFrame by specifying the target number of partitions and the partitioning columns. While it should be clear what *number of partitions* actually means, the term *partitioning columns* might require some explanation. Partitioning ColumnsExcept for the case when Spark initially reads data, all DataFrames are partitioned along *partitioning columns*, which means that all records having the same values in the corresponding columns will end up in the same partition. Spark implicitly performs such repartitioning as shuffle operations for `JOIN`s and grouped aggregations (except when a DataFrame already has the correct partitioning columns and number of partitions). Manual RepartitioningAs already mentioned, you can explicitly repartition a DataFrame using the `repartition()` method.
###Code
weather_rep = weather.repartition(10, weather["usaf"], weather["wban"])
weather_rep.rdd.getNumPartitions()
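# Added sanity check (a sketch, not part of the original notebook): after repartitioning
# on usaf/wban, every station id should end up in exactly one partition.
from pyspark.sql.functions import spark_partition_id
per_station = (
    weather_rep.withColumn("pid", spark_partition_id())
    .groupBy("usaf", "wban")
    .agg(f.countDistinct("pid").alias("num_partitions"))
)
per_station.filter("num_partitions > 1").count()  # expected to be 0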
###Output
_____no_output_____
###Markdown
3 Repartition & JoinsAs already mentioned, Spark implicitly performs a repartitioning aka shuffle for `JOIN` operations. Execution PlanSo let us inspect the execution plan of a `JOIN` operation.
###Code
result = weather.join(
stations,
(weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]),
)
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#69]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 11, 5)) AND isnotnull(substring(value#82, 5, 6)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 11, 5)), isnotnull(substring(value#82, 5, 6))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#78]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksAs we already discussed, each `JOIN` is executed with the following steps1. Filter `NULL` values (it's an inner join)2. Repartition DataFrame on the join columns with 200 partitions3. Sort each partition independently4. Perform a `SortMergeJoin` 3.1 Pre-partition data (first try)Now let us try what happens when we explicitly repartition the data before the join operation.
###Code
weather_rep = weather.repartition(10, weather["usaf"], weather["wban"])
weather_rep.rdd.getNumPartitions()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan and check whether our explicit repartitioning is actually picked up for the `SortMergeJoin`.
###Code
result = weather_rep.join(stations, ["usaf", "wban"])
result.explain()
###Output
== Physical Plan ==
*(5) Project [usaf#87, wban#88, 2003 AS year#84, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#963]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#972]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
ObservationsSpark removed our explicit repartition, since it doesn't help, and replaced it with the implicit repartition into 200 partitions. 3.2 Pre-partition and Cache (second try)Now let us try whether we can cache the shuffle (repartition) and sort operation. This is useful in cases where you have to perform multiple joins on the same set of columns, for example with different DataFrames.So let's simply repartition the `weather` DataFrame on the two columns `usaf` and `wban`.
###Code
weather_rep = weather.repartition(20, weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan. Ideally all the preparation work before the `SortMergeJoin` happens before the `cache` operation.
###Code
result = weather_rep.join(stations, ["usaf", "wban"])
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#550]
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- Exchange hashpartitioning(usaf#87, wban#88, 20), false, [id=#402]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#559]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksOuch, now we have *two* shuffle operations. The reason is that Spark will use the default number of partitions for the JOIN operation, but we cached a differently partitioned DataFrame. 3.3 Pre-partition and Cache (third try)Now let us try whether we can cache the shuffle (repartition) and sort operation. This is useful in cases where you have to perform multiple joins on the same set of columns, for example with different DataFrames.So let's simply repartition the `weather` DataFrame on the two columns `usaf` and `wban`. We also have to use 200 partitions, because this is what Spark will use for `JOIN` operations.
###Code
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan. Ideally all the preparation work before the `SortMergeJoin` happens before the `cache` operation.
###Code
result = weather_rep.join(stations, ["usaf", "wban"])
result.explain()
###Output
== Physical Plan ==
*(4) Project [usaf#87, wban#88, year#84, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(4) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(1) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#992]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(3) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#1029]
+- *(2) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(2) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksWe did not reach completely what we wanted. The `sort` and `filter` operation still occur after the cache. 3.4 Pre-partition and Cache (fourth try)We already partially achieved our goal of caching all preparational work of the `SortMergeJoin`, but the sorting was still preformed after the caching. So let's try to insert an appropriate sort operation.
###Code
# Release cache to simplify execution plan
weather_rep.unpersist()
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"]).orderBy(
weather["usaf"], weather["wban"]
)
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution Plan
###Code
result = weather_rep.join(stations, ["usaf", "wban"])
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#662]
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], true, 0
: +- Exchange rangepartitioning(usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST, 200), true, [id=#623]
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#622]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#671]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksWe actually created a worse situation: Now we have two sort operations! Definitely not what we wanted to have.So let's think for a moment: The `SortMergeJoin` requires that each partition is sorted, with the sorting applied after the repartitioning has occurred. The `orderBy` operation we used above will create a global order over all partitions (and thereby destroy all the repartition work immediately). So we need something else, which still keeps the current partitions but only sorts within each partition independently. 3.5 Pre-partition and Cache (final try)Fortunately Spark provides a `sortWithinPartitions` method, which does exactly what it sounds like.
###Code
# Release cache to simplify execution plan
weather_rep.unpersist()
weather_rep = weather.repartition(
200, weather["usaf"], weather["wban"]
).sortWithinPartitions(weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution Plan
###Code
result = weather_rep.join(
stations,
(weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]),
)
result.explain()
###Output
== Physical Plan ==
*(4) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#694]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(3) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#727]
+- *(2) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(2) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksThat looks really good. The filter operation is still executed after the cache, but there is no way to cache it such that Spark could reuse this information.So whenever you want to prepartition data, you need to execute the following steps:* repartition with the join columns and the default number of partitions* sortWithinPartitions with the join columns* finally cache the result (otherwise there is no benefit at all) Inspect WebUIWe can also inspect the WebUI and see how everything is executed. Phase 1: Build cache
###Code
result.count()
###Output
_____no_output_____
###Markdown
Phase 2: Use cache
###Code
result.count()
###Output
_____no_output_____
###Markdown
4 Repartition & AggregationsSimilar to `JOIN` operations, Spark also requires an appropriate partitioning in grouped aggregations. Again, we can use the same strategy and appropriately prepartition data in cases where multiple joins and aggregations are performed using the same columns. 4.1 Simple AggregationSo let's perform the usual aggregation (but this time without a previous `JOIN`) with groups defined by the station id (`usaf` and `wban`).
###Code
result = weather.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(
f.when(weather.air_temperature_qual == f.lit(1), weather.air_temperature)
).alias('min_temp'),
f.max(
f.when(weather.air_temperature_qual == f.lit(1), weather.air_temperature)
).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(2) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#779]
+- *(1) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
+- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
###Markdown
RemarksEach grouped aggregation is executed with the following steps:1. Perform partial aggregation (`HashAggregate`)2. Shuffle intermediate result (`Exchange hashpartitioning`)3. Perform final aggregation (`HashAggregate`) 4.2 Aggregation after repartitionNow let us perform the same aggregation, but this time let's use the prepartitioned weather data set `weather_rep` instead.
###Code
weather_rep = weather.repartition(87, weather["usaf"], weather["wban"])
weather_rep.unpersist()
result = weather_rep.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(
f.when(weather_rep.air_temperature_qual == f.lit(1), weather_rep.air_temperature)
).alias('min_temp'),
f.max(
f.when(weather_rep.air_temperature_qual == f.lit(1), weather_rep.air_temperature)
).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(2) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(2) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 87), false, [id=#391]
+- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
+- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
###Markdown
RemarksSpark obviously detects the correct partitioning of the `weather_rep` DataFrame. The sorting actually is not required, but does not hurt either (except for performance). Therefore only two steps are executed after the cache operation:1. Partial aggregation (`HashAggregate`)2. Final aggregation (`HashAggregate`)But note that although you saved a shuffle operation of partial aggregates, in most cases it is not advisable to prepartition data only for aggregations for the following reasons:* You could perform all aggregations in a single `groupBy` and `agg` chain (see the short sketch below)* In most cases the preaggregated data is significantly smaller than the original data, therefore the shuffle doesn't hurt that much 5 Interaction between Join, Aggregate & RepartitionNow we have seen two operations which require a shuffle of the data. Of course Spark is clever enough to avoid an additional shuffle operation in chains of `JOIN` and grouped aggregations, which use the same aggregation columns. 5.1 Aggregation after Join on same keySo let's see what happens with a grouped aggregation after a join operation.
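As referenced above, several aggregations can be computed in one pass over the data; here is a small sketch (the additional `avg` column is only an example, not part of the original notebook):
```python
# a single groupBy/agg chain -> one shuffle, no prepartitioning required
weather.groupBy("usaf", "wban").agg(
    f.min("air_temperature").alias("min_temp"),
    f.max("air_temperature").alias("max_temp"),
    f.avg("wind_speed").alias("avg_wind_speed"),
)
```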
###Code
joined = weather.join(
stations,
(weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]),
)
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'min_temp'
),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'max_temp'
),
)
result.explain()
###Output
== Physical Plan ==
*(5) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#840]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#849]
+- *(3) Project [USAF#128, WBAN#129]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
###Markdown
RemarksAs you can see, Spark performs a single shuffle operation. The order of operation is as follows:1. Filter `NULL` values (it's an inner join)2. Shuffle data on `usaf` and `wban`3. Sort partitions by `usaf` and `wban`4. Perform `SortMergeJoin`5. Perform partial aggregation `HashAggregate`6. Perform final aggregation `HashAggregate` 5.2 Aggregation after Join using repartitioned dataOf course we can also use the pre-repartitioned weather DataFrame. This will work as expected, Spark does not add any additional shuffle operation.
###Code
weather_rep = weather.repartition(84, weather["usaf"], weather["wban"])
joined = weather_rep.join(stations, ["usaf", "wban"])
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'min_temp'
),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'max_temp'
),
)
result.explain()
###Output
== Physical Plan ==
*(5) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#893]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#902]
+- *(3) Project [USAF#128, WBAN#129]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
###Markdown
Note that the explicit repartition has been removed by Spark - therefore it doesn't make any sense to `repartition` before a join operation. 5.3 Aggregation after Join with different keySo far we have only looked at join and grouping operations using the same keys. If we use different keys (for example the country) in both operations, we expect Spark to add an additional shuffle operation. Let's see...
###Code
joined = weather.join(stations, ["usaf", "wban"])
result = joined.groupBy(stations["ctry"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'min_temp'
),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'max_temp'
),
)
result.explain()
###Output
== Physical Plan ==
*(6) HashAggregate(keys=[ctry#131], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(ctry#131, 200), true, [id=#645]
+- *(5) HashAggregate(keys=[ctry#131], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [air_temperature#97, air_temperature_qual#98, CTRY#131]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#627]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#636]
+- *(3) Project [USAF#128, WBAN#129, CTRY#131]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,CTRY#131] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,CTRY:string>
###Markdown
5.4 Aggregation after Broadcast-Join If we use a broadcast join instead of a sort merge join, then we will have a shuffle operation for the aggregation again (since the broadcast join just avoids the shuffle). Let's verify that theory...
###Code
joined = weather.join(f.broadcast(stations), ["usaf", "wban"])
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'min_temp'
),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias(
'max_temp'
),
)
result.explain()
###Output
== Physical Plan ==
*(3) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#578]
+- *(2) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(2) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(2) BroadcastHashJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner, BuildRight
:- *(2) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(2) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true], input[1, string, true])), [id=#572]
+- *(1) Project [USAF#128, WBAN#129]
+- *(1) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
###Markdown
6 CoalesceThere is another use case for changing the number of partitions: Writing results to HDFS/S3/whatever. Per design Spark writes each partition into a separate file, and there is no way around that. But when partitions do not contain many records, this may not only be ugly, but also inefficient and might cause additional trouble. Specifically, HDFS is currently not designed to handle many small files, but prefers fewer large files instead.Therefore it is often desirable to reduce the number of partitions of a DataFrame just before writing the result to disk. You could perform this task by a `repartition` operation, but this is an expensive operation requiring an additional shuffle. Therefore Spark provides an additional method called `coalesce` which can be used to reduce the number of partitions without incurring an additional shuffle. Spark simply logically concatenates multiple partitions into new partitions. Inspect Number of PartitionsFor this example, we will use the `weather_rep` DataFrame, which contains exactly 200 partitions.
###Code
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"])
weather_rep.cache()
weather_rep.rdd.getNumPartitions()
###Output
_____no_output_____
###Markdown
6.1 Merge Partitions using coalesceIn order to reduce the number of partitions, we simply use the `coalesce` method.
###Code
weather_small = weather_rep.coalesce(16)
weather_small.explain()
weather_small.rdd.getNumPartitions()
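# For contrast (added note): an explicit repartition to 16 would insert an Exchange
# (a full shuffle) into the plan, which coalesce avoids:
#   weather_rep.repartition(16).explain()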
###Output
_____no_output_____
###Markdown
Inspect WebUI
###Code
weather_rep.count()
###Output
_____no_output_____
###Markdown
6.2 Saving filesWe already discussed that Spark writes a separate file per partition. So let's see the result when we write the `weather_rep` DataFrame containing 200 partitions. Write 200 Partitions
###Code
weather_rep.write.mode("overwrite").parquet("/tmp/weather_rep")
###Output
_____no_output_____
###Markdown
Inspect the ResultUsing a simple HDFS CLI util, we can inspect the result on HDFS.
###Code
%%bash
hdfs dfs -ls /tmp/weather_rep
###Output
Found 91 items
-rw-r--r-- 1 hadoop hadoop 0 2018-10-07 07:17 /tmp/weather_rep/_SUCCESS
-rw-r--r-- 1 hadoop hadoop 1337 2018-10-07 07:16 /tmp/weather_rep/part-00000-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 24241 2018-10-07 07:16 /tmp/weather_rep/part-00003-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 63340 2018-10-07 07:17 /tmp/weather_rep/part-00005-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 32695 2018-10-07 07:17 /tmp/weather_rep/part-00006-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 126661 2018-10-07 07:17 /tmp/weather_rep/part-00011-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 89610 2018-10-07 07:17 /tmp/weather_rep/part-00013-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73063 2018-10-07 07:17 /tmp/weather_rep/part-00014-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 70655 2018-10-07 07:17 /tmp/weather_rep/part-00016-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61512 2018-10-07 07:17 /tmp/weather_rep/part-00017-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 181909 2018-10-07 07:17 /tmp/weather_rep/part-00025-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 67545 2018-10-07 07:17 /tmp/weather_rep/part-00026-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 87515 2018-10-07 07:17 /tmp/weather_rep/part-00028-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76725 2018-10-07 07:17 /tmp/weather_rep/part-00031-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 16246 2018-10-07 07:17 /tmp/weather_rep/part-00032-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 68058 2018-10-07 07:17 /tmp/weather_rep/part-00033-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 84538 2018-10-07 07:17 /tmp/weather_rep/part-00034-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73316 2018-10-07 07:17 /tmp/weather_rep/part-00035-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 123655 2018-10-07 07:17 /tmp/weather_rep/part-00036-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 37920 2018-10-07 07:17 /tmp/weather_rep/part-00038-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 57775 2018-10-07 07:17 /tmp/weather_rep/part-00039-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 67351 2018-10-07 07:17 /tmp/weather_rep/part-00040-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 55996 2018-10-07 07:17 /tmp/weather_rep/part-00041-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 59784 2018-10-07 07:17 /tmp/weather_rep/part-00043-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 80773 2018-10-07 07:17 /tmp/weather_rep/part-00046-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 84986 2018-10-07 07:17 /tmp/weather_rep/part-00048-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 133418 2018-10-07 07:17 /tmp/weather_rep/part-00049-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 75265 2018-10-07 07:17 /tmp/weather_rep/part-00050-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60268 2018-10-07 07:17 /tmp/weather_rep/part-00053-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76993 2018-10-07 07:17 /tmp/weather_rep/part-00058-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 199806 2018-10-07 07:17 /tmp/weather_rep/part-00059-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 40241 2018-10-07 07:17 /tmp/weather_rep/part-00066-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 97540 2018-10-07 07:17 /tmp/weather_rep/part-00068-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 29008 2018-10-07 07:17 /tmp/weather_rep/part-00071-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73180 2018-10-07 07:17 /tmp/weather_rep/part-00078-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3393 2018-10-07 07:17 /tmp/weather_rep/part-00081-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 62817 2018-10-07 07:17 /tmp/weather_rep/part-00084-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3359 2018-10-07 07:17 /tmp/weather_rep/part-00088-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 34895 2018-10-07 07:17 /tmp/weather_rep/part-00092-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 21333 2018-10-07 07:17 /tmp/weather_rep/part-00096-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76141 2018-10-07 07:17 /tmp/weather_rep/part-00098-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 48870 2018-10-07 07:17 /tmp/weather_rep/part-00099-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 31191 2018-10-07 07:17 /tmp/weather_rep/part-00100-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61306 2018-10-07 07:17 /tmp/weather_rep/part-00102-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 145618 2018-10-07 07:17 /tmp/weather_rep/part-00104-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60617 2018-10-07 07:17 /tmp/weather_rep/part-00108-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78265 2018-10-07 07:17 /tmp/weather_rep/part-00111-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 31085 2018-10-07 07:17 /tmp/weather_rep/part-00112-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 90587 2018-10-07 07:17 /tmp/weather_rep/part-00113-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 59706 2018-10-07 07:17 /tmp/weather_rep/part-00114-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 22701 2018-10-07 07:17 /tmp/weather_rep/part-00118-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 66911 2018-10-07 07:17 /tmp/weather_rep/part-00119-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 161560 2018-10-07 07:17 /tmp/weather_rep/part-00122-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 79337 2018-10-07 07:17 /tmp/weather_rep/part-00124-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73118 2018-10-07 07:17 /tmp/weather_rep/part-00127-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 123673 2018-10-07 07:17 /tmp/weather_rep/part-00129-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 75963 2018-10-07 07:17 /tmp/weather_rep/part-00130-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 86810 2018-10-07 07:17 /tmp/weather_rep/part-00132-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 57741 2018-10-07 07:17 /tmp/weather_rep/part-00133-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3160 2018-10-07 07:17 /tmp/weather_rep/part-00134-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 124276 2018-10-07 07:17 /tmp/weather_rep/part-00137-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 68907 2018-10-07 07:17 /tmp/weather_rep/part-00141-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 37198 2018-10-07 07:17 /tmp/weather_rep/part-00143-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 80649 2018-10-07 07:17 /tmp/weather_rep/part-00145-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 12477 2018-10-07 07:17 /tmp/weather_rep/part-00150-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 52018 2018-10-07 07:17 /tmp/weather_rep/part-00151-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 79631 2018-10-07 07:17 /tmp/weather_rep/part-00152-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 90223 2018-10-07 07:17 /tmp/weather_rep/part-00154-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 135687 2018-10-07 07:17 /tmp/weather_rep/part-00156-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 142939 2018-10-07 07:17 /tmp/weather_rep/part-00157-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 63448 2018-10-07 07:17 /tmp/weather_rep/part-00158-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 144695 2018-10-07 07:17 /tmp/weather_rep/part-00163-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 56188 2018-10-07 07:17 /tmp/weather_rep/part-00164-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 163375 2018-10-07 07:17 /tmp/weather_rep/part-00165-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61759 2018-10-07 07:17 /tmp/weather_rep/part-00166-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 18942 2018-10-07 07:17 /tmp/weather_rep/part-00171-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 8239 2018-10-07 07:17 /tmp/weather_rep/part-00172-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78075 2018-10-07 07:17 /tmp/weather_rep/part-00173-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 69343 2018-10-07 07:17 /tmp/weather_rep/part-00174-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 86969 2018-10-07 07:17 /tmp/weather_rep/part-00178-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 30513 2018-10-07 07:17 /tmp/weather_rep/part-00179-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78521 2018-10-07 07:17 /tmp/weather_rep/part-00181-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 69376 2018-10-07 07:17 /tmp/weather_rep/part-00182-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 15683 2018-10-07 07:17 /tmp/weather_rep/part-00186-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 70658 2018-10-07 07:17 /tmp/weather_rep/part-00187-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 33030 2018-10-07 07:17 /tmp/weather_rep/part-00189-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 56766 2018-10-07 07:17 /tmp/weather_rep/part-00191-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78657 2018-10-07 07:17 /tmp/weather_rep/part-00192-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 50076 2018-10-07 07:17 /tmp/weather_rep/part-00195-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78921 2018-10-07 07:17 /tmp/weather_rep/part-00198-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60186 2018-10-07 07:17 /tmp/weather_rep/part-00199-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
###Markdown
Write 16 PartitionsNow let's write the `coalesce`d DataFrame and inspect the result on HDFS
###Code
weather_small.write.mode("overwrite").parquet("/tmp/weather_small")
###Output
_____no_output_____
###Markdown
Inspect Result
###Code
%%bash
hdfs dfs -ls /tmp/weather_small
###Output
Found 17 items
-rw-r--r-- 1 hadoop hadoop 0 2018-10-07 07:17 /tmp/weather_small/_SUCCESS
-rw-r--r-- 1 hadoop hadoop 290888 2018-10-07 07:17 /tmp/weather_small/part-00000-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 539188 2018-10-07 07:17 /tmp/weather_small/part-00001-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 490533 2018-10-07 07:17 /tmp/weather_small/part-00002-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 338415 2018-10-07 07:17 /tmp/weather_small/part-00003-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 460959 2018-10-07 07:17 /tmp/weather_small/part-00004-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 358779 2018-10-07 07:17 /tmp/weather_small/part-00005-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 394439 2018-10-07 07:17 /tmp/weather_small/part-00006-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 295745 2018-10-07 07:17 /tmp/weather_small/part-00007-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 274293 2018-10-07 07:17 /tmp/weather_small/part-00008-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 352943 2018-10-07 07:17 /tmp/weather_small/part-00009-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 405437 2018-10-07 07:17 /tmp/weather_small/part-00010-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 337051 2018-10-07 07:17 /tmp/weather_small/part-00011-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 521293 2018-10-07 07:17 /tmp/weather_small/part-00012-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 330085 2018-10-07 07:17 /tmp/weather_small/part-00013-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 365699 2018-10-07 07:17 /tmp/weather_small/part-00014-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 398450 2018-10-07 07:17 /tmp/weather_small/part-00015-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
###Markdown
Repartitioning DataFramesPartitions are a central concept in Apache Spark. They are used for distributing and parallelizing work onto different executors, which run on multiple servers. Determining PartitionsBasically Spark uses two different strategies for splitting up data into multiple partitions:1. When Spark loads data, the records are put into partitions along natural borders. For example, every HDFS block (and thereby every file) is represented by a different partition. Therefore the number of partitions of a DataFrame read from disk is solely determined by the number of HDFS blocks.2. Certain operations like `JOIN`s and aggregations require that records with the same key are physically in the same partition. This is achieved by a shuffle phase. The number of partitions is specified by the global Spark configuration variable `spark.sql.shuffle.partitions`, which has a default value of 200. Repartitioning DataSince partitions have a huge influence on the execution, Spark also allows you to explicitly change the partitioning scheme of a DataFrame. This makes sense only in a very limited (but still important) set of cases, which we will discuss in this notebook. Weather ExampleSurprise, surprise, we will again use the weather example and see what explicit repartitioning gives us.
###Code
from pyspark.sql import SparkSession
if 'spark' not in locals():
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","24G") \
.getOrCreate()
spark
###Output
_____no_output_____
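###Markdown
As a quick aside (this cell is our addition and was not part of the original run): the number of shuffle partitions mentioned above is an ordinary session setting, so it can be inspected and, if needed, changed via the runtime configuration.
###Code
# Default number of partitions used for shuffles (JOINs, grouped aggregations)
spark.conf.get("spark.sql.shuffle.partitions")

# For small local experiments it could be lowered, e.g.
# spark.conf.set("spark.sql.shuffle.partitions", 16)
# but we keep the default of 200, since the following sections rely on it
###Output
_____no_output_____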
###Markdown
Disable Automatic Broadcast JOINsIn order to see the shuffle operations, we need to prevent Spark from executing `JOIN` operations as broadcast joins. Again, this can be turned off by setting the Spark configuration variable `spark.sql.autoBroadcastJoinThreshold` to -1.
###Code
spark.conf.set("spark.sql.adaptive.enabled", False)
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", -1)
###Output
_____no_output_____
###Markdown
1 Load DataFirst we load the weather data, which consists of the measurement data and some station metadata.
###Code
storageLocation = "s3://dimajix-training/data/weather"
###Output
_____no_output_____
###Markdown
1.1 Load MeasurementsMeasurements are stored in multiple directories (one per year). But we will limit ourselves to a single year in the analysis to improve readability of execution plans.
###Code
import pyspark.sql.functions as f
from functools import reduce
# Read in all years, store them in a Python list
raw_weather_per_year = [spark.read.text(storageLocation + "/" + str(i)).withColumn("year", f.lit(i)) for i in range(2003,2015)]
# Union all years together
raw_weather = reduce(lambda l,r: l.union(r), raw_weather_per_year)
###Output
_____no_output_____
###Markdown
Use a single year to keep execution plans small
###Code
raw_weather = spark.read.text(storageLocation + "/2003").withColumn("year", f.lit(2003))
###Output
_____no_output_____
###Markdown
Extract MeasurementsMeasurements are stored in a proprietary text-based format, with some values at fixed positions. We need to extract these values with a simple `SELECT` statement.
###Code
weather = raw_weather.select(
f.col("year"),
f.substring(f.col("value"),5,6).alias("usaf"),
f.substring(f.col("value"),11,5).alias("wban"),
f.substring(f.col("value"),16,8).alias("date"),
f.substring(f.col("value"),24,4).alias("time"),
f.substring(f.col("value"),42,5).alias("report_type"),
f.substring(f.col("value"),61,3).alias("wind_direction"),
f.substring(f.col("value"),64,1).alias("wind_direction_qual"),
f.substring(f.col("value"),65,1).alias("wind_observation"),
(f.substring(f.col("value"),66,4).cast("float") / f.lit(10.0)).alias("wind_speed"),
f.substring(f.col("value"),70,1).alias("wind_speed_qual"),
(f.substring(f.col("value"),88,5).cast("float") / f.lit(10.0)).alias("air_temperature"),
f.substring(f.col("value"),93,1).alias("air_temperature_qual")
)
###Output
_____no_output_____
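###Markdown
To sanity-check the fixed-position parsing, one could peek at the schema and a few parsed rows. The following inspection cell is only a sketch added for illustration and was not executed here.
###Code
# Verify that the substring-based extraction produced sensible columns
weather.printSchema()
weather.select("usaf", "wban", "date", "time", "wind_speed", "air_temperature", "air_temperature_qual") \
    .show(5)
###Output
_____no_output_____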
###Markdown
1.2 Load Station MetadataWe also need to load the weather station metadata, which contains information about the geo location, country, etc. of the individual weather stations.
###Code
stations = spark.read \
.option("header", True) \
.csv(storageLocation + "/isd-history")
###Output
_____no_output_____
###Markdown
2 PartitionsSince partitions are a concept at the RDD level and the DataFrame API does not expose them directly, we need to access the underlying RDD in order to inspect the number of partitions.
###Code
weather.rdd.getNumPartitions()
###Output
_____no_output_____
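###Markdown
Beyond the bare partition count, it can be instructive to see how many records end up in each partition. The following inspection is a sketch added for illustration; `spark_partition_id()` returns the id of the partition a record currently lives in.
###Code
import pyspark.sql.functions as f

# Count the number of records per input partition (one partition per HDFS block / file)
weather.withColumn("partition_id", f.spark_partition_id()) \
    .groupBy("partition_id") \
    .count() \
    .orderBy("partition_id") \
    .show()
###Output
_____no_output_____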
###Markdown
2.1 Repartitioning DataYou can repartition any DataFrame by specifying the target number of partitions and the partitioning columns. While it should be clear what *number of partitions* actually means, the term *partitioning columns* might require some explanation. Partitioning ColumnsExcept for the case when Spark initially reads data, all DataFrames are partitioned along *partitioning columns*, which means that all records having the same values in the corresponding columns will end up in the same partition. Spark implicitly performs such repartitioning as shuffle operations for `JOIN`s and grouped aggregations (except when a DataFrame already has the correct partitioning columns and number of partitions). Manual RepartitioningAs already mentioned, you can explicitly repartition a DataFrame using the `repartition()` method.
###Code
weather_rep = weather.repartition(10, weather["usaf"], weather["wban"])
weather_rep.rdd.getNumPartitions()
###Output
_____no_output_____
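###Markdown
For reference, `repartition` can be called with only a number, only columns, or both. The small check below is our addition (not executed here) and verifies that hash partitioning really co-locates all records of a station in a single partition.
###Code
import pyspark.sql.functions as f

# The three supported call patterns of repartition()
by_number = weather.repartition(10)                   # round-robin into 10 partitions
by_columns = weather.repartition("usaf", "wban")      # hash partitioning into spark.sql.shuffle.partitions partitions
by_both = weather.repartition(10, "usaf", "wban")     # hash partitioning into 10 partitions

# Sanity check: every station (usaf, wban) should now live in exactly one partition,
# so the maximum number of distinct partition ids per station should be 1
by_both.withColumn("partition_id", f.spark_partition_id()) \
    .groupBy("usaf", "wban") \
    .agg(f.countDistinct("partition_id").alias("num_partitions")) \
    .agg(f.max("num_partitions").alias("max_partitions_per_station")) \
    .show()
###Output
_____no_output_____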
###Markdown
3 Repartition & JoinsAs already mentioned, Spark implicitly performs a repartitioning aka shuffle for `JOIN` operations. Execution PlanSo let us inspect the execution plan of a `JOIN` operation.
###Code
result = weather.join(stations, (weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]))
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#69]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 11, 5)) AND isnotnull(substring(value#82, 5, 6)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 11, 5)), isnotnull(substring(value#82, 5, 6))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#78]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksAs we already discussed, each `JOIN` is executed with the following steps:1. Filter `NULL` values (it's an inner join)2. Repartition the DataFrame on the join columns into 200 partitions3. Sort each partition independently4. Perform the `SortMergeJoin` 3.1 Pre-partition data (first try)Now let us see what happens when we explicitly repartition the data before the join operation.
###Code
weather_rep = weather.repartition(10, weather["usaf"], weather["wban"])
weather_rep.rdd.getNumPartitions()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan and check whether our explicit repartitioning is actually used as the preparation for the `SortMergeJoin`.
###Code
result = weather_rep.join(stations, ["usaf","wban"])
result.explain()
###Output
== Physical Plan ==
*(5) Project [usaf#87, wban#88, 2003 AS year#84, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#963]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#972]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
ObservationsSpark removed our explicit repartition, since it doesn't help, and replaced it with the implicit repartitioning into 200 partitions. 3.2 Pre-partition and Cache (second try)Now let us see whether we can cache the shuffle (repartition) and sort operations. This is useful in cases where you have to perform multiple joins on the same set of columns, for example with different DataFrames. So let's simply repartition the `weather` DataFrame on the two columns `usaf` and `wban`.
###Code
weather_rep = weather.repartition(20, weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan. Ideally all the preparation work before the `SortMergeJoin` happens before the `cache` operation.
###Code
result = weather_rep.join(stations, ["usaf","wban"])
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#550]
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- Exchange hashpartitioning(usaf#87, wban#88, 20), false, [id=#402]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#559]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksOuch, now we have *two* shuffle operations. The reason is that Spark will use the default number of partitions for the JOIN operation, but we cached a differently partitioned DataFrame. 3.3 Pre-partition and Cache (third try)Now let us see whether we can cache the shuffle (repartition) and sort operations. This is useful in cases where you have to perform multiple joins on the same set of columns, for example with different DataFrames. So let's simply repartition the `weather` DataFrame on the two columns `usaf` and `wban`. We also have to use 200 partitions, because this is what Spark will use for `JOIN` operations.
###Code
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution PlanLet's analyze the resulting execution plan. Ideally all the preparation work before the `SortMergeJoin` happens before the `cache` operation.
###Code
result = weather_rep.join(stations, ["usaf","wban"])
result.explain()
###Output
== Physical Plan ==
*(4) Project [usaf#87, wban#88, year#84, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(4) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(1) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#992]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(3) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#1029]
+- *(2) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(2) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksWe did not completely achieve what we wanted. The `sort` and `filter` operations still occur after the cache. 3.4 Pre-partition and Cache (fourth try)We already partially achieved our goal of caching all the preparatory work of the `SortMergeJoin`, but the sorting was still performed after the caching. So let's try to insert an appropriate sort operation.
###Code
# Release cache to simplify execution plan
weather_rep.unpersist()
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"]) \
.orderBy(weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution Plan
###Code
result = weather_rep.join(stations, ["usaf","wban"])
result.explain()
###Output
== Physical Plan ==
*(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#662]
: +- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], true, 0
: +- Exchange rangepartitioning(usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST, 200), true, [id=#623]
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#622]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#671]
+- *(3) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksWe actually created a worse situation: now we have two sort operations! Definitely not what we wanted. So let's think for a moment: the `SortMergeJoin` requires that each partition is sorted *after* the repartitioning has occurred. The `orderBy` operation we used above creates a global order over all partitions (and thereby immediately destroys the repartitioning work). So we need something else, which keeps the current partitions but sorts within each partition independently. 3.5 Pre-partition and Cache (final try)Fortunately Spark provides a `sortWithinPartitions` method, which does exactly what it sounds like.
###Code
# Release cache to simplify execution plan
weather_rep.unpersist()
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"]) \
.sortWithinPartitions(weather["usaf"], weather["wban"])
weather_rep.cache()
###Output
_____no_output_____
###Markdown
Execution Plan
###Code
result = weather_rep.join(stations, (weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]))
result.explain()
###Output
== Physical Plan ==
*(4) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(1) Filter (isnotnull(wban#88) AND isnotnull(usaf#87))
: +- InMemoryTableScan [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], [isnotnull(wban#88), isnotnull(usaf#87)]
: +- InMemoryRelation [year#84, usaf#87, wban#88, date#89, time#90, report_type#91, wind_direction#92, wind_direction_qual#93, wind_observation#94, wind_speed#95, wind_speed_qual#96, air_temperature#97, air_temperature_qual#98], StorageLevel(disk, memory, deserialized, 1 replicas)
: +- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), false, [id=#694]
: +- *(1) Project [2003 AS year#84, substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, substring(value#82, 16, 8) AS date#89, substring(value#82, 24, 4) AS time#90, substring(value#82, 42, 5) AS report_type#91, substring(value#82, 61, 3) AS wind_direction#92, substring(value#82, 64, 1) AS wind_direction_qual#93, substring(value#82, 65, 1) AS wind_observation#94, (cast(cast(substring(value#82, 66, 4) as float) as double) / 10.0) AS wind_speed#95, substring(value#82, 70, 1) AS wind_speed_qual#96, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(3) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#727]
+- *(2) Project [USAF#128, WBAN#129, STATION NAME#130, CTRY#131, STATE#132, ICAO#133, LAT#134, LON#135, ELEV(M)#136, BEGIN#137, END#138]
+- *(2) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129,STATION NAME#130,CTRY#131,STATE#132,ICAO#133,LAT#134,LON#135,ELEV(M)#136,BEGIN#137,END#138] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,STATION NAME:string,CTRY:string,STATE:string,ICAO:string,LAT:strin...
###Markdown
RemarksThat looks really good. The filter operation is still executed after the cache, but that is unavoidable, since this information cannot be stored in the cache in a way Spark could exploit. So whenever you want to prepartition data, you need to execute the following steps (a small helper function encapsulating them is sketched at the end of this section):* repartition with the join columns and the default number of partitions* sortWithinPartitions with the join columns* cache the result (otherwise there is no benefit at all) Inspect WebUIWe can also inspect the WebUI and see how everything is executed. Phase 1: Build cache
###Code
result.count()
###Output
_____no_output_____
###Markdown
Phase 2: Use cache
###Code
result.count()
###Output
_____no_output_____
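###Markdown
The three steps above can be wrapped into a small helper function. The name `prepartition` and its defaults are our own choice, so treat the following as a sketch rather than a fixed recipe.
###Code
def prepartition(df, key_columns, num_partitions=200):
    """Shuffle and sort a DataFrame once, so that repeated SortMergeJoins on
    key_columns can reuse the cached result. num_partitions should match
    spark.sql.shuffle.partitions (200 by default)."""
    prepared = df.repartition(num_partitions, *key_columns) \
                 .sortWithinPartitions(*key_columns)
    prepared.cache()
    return prepared

# Example usage: prepare the weather data for several joins on the station id
weather_prep = prepartition(weather, ["usaf", "wban"])
###Output
_____no_output_____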
###Markdown
4 Repartition & AggregationsSimilar to `JOIN` operations, Spark also requires an appropriate partitioning in grouped aggregations. Again, we can use the same strategy and appropriateky prepartition data in cases where multiple joins and aggregations are performed using the same columns. 4.1 Simple AggregationSo let's perform the usual aggregation (but this time without a previous `JOIN`) with groups defined by the station id (`usaf` and `wban`).
###Code
result = weather.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(weather.air_temperature_qual == f.lit(1), weather.air_temperature)).alias('min_temp'),
f.max(f.when(weather.air_temperature_qual == f.lit(1), weather.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(2) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#779]
+- *(1) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
+- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
###Markdown
RemarksEach grouped aggregation is executed with the following steps:1. Perform a partial aggregation (`HashAggregate`)2. Shuffle the intermediate results (`Exchange hashpartitioning`)3. Perform the final aggregation (`HashAggregate`) 4.2 Aggregation after repartitionNow let us perform the same aggregation, but this time let's use the prepartitioned weather data set `weather_rep` instead.
###Code
weather_rep = weather.repartition(87, weather["usaf"], weather["wban"])
# Make sure no cached data is reused, so that the execution plan below shows the full lineage
weather_rep.unpersist()
result = weather_rep.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(weather_rep.air_temperature_qual == f.lit(1), weather_rep.air_temperature)).alias('min_temp'),
f.max(f.when(weather_rep.air_temperature_qual == f.lit(1), weather_rep.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(2) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(2) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 87), false, [id=#391]
+- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
+- FileScan text [value#82] Batched: false, DataFilters: [], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
###Markdown
RemarksSpark obviously detects the correct partitioning of the `weather_rep` DataFrame. The sorting used for joins is not required for a hash aggregation, and indeed no sort shows up in the plan. Therefore only two steps are executed after the repartitioning:1. Partial aggregation (`HashAggregate`)2. Final aggregation (`HashAggregate`)But note that although you saved shuffling the partial aggregates, in most cases it is not advisable to prepartition data only for aggregations, for the following reasons:* You could perform all aggregations in a single `groupBy` and `agg` chain* In most cases the partially aggregated data is significantly smaller than the original data, so the extra shuffle doesn't hurt that much 5 Interaction between Join, Aggregate & RepartitionNow we have seen two operations which require a shuffle of the data. Of course Spark is clever enough to avoid an additional shuffle operation in chains of `JOIN`s and grouped aggregations that use the same key columns. 5.1 Aggregation after Join on same keySo let's see what happens with a grouped aggregation after a join operation.
###Code
joined = weather.join(stations, (weather["usaf"] == stations["usaf"]) & (weather["wban"] == stations["wban"]))
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('min_temp'),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(5) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(5) SortMergeJoin [usaf#87, wban#88], [usaf#128, wban#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#840]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [usaf#128 ASC NULLS FIRST, wban#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(usaf#128, wban#129, 200), true, [id=#849]
+- *(3) Project [USAF#128, WBAN#129]
+- *(3) Filter (isnotnull(usaf#128) AND isnotnull(wban#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
###Markdown
RemarksAs you can see, Spark does not perform an additional shuffle for the aggregation: the shuffle required for the `JOIN` already partitions the data by `usaf` and `wban`. The order of operations is as follows:1. Filter `NULL` values (it's an inner join)2. Shuffle the data on `usaf` and `wban`3. Sort the partitions by `usaf` and `wban`4. Perform the `SortMergeJoin`5. Perform the partial aggregation (`HashAggregate`)6. Perform the final aggregation (`HashAggregate`) 5.2 Aggregation after Join using repartitioned dataOf course we can also use the pre-repartitioned weather DataFrame. This works as expected: Spark does not add any additional shuffle operation.
###Code
weather_rep = weather.repartition(84, weather["usaf"], weather["wban"])
joined = weather_rep.join(stations, ["usaf","wban"])
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('min_temp'),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(5) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#893]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#902]
+- *(3) Project [USAF#128, WBAN#129]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
###Markdown
Note that the explicit repartition has been removed by Spark - therefore it doesn't make any sense to `repartition` before a join operation. 5.3 Aggregation after Join with different keySo far we have only looked at join and grouping operations using the same keys. If we use different keys (for example the country) in both operations, we expect Spark to add an additional shuffle operation. Let's see...
###Code
joined = weather.join(stations, ["usaf","wban"])
result = joined.groupBy(stations["ctry"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('min_temp'),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(6) HashAggregate(keys=[ctry#131], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(ctry#131, 200), true, [id=#645]
+- *(5) HashAggregate(keys=[ctry#131], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(5) Project [air_temperature#97, air_temperature_qual#98, CTRY#131]
+- *(5) SortMergeJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner
:- *(2) Sort [usaf#87 ASC NULLS FIRST, wban#88 ASC NULLS FIRST], false, 0
: +- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#627]
: +- *(1) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(1) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- *(4) Sort [USAF#128 ASC NULLS FIRST, WBAN#129 ASC NULLS FIRST], false, 0
+- Exchange hashpartitioning(USAF#128, WBAN#129, 200), true, [id=#636]
+- *(3) Project [USAF#128, WBAN#129, CTRY#131]
+- *(3) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129,CTRY#131] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string,CTRY:string>
###Markdown
5.4 Aggregation after Broadcast-Join If we use a broadcast join instead of a sort merge join, then we will have a shuffle operation for the aggregation again (since the broadcast join simply avoids the shuffle that the aggregation could otherwise reuse). Let's verify that theory...
###Code
joined = weather.join(f.broadcast(stations), ["usaf","wban"])
result = joined.groupBy(weather["usaf"], weather["wban"]).agg(
f.min(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('min_temp'),
f.max(f.when(joined.air_temperature_qual == f.lit(1), joined.air_temperature)).alias('max_temp'),
)
result.explain()
###Output
== Physical Plan ==
*(3) HashAggregate(keys=[usaf#87, wban#88], functions=[min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- Exchange hashpartitioning(usaf#87, wban#88, 200), true, [id=#578]
+- *(2) HashAggregate(keys=[usaf#87, wban#88], functions=[partial_min(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END), partial_max(CASE WHEN (cast(air_temperature_qual#98 as int) = 1) THEN air_temperature#97 END)])
+- *(2) Project [usaf#87, wban#88, air_temperature#97, air_temperature_qual#98]
+- *(2) BroadcastHashJoin [usaf#87, wban#88], [USAF#128, WBAN#129], Inner, BuildRight
:- *(2) Project [substring(value#82, 5, 6) AS usaf#87, substring(value#82, 11, 5) AS wban#88, (cast(cast(substring(value#82, 88, 5) as float) as double) / 10.0) AS air_temperature#97, substring(value#82, 93, 1) AS air_temperature_qual#98]
: +- *(2) Filter (isnotnull(substring(value#82, 5, 6)) AND isnotnull(substring(value#82, 11, 5)))
: +- FileScan text [value#82] Batched: false, DataFilters: [isnotnull(substring(value#82, 5, 6)), isnotnull(substring(value#82, 11, 5))], Format: Text, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/2003], PartitionFilters: [], PushedFilters: [], ReadSchema: struct<value:string>
+- BroadcastExchange HashedRelationBroadcastMode(List(input[0, string, true], input[1, string, true])), [id=#572]
+- *(1) Project [USAF#128, WBAN#129]
+- *(1) Filter (isnotnull(USAF#128) AND isnotnull(WBAN#129))
+- FileScan csv [USAF#128,WBAN#129] Batched: false, DataFilters: [isnotnull(USAF#128), isnotnull(WBAN#129)], Format: CSV, Location: InMemoryFileIndex[file:/dimajix/data/weather-noaa-sample/isd-history], PartitionFilters: [], PushedFilters: [IsNotNull(USAF), IsNotNull(WBAN)], ReadSchema: struct<USAF:string,WBAN:string>
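###Markdown
Since we are done comparing join strategies, the settings changed at the beginning of the notebook could now be restored. The values below are the usual defaults, but they may differ between Spark versions, so treat this cell as a sketch.
###Code
# Re-enable automatic broadcast joins (for tables below ~10 MB) and adaptive query execution
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 10 * 1024 * 1024)
spark.conf.set("spark.sql.adaptive.enabled", True)
###Output
_____no_output_____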
###Markdown
6 CoalesceThere is another use case for changing the number of partitions: writing results to HDFS, S3 or any other storage. By design Spark writes each partition into a separate file, and there is no way around that. But when partitions do not contain many records, this is not only ugly, but also inefficient and can cause additional trouble. In particular, HDFS is not designed to handle many small files and prefers fewer large files instead. Therefore it is often desirable to reduce the number of partitions of a DataFrame just before writing the result to disk. You could perform this task with a `repartition` operation, but that is expensive, since it requires an additional shuffle. Therefore Spark provides an additional method called `coalesce`, which can be used to reduce the number of partitions without incurring an additional shuffle: Spark simply concatenates multiple existing partitions logically into new, larger partitions. Inspect Number of PartitionsFor this example, we will use the `weather_rep` DataFrame, which contains exactly 200 partitions.
###Code
weather_rep = weather.repartition(200, weather["usaf"], weather["wban"])
weather_rep.cache()
weather_rep.rdd.getNumPartitions()
###Output
_____no_output_____
###Markdown
6.1 Merge Partitions using coalesceIn order to reduce the number of partitions, we simply use the `coalesce` method.
###Code
weather_small = weather_rep.coalesce(16)
weather_small.explain()
weather_small.rdd.getNumPartitions()
###Output
_____no_output_____
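###Markdown
One property of `coalesce` worth noting (a small illustrative check added here, not executed): since it never shuffles, it can only reduce the number of partitions. Requesting more than the current count leaves the partitioning unchanged, and a full `repartition` would be needed to actually increase it.
###Code
# coalesce cannot increase the number of partitions...
weather_small.coalesce(64).rdd.getNumPartitions()     # still 16

# ...a full shuffle via repartition is required for that
weather_small.repartition(64).rdd.getNumPartitions()  # 64
###Output
_____no_output_____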
###Markdown
Inspect WebUI
###Code
weather_rep.count()
###Output
_____no_output_____
###Markdown
6.2 Saving filesWe already discussed that Spark writes a separate file per partition. So let's see the result when we write the `weather_rep` DataFrame containing 200 partitions. Write 200 Partitions
###Code
weather_rep.write.mode("overwrite").parquet("/tmp/weather_rep")
###Output
_____no_output_____
###Markdown
Inspect the ResultUsing a simple HDFS CLI util, we can inspect the result on HDFS.
###Code
%%bash
hdfs dfs -ls /tmp/weather_rep
###Output
Found 91 items
-rw-r--r-- 1 hadoop hadoop 0 2018-10-07 07:17 /tmp/weather_rep/_SUCCESS
-rw-r--r-- 1 hadoop hadoop 1337 2018-10-07 07:16 /tmp/weather_rep/part-00000-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 24241 2018-10-07 07:16 /tmp/weather_rep/part-00003-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 63340 2018-10-07 07:17 /tmp/weather_rep/part-00005-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 32695 2018-10-07 07:17 /tmp/weather_rep/part-00006-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 126661 2018-10-07 07:17 /tmp/weather_rep/part-00011-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 89610 2018-10-07 07:17 /tmp/weather_rep/part-00013-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73063 2018-10-07 07:17 /tmp/weather_rep/part-00014-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 70655 2018-10-07 07:17 /tmp/weather_rep/part-00016-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61512 2018-10-07 07:17 /tmp/weather_rep/part-00017-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 181909 2018-10-07 07:17 /tmp/weather_rep/part-00025-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 67545 2018-10-07 07:17 /tmp/weather_rep/part-00026-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 87515 2018-10-07 07:17 /tmp/weather_rep/part-00028-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76725 2018-10-07 07:17 /tmp/weather_rep/part-00031-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 16246 2018-10-07 07:17 /tmp/weather_rep/part-00032-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 68058 2018-10-07 07:17 /tmp/weather_rep/part-00033-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 84538 2018-10-07 07:17 /tmp/weather_rep/part-00034-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73316 2018-10-07 07:17 /tmp/weather_rep/part-00035-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 123655 2018-10-07 07:17 /tmp/weather_rep/part-00036-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 37920 2018-10-07 07:17 /tmp/weather_rep/part-00038-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 57775 2018-10-07 07:17 /tmp/weather_rep/part-00039-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 67351 2018-10-07 07:17 /tmp/weather_rep/part-00040-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 55996 2018-10-07 07:17 /tmp/weather_rep/part-00041-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 59784 2018-10-07 07:17 /tmp/weather_rep/part-00043-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 80773 2018-10-07 07:17 /tmp/weather_rep/part-00046-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 84986 2018-10-07 07:17 /tmp/weather_rep/part-00048-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 133418 2018-10-07 07:17 /tmp/weather_rep/part-00049-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 75265 2018-10-07 07:17 /tmp/weather_rep/part-00050-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60268 2018-10-07 07:17 /tmp/weather_rep/part-00053-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76993 2018-10-07 07:17 /tmp/weather_rep/part-00058-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 199806 2018-10-07 07:17 /tmp/weather_rep/part-00059-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 40241 2018-10-07 07:17 /tmp/weather_rep/part-00066-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 97540 2018-10-07 07:17 /tmp/weather_rep/part-00068-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 29008 2018-10-07 07:17 /tmp/weather_rep/part-00071-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73180 2018-10-07 07:17 /tmp/weather_rep/part-00078-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3393 2018-10-07 07:17 /tmp/weather_rep/part-00081-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 62817 2018-10-07 07:17 /tmp/weather_rep/part-00084-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3359 2018-10-07 07:17 /tmp/weather_rep/part-00088-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 34895 2018-10-07 07:17 /tmp/weather_rep/part-00092-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 21333 2018-10-07 07:17 /tmp/weather_rep/part-00096-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 76141 2018-10-07 07:17 /tmp/weather_rep/part-00098-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 48870 2018-10-07 07:17 /tmp/weather_rep/part-00099-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 31191 2018-10-07 07:17 /tmp/weather_rep/part-00100-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61306 2018-10-07 07:17 /tmp/weather_rep/part-00102-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 145618 2018-10-07 07:17 /tmp/weather_rep/part-00104-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60617 2018-10-07 07:17 /tmp/weather_rep/part-00108-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78265 2018-10-07 07:17 /tmp/weather_rep/part-00111-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 31085 2018-10-07 07:17 /tmp/weather_rep/part-00112-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 90587 2018-10-07 07:17 /tmp/weather_rep/part-00113-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 59706 2018-10-07 07:17 /tmp/weather_rep/part-00114-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 22701 2018-10-07 07:17 /tmp/weather_rep/part-00118-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 66911 2018-10-07 07:17 /tmp/weather_rep/part-00119-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 161560 2018-10-07 07:17 /tmp/weather_rep/part-00122-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 79337 2018-10-07 07:17 /tmp/weather_rep/part-00124-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 73118 2018-10-07 07:17 /tmp/weather_rep/part-00127-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 123673 2018-10-07 07:17 /tmp/weather_rep/part-00129-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 75963 2018-10-07 07:17 /tmp/weather_rep/part-00130-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 86810 2018-10-07 07:17 /tmp/weather_rep/part-00132-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 57741 2018-10-07 07:17 /tmp/weather_rep/part-00133-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 3160 2018-10-07 07:17 /tmp/weather_rep/part-00134-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 124276 2018-10-07 07:17 /tmp/weather_rep/part-00137-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 68907 2018-10-07 07:17 /tmp/weather_rep/part-00141-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 37198 2018-10-07 07:17 /tmp/weather_rep/part-00143-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 80649 2018-10-07 07:17 /tmp/weather_rep/part-00145-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 12477 2018-10-07 07:17 /tmp/weather_rep/part-00150-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 52018 2018-10-07 07:17 /tmp/weather_rep/part-00151-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 79631 2018-10-07 07:17 /tmp/weather_rep/part-00152-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 90223 2018-10-07 07:17 /tmp/weather_rep/part-00154-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 135687 2018-10-07 07:17 /tmp/weather_rep/part-00156-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 142939 2018-10-07 07:17 /tmp/weather_rep/part-00157-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 63448 2018-10-07 07:17 /tmp/weather_rep/part-00158-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 144695 2018-10-07 07:17 /tmp/weather_rep/part-00163-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 56188 2018-10-07 07:17 /tmp/weather_rep/part-00164-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 163375 2018-10-07 07:17 /tmp/weather_rep/part-00165-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 61759 2018-10-07 07:17 /tmp/weather_rep/part-00166-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 18942 2018-10-07 07:17 /tmp/weather_rep/part-00171-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 8239 2018-10-07 07:17 /tmp/weather_rep/part-00172-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78075 2018-10-07 07:17 /tmp/weather_rep/part-00173-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 69343 2018-10-07 07:17 /tmp/weather_rep/part-00174-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 86969 2018-10-07 07:17 /tmp/weather_rep/part-00178-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 30513 2018-10-07 07:17 /tmp/weather_rep/part-00179-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78521 2018-10-07 07:17 /tmp/weather_rep/part-00181-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 69376 2018-10-07 07:17 /tmp/weather_rep/part-00182-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 15683 2018-10-07 07:17 /tmp/weather_rep/part-00186-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 70658 2018-10-07 07:17 /tmp/weather_rep/part-00187-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 33030 2018-10-07 07:17 /tmp/weather_rep/part-00189-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 56766 2018-10-07 07:17 /tmp/weather_rep/part-00191-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78657 2018-10-07 07:17 /tmp/weather_rep/part-00192-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 50076 2018-10-07 07:17 /tmp/weather_rep/part-00195-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 78921 2018-10-07 07:17 /tmp/weather_rep/part-00198-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 60186 2018-10-07 07:17 /tmp/weather_rep/part-00199-19014412-a1e6-4348-a41d-49986590b40b-c000.snappy.parquet
###Markdown
Write 16 PartitionsNow let's write the `coalesce`d DataFrame and inspect the result on HDFS
###Code
weather_small.write.mode("overwrite").parquet("/tmp/weather_small")
###Output
_____no_output_____
###Markdown
Inspect Result
###Code
%%bash
hdfs dfs -ls /tmp/weather_small
###Output
Found 17 items
-rw-r--r-- 1 hadoop hadoop 0 2018-10-07 07:17 /tmp/weather_small/_SUCCESS
-rw-r--r-- 1 hadoop hadoop 290888 2018-10-07 07:17 /tmp/weather_small/part-00000-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 539188 2018-10-07 07:17 /tmp/weather_small/part-00001-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 490533 2018-10-07 07:17 /tmp/weather_small/part-00002-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 338415 2018-10-07 07:17 /tmp/weather_small/part-00003-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 460959 2018-10-07 07:17 /tmp/weather_small/part-00004-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 358779 2018-10-07 07:17 /tmp/weather_small/part-00005-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 394439 2018-10-07 07:17 /tmp/weather_small/part-00006-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 295745 2018-10-07 07:17 /tmp/weather_small/part-00007-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 274293 2018-10-07 07:17 /tmp/weather_small/part-00008-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 352943 2018-10-07 07:17 /tmp/weather_small/part-00009-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 405437 2018-10-07 07:17 /tmp/weather_small/part-00010-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 337051 2018-10-07 07:17 /tmp/weather_small/part-00011-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 521293 2018-10-07 07:17 /tmp/weather_small/part-00012-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 330085 2018-10-07 07:17 /tmp/weather_small/part-00013-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 365699 2018-10-07 07:17 /tmp/weather_small/part-00014-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
-rw-r--r-- 1 hadoop hadoop 398450 2018-10-07 07:17 /tmp/weather_small/part-00015-31d2cdf1-532c-4549-b706-d040d0a0921b-c000.snappy.parquet
|
2018-05-27-coffee.ipynb | ###Markdown
Using Bayesian inference to pick Coffee ShopsI like going to coffee shops in Edinburgh. I have opinions of them: some are better for meeting a friend and others are totally not laptop-friendly.| | | ||-|-|-||  |  |  | In this post, I prototype a way to use my opinions to rank coffee shops using a really simple probabilistic model.  Ranking based on ComparisonsSince ranking 20+ coffee shops is not that much fun, I'll gather data as comparisons of pairs of coffee shops. For example, I'll tell my system that I think BrewLab is a lot more laptop-friendly than Wellington Coffee, but that BrewLab and Levels are equally laptop-friendly. Then I'll figure out which are the best and worst coffee shops for laptops using probabilistic modelling.Using pairs is convenient because it means I can borrow from approaches that rank players based on matches, like [this `pymc3` demo](https://docs.pymc.io/notebooks/rugby_analytics.html) or [Microsoft's post on TrueSkill](https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/). (We also had a homework assignment on the math of this problem.) Coffee shopsVery important is what I mean by the attributes for coffee shops. For now, I'm using four metrics defined below in `METRIC_LIST`. They are - **reading**: Chill and reading a book - **laptop**: Camp out and doing work on my laptop - **meet**: Meet up with someone - **group**: Grab a table and hang out with folks These metrics are completely independent. It's more like four copies of the same problem with data stored in one place.
###Code
import os
import json
from collections import namedtuple
import matplotlib.pyplot as plt
import numpy as np
import pymc3 as pm
import seaborn as sns
import yaml
# helper functions you can skip over :D
def hide_ticks(plot):
plot.axes.get_xaxis().set_visible(False)
plot.axes.get_yaxis().set_visible(False)
SAVE = True
def maybe_save_plot(filename):
if SAVE:
plt.tight_layout()
plt.savefig('images/' + filename, bbox_inches="tight")
###Output
_____no_output_____
###Markdown
Part 1: Data MetadataWhen I go into `pymc3` land, the data will end up in a big matrix. I'd like a way to associate indexes with static information about the coffee shop, like its name and location. This will come in handy later when I want to label plots or build an app.I really like using [YAML](http://yaml.org) for human-entered, computer-readable data. I entered a list of metadata about the coffee places I've visited in `data/coffee_metadata.yaml`. The `id` field is a human-readable unique identifier. - name: BrewLab id: brewlab location: university - name: Cult Espresso id: cult location: universityI also like using `namedtuples` to enforce schemas and catch typos when loading `yaml` or `json` files. I'll define a namedtuple `Metadata` for the above data and load the file. Then I'll make some useful data structures to map back and forth from id to index. The index in the data matrix will just be the position of the metadata dictionary in the `data/coffee_metadata.yaml` list. (I'm assuming `id` is unique and it won't ever change. When I save data, I'll associate data with a coffee shop by using its `id`, not its location in the matrix. I chose having a unique `id` field over using the index in the matrix because it's human-readable, which makes it easier to update incorrect comparisons, and it makes it trivial to add new coffee shops without changing matrix sizes.)
###Code
Metadata = namedtuple('Metadata', ['name', 'id', 'location'])
with open('data/coffee_metadata.yaml') as f:
raw_data = yaml.load(f)
metadata = [Metadata(**place) for place in raw_data]
# this will be useful for converting an index into the id and back
index_to_id = [d.id for d in metadata]
ids_set = set(index_to_id)
id_to_index = {name: i for i, name in enumerate(index_to_id)}
num_items = len(index_to_id)
# check ids are unique
assert len(index_to_id) == len(ids_set), 'duplicate id! {}'.format(
set(x for x in index_to_id if index_to_id.count(x) > 1) # thanks https://stackoverflow.com/questions/9835762/how-do-i-find-the-duplicates-in-a-list-and-create-another-list-with-them
)
###Output
_____no_output_____
###Markdown
Loading comparisonsI like to store data that humans shouldn't need to mess with in a file with lines of `json`. Worst-case, I can go in and delete or change a value, but I don't need to think about weird key orders that writing `yaml` has.A file showing two comparisons would look like this: {"metric": "meet", "first": "artisan_broughton", "last": "cairngorm_george", "weight": 0.5} {"metric": "meet", "first": "wellington", "last": "twelve_triangles_portobello", "weight": 0.5} Here's the idea: `metric` is which metric I'm trying to measure. In this case, `meeting` means where I like to meet up with someone. `first` and `last` are the two `id`s that should be in the big list of coffee shop `metadata` defined above. `weight` is how much better `first` is than `last`. It could be negative if I really want.
###Code
METRIC_LIST = ['meet', 'group', 'laptop', 'reading']
COMPARISON_FILE = 'data/coffee_comparisons.json'
Comparison = namedtuple('Comparison', ['metric', 'first', 'last', 'weight'])
metric_set = set(METRIC_LIST)
def load_comparisons():
if os.path.isfile(COMPARISON_FILE):
with open(COMPARISON_FILE) as f:
all_comparisons = [
Comparison(**json.loads(line))
for line in f
]
# make sure all metrics are legal!
for c in all_comparisons:
assert c.metric in metric_set, 'metric `{}` not in {}'.format(c.metric, metric_set)
assert c.first in ids_set, 'id `{}` not in {}'.format(c.first, ids_set)
assert c.last in ids_set, 'id `{}` not in {}'.format(c.last, ids_set)
print('Loaded {} comparisons'.format(len(all_comparisons)))
else:
print("No comparision data yet. No worries, I'll create one in a second.")
all_comparisons = []
return all_comparisons
all_comparisons = load_comparisons()
###Output
Loaded 72 comparisons
###Markdown
Initial comparisonsIf I have no data so far, I can begin by requesting a few comparisons between two randomly selected coffee shops.The code will show me a `metric` name, and the first coffee shop id and second coffee shop id. Then I'll type in a number between 1 and 5. Here's what the keys mean: - `1`: totally the first coffee shop - `2`: lean towards the first coffee shop - `3`: draw - `4`: lean towards the second coffee shop - `5`: totally the second coffee shop
###Code
def keypress_to_entry_comparison(keypress, id1, id2):
'''Convert according to the following requirement.
"1": 1.0 for id1
"2": 0.5 for id1
"3": 0.0 (a draw!)
"4": 0.5 for id2
"5": 1.0 for id2
'''
keypress = int(keypress)
if keypress < 1 or keypress > 5:
raise Exception("bad key!")
data = {
'weight': (3 - keypress) / 2,
'first': id1,
'last': id2,
}
    # swap if the keypress favoured id2, so `first` is always the preferred shop
    # and the stored weight is positive
    if data['weight'] < 0:
        data['first'], data['last'] = data['last'], data['first']
        data['weight'] = -data['weight']
return data
# The plan is to select two random `id`s. This block defines some reasons why
# we shouldn't bother asking about the pairs.
def already_have_comparison(matches, metric, a, b):
    '''Returns true if ids `a` and `b` have already been compared for this metric.
    `matches` is a list of Comparison namedtuples (callers pass in `all_comparisons`).'''
    all_comparison_pairs = set((c.first, c.last) for c in matches if c.metric == metric)
    return (a, b) in all_comparison_pairs or (b, a) in all_comparison_pairs
def is_comparison_to_self(id1, id2):
'''Returns true if `id1` and `id2` are the same'''
return id1 == id2
###Output
_____no_output_____
###Markdown
Inputting comparisonsThis part gets nostalgic. A lot of my first programs were about asking for data from the user. Over time I've moved to different approaches, like the YAML file above, but I think this way works better because the computer is choosing which items to show me. As an example session: laptop 1) lowdown 5) wellington? 1 laptop 1) black_medicine 5) levels? 4 laptop 1) artisan_stockbridge 5) castello? q I can type `q` to exit. Otherwise, I'll be asked to compare two coffee shops and should type a number between 1 and 5.(If you want to run this, set `SHOULD_ASK = True`. I turn it off by default so I can run the entire notebook without being asked for input.)
###Code
# Change this to True to start answering questions!
# It's False by default so you can run the full notebook
SHOULD_ASK = False
# Update METRICS_TO_CHECK with the list of attributes this should ask about
METRICS_TO_CHECK = METRIC_LIST
# Note! I'm using a for-loop here so it doesn't continue forever if it can't
# find any more matches. Normally, I'm planning to press `q` to quit.
MAX_CHECK = 100 if SHOULD_ASK else 0
for _ in range(MAX_CHECK):
# Eh, reload the comparisons so I don't ask for duplicates in this session
# Choose a random stat
metric = np.random.choice(METRICS_TO_CHECK)
# Choose two random ids
id1, id2 = np.random.choice(index_to_id, size=2, replace=False)
if is_comparison_to_self(id1, id2):
print('Duplicate!')
continue
if already_have_comparison(all_comparisons, metric, id1, id2):
print('Already have comparison of {} and {} for {}'.format(id1, id2, metric))
continue
keyboard_input = input('{} 1) {} 5) {}? '.format(metric, id1, id2))
if keyboard_input == 'q':
break
entry_comparison = keypress_to_entry_comparison(keyboard_input, id1, id2)
# modify entry_comparison to add stats to the entry comparison.
entry_comparison['metric'] = metric
# now append to the comparison file!
with open(COMPARISON_FILE, 'a') as f:
f.write(json.dumps(entry_comparison))
f.write('\n')
###Output
_____no_output_____
###Markdown
Aside: exploring comparisons dataI can check what comparisons I have data for. Since the comparison is symmetric, I'll mark both sides of the matrix.
###Code
all_comparisons = load_comparisons()
matches = {
k: np.zeros((num_items, num_items))
for k in METRIC_LIST
}
for c in all_comparisons:
# only use the most recent weight.
matches[c.metric][id_to_index[c.first], id_to_index[c.last]] = c.weight
matches[c.metric][id_to_index[c.last], id_to_index[c.first]] = -c.weight
fig, axs = plt.subplots(2, 2, figsize=(8, 8))
axs = axs.flatten()
for ax, (k, v) in zip(axs, matches.items()):
ax.imshow(v)
ax.set_title(k)
hide_ticks(ax)
plt.tight_layout()
maybe_save_plot('2018-05-27-pairwise-comparison')
plt.show()
###Output
_____no_output_____
###Markdown
I can also show a plot of the weights. This shows how my data is a little odd: I only ever store positive numbers, and they're rarely 0. I'm going to ignore it in this post, but I think it's something my non-prototype model should take into account.
###Code
# sorry i'm doing this four times
all_ratings = [
[
c.weight
for c in all_comparisons
if c.metric == metric
]
for metric in METRIC_LIST
]
plt.ylabel("weights")
sns.swarmplot(data=all_ratings, edgecolor="black", linewidth=.9)
plt.xticks(plt.xticks()[0], METRIC_LIST)
maybe_save_plot('2018-05-27-comparisons-box')
pass
###Output
_____no_output_____
###Markdown
Part 2: ModellingFor the rest of this notebook, I'll limit the scope to a single metric by setting `METRIC_NAME = laptop`. To explore other metrics, I can update that string and rerun the following cells. ModelUsing the `laptop` metric as an example, my model says there is some unknown `laptop` metric for each coffee shop. This is what I'm trying to learn. The metric is Gaussian distributed around some mean. Given enough data, it should approach the actual laptop-friendliness of the coffee shop. When I said that BrewLab was better for laptop work than Wellington Coffee, my model takes that to mean that BrewLab's `laptop` metric is probably higher than Wellington's. Specifically, the number between 0 and 1 that I gave it is the difference between BrewLab's mean and Wellington's mean.When I make a comparison, I might be a little off and the weights might be noisy. Maybe I'm more enthusiastic about Cairngorm over Press because I haven't been to Press recently. `pymc3` can take that into account too! I'll say my comparison weight is also noisily distributed around that difference (in the code below I use a Student-t likelihood, which behaves like a Gaussian with heavier tails).I'm basing my code on [this tutorial](https://docs.pymc.io/notebooks/rugby_analytics.html) but with the above model. Like the rugby model, I also use one shared `HalfStudentT` variable for the metric's standard deviations.For each comparison, I compute the difference between the "true_metric" for the first and second coffee shop, and say that should be around the score I actually gave it. WarningBecause the model is probably making terrible assumptions that I can't recognize yet, I'm mostly using this model to see how a `pymc3` model could fit into this project. I can always go back and improve the model!
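Written out compactly, reading the priors and likelihood straight from the code in the next cell, the model for a single metric is: $$\sigma \sim \text{HalfStudentT}(\nu=1,\ \text{sd}=3), \qquad m_s \sim \mathcal{N}(0,\ \sigma)\ \text{for each coffee shop } s,$$ $$w \sim \text{StudentT}(\nu=7,\ \mu=m_{\text{first}}-m_{\text{last}},\ \text{sd}=0.25)\ \text{for each recorded comparison weight } w.$$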
###Code
METRIC_NAME = 'laptop'
all_comparisons = load_comparisons()
FIRST = 0
LAST = 1
# newer numpy versions reject generator arguments to np.vstack, so build explicit lists
comparison_matrix = np.vstack([
    (id_to_index[c.first], id_to_index[c.last])
    for c in all_comparisons
    if c.metric == METRIC_NAME
])
weight_vector = np.vstack([
    c.weight
    for c in all_comparisons
    if c.metric == METRIC_NAME
])
print('using {} observations for {}'.format(weight_vector.shape[0], METRIC_NAME))
model = pm.Model()
with model:
metric_sd = pm.HalfStudentT('metric_sd', nu=1, sd=3)
true_metric = pm.Normal(METRIC_NAME, mu=0, sd=metric_sd, shape=num_items)
comparison = pm.Deterministic(
'comparison', (
true_metric[comparison_matrix[:, FIRST]] - true_metric[comparison_matrix[:, LAST]]
)
)
obs = pm.StudentT('obs', nu=7, mu=comparison, sd=0.25, observed=weight_vector)
trace = pm.sample(500, tune=1000, cores=3)
###Output
Loaded 72 comparisons
using 21 observations for laptop
###Markdown
`pymc3` gives a lot of tools to check how well sampling went. I'm still learning how they work, but nothing jumps out yet. - The sampler warned that the number of effective samples is small, but [they](https://discourse.pymc.io/t/the-number-of-effective-samples-is-smaller-than-25-for-some-parameters/1050) say that's probably okay. - Below I plot the `traceplot`. I told it to sample with 3 chains. There are three copies of each distribution which are all in roughly the same place.
###Code
pm.traceplot(trace)
maybe_save_plot('2018-05-27-traceplot')
plt.show()
###Output
_____no_output_____
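###Markdown
As an extra numeric check to go with the traceplot, `pm.summary` tabulates per-parameter effective sample sizes and R-hat values. This is a minimal sketch; the exact column names depend on the `pymc3` version.
###Code
# Tabular sampling diagnostics: effective sample size and R-hat per parameter.
# (Column names vary a bit across pymc3 versions.)
pm.summary(trace).head()
###Output
_____no_output_____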
###Markdown
Plotting*Watch out:* Now that I need to interpret the results, I'm at high risk of making embarrassing assumptions that I will use in the future as a "don't do it this way" :DLike [this tutorial](https://docs.pymc.io/notebooks/rugby_analytics.html), I'll plot the medians from the samples and use the Highest Posterior Density (HPD) as credible intervals. HPD finds the smallest range of the posterior distribution that contains 95% of its mass.This looks really cool!
###Code
# code for plots
def plot_hpd(trace, field_name, unsorted_labels):
unsorted_medians = pm.stats.quantiles(trace[field_name])[50]
unsorted_err = np.abs(pm.stats.hpd(trace[field_name]).T - unsorted_medians)
sorted_indices = np.argsort(unsorted_medians)
median = unsorted_medians[sorted_indices]
err = unsorted_err[:, sorted_indices]
labels = unsorted_labels[sorted_indices]
fig = plt.figure(figsize=(6, 6))
plt.errorbar(median, range(len(median)), xerr=err, fmt='o', label=field_name)
for i, label in enumerate(labels):
plt.text(np.min(median - err[0]) * 1.1, i, s=label, horizontalalignment='right', verticalalignment='center')
plt.title('{}'.format(field_name))
plt.axis('off')
return sorted_indices
plot_hpd(trace, METRIC_NAME, np.array(index_to_id))
maybe_save_plot('2018-05-27-ranking')
pass
###Output
_____no_output_____
###Markdown
I think I can take two coffee shops and ask in how many posterior samples one is better than the other. When the model doesn't have much of an opinion, it's close to 0.5. Otherwise it's closer to 1 or 0.
###Code
def plot_posterior(ax, trace, metric, coffee_id):
sns.kdeplot(trace[metric][:, id_to_index[coffee_id]], shade=True, label=coffee_id, ax=ax)
def compare_two(trace, metric, a, b):
results_a = trace[metric][:, id_to_index[a]]
results_b = trace[metric][:, id_to_index[b]]
return (results_a > results_b).mean()
pairs = [
('milkman', 'castello'),
('brewlab', 'levels'),
('levels', 'castello'),
]
fig, axs = plt.subplots(1, 3, figsize = (12, 4))
for ((a, b), ax) in zip(pairs, axs):
plot_posterior(ax, trace, METRIC_NAME, a)
plot_posterior(ax, trace, METRIC_NAME, b)
ax.set_title('{} > {} ({:0.3f})'.format(a, b, compare_two(trace, METRIC_NAME, a, b)))
maybe_save_plot('2018-05-27-posteriors')
plt.show()
###Output
_____no_output_____
###Markdown
Comparing model results to actual resultsAnother step I can take in checking that the model seems reasonable is to ask what it predicts each observation should be and plot it. This is asking whether the model can predict the data it learned from.It does miss two values, which is a little suspicious. It seems like it has trouble predicting that a comparison could be weighted as 0.
###Code
labels = np.array([
'{} > {}'.format(c.first, c.last, c.weight)
for c in all_comparisons
if c.metric == METRIC_NAME
])
with model:
ppc = pm.sample_ppc(trace, samples=500, model=model)
sorted_indexes = plot_hpd(ppc, 'obs', labels)
plt.plot(weight_vector[sorted_indexes], range(len(labels)), 'xk', label='true score')
plt.legend()
maybe_save_plot('2018-05-27-predictions')
plt.show()
###Output
100%|โโโโโโโโโโ| 500/500 [00:00<00:00, 2786.01it/s]
###Markdown
Part 3: Batched active learningI can use my attempts at quantifying uncertainty as a heuristic for choosing which questions to ask. I do this using `is_pretty_certain`. This is super cool! If the model is totally sure that Artisan is better for reading than Castello, it doesn't ask about it.Like before, update `SHOULD_ASK` if you want to try it out. Ways to make this even coolerThe thing is that I'll just train the model from scratch with this new data.In some special models like [TrueSkill](https://www.microsoft.com/en-us/research/project/trueskill-ranking-system/), you can update the uncertainty in closed-form.If this was a real product, there might be enough random questions to ask that it's fine to not always ask the most-useful question. If it's time-consuming to answer the question, it might be worth learning the model in between, using a different model that's easy to update with new information, or finding some middle ground.
###Code
def is_pretty_certain(trace, metric, a, b):
'''If the posteriors probably don't overlap much, we are pretty certain one of
them will win
'''
hpd = pm.stats.hpd(trace[metric])
a_low, a_high = hpd[id_to_index[a]]
b_low, b_high = hpd[id_to_index[b]]
return (a_low > b_high or b_low > a_high)
# Change this to True to start answering questions!
# It's False by default so you can run the full notebook
SHOULD_ASK = False
# Note! I'm using a for-loop here so it doesn't continue forever if it can't
# find any more matches. Normally, I'm planning to press `q` to quit.
MAX_CHECK = 100 if SHOULD_ASK else 0
for _ in range(MAX_CHECK):
# only ask about the active metric, since we only have the trace for this metric.
metric = METRIC_NAME
# Choose two random ids
id1, id2 = np.random.choice(index_to_id, size=2, replace=False)
if is_comparison_to_self(id1, id2):
print('Duplicate!')
continue
if already_have_comparison(all_comparisons, metric, id1, id2):
print('Already have match between {} {}'.format(id1, id2))
continue
if is_pretty_certain(trace, metric, id1, id2):
print('Pretty sure about {} {}'.format(id1, id2))
continue
keyboard_input = input('{} 1) {} 5) {}? '.format(metric, id1, id2))
if keyboard_input == 'q':
break
entry_comparison = keypress_to_entry_comparison(keyboard_input, id1, id2)
# modify entry_comparison to add stats to the entry comparison.
entry_comparison['metric'] = metric
# now append to the comparison file!
with open(COMPARISON_FILE, 'a') as f:
f.write(json.dumps(entry_comparison))
f.write('\n')
###Output
_____no_output_____ |
Weekly Sessions/Weekly_Session_7.ipynb | ###Markdown
Reservoir Sampling
###Code
import random
n = 20   # total number of items in the stream
k = 5    # reservoir size
input_array = [1,123,32,12,98,12,76, 34, 76, 9, 90, 89, 96, 59, 94, 91, 101, 199, 201, 899 ]
# Algorithm R: fill the reservoir with the first k items
output_array = list()
for i in range(k):
    output_array.append(input_array[i])
output_array
# For each later item j, keep it with probability k/(j+1) by replacing a
# uniformly chosen reservoir slot
for j in range(k, n):
    num = random.randint(0, j)  # uniform over 0..j inclusive
    if num < k:
        output_array[num] = input_array[j]
output_array
###Output
_____no_output_____
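###Markdown
As a quick sanity check that is not in the original session notes, repeating the sampling many times should keep every position of the stream with roughly the same frequency k/n (0.25 here). The sketch below samples stream indices so the duplicate values in `input_array` don't skew the counts; the trial count is an arbitrary choice.
###Code
from collections import Counter

# Repeat Algorithm R over indices 0..n-1 and count how often each index survives.
counts = Counter()
trials = 20000
for _ in range(trials):
    reservoir = list(range(k))          # start with the first k indices
    for j in range(k, n):
        pos = random.randint(0, j)      # uniform over 0..j inclusive
        if pos < k:
            reservoir[pos] = j
    counts.update(reservoir)
# every index should be kept in roughly k/n = 25% of the trials
print([round(counts[i] / trials, 2) for i in range(n)])
###Output
_____no_output_____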
###Markdown
UpSampling2D & Conv2DTranspose
###Code
import numpy as np
import tensorflow as tf
matrix = np.array([[1,2], [3,4]])
matrix = matrix.reshape((1,2,2,1))
model = tf.keras.models.Sequential()
# nearest-neighbour upsampling: each input value is repeated in a 3x3 block, so the 2x2 input becomes 6x6
model.add(tf.keras.layers.UpSampling2D(input_shape = (2,2,1), interpolation='nearest', size = (3,3)))
model.summary()
yhat = model.predict(matrix)
print(yhat.reshape((6,6)))
###Output
_____no_output_____
###Markdown
Conv2DTranspose
###Code
model = tf.keras.models.Sequential()
# transposed convolution with a 1x1 kernel and stride 2: each input value lands on a
# stride-2 grid of a 4x4 output, with zeros in between
model.add(tf.keras.layers.Conv2DTranspose(1, (1,1), strides = (2,2), input_shape = (2,2,1)))
model.summary()
matrix.shape
model.get_weights()
# fix the single kernel weight to 2 and the bias to 0, so each output value is just 2x the input value
weights = [np.array([[[[2]]]]), np.array([0])]
model.set_weights(weights)
yhat = model.predict(matrix)
print(yhat.reshape((4,4)))
###Output
_____no_output_____
###Markdown
Label Smoothing
###Code
import tensorflow as tf
import numpy as np
def label_smoother(labels, factor, num_classes):
    # soften one-hot targets: y_smooth = y * (1 - factor) + factor / num_classes
    new_labels = labels * (1 - factor) + factor/num_classes
    print(new_labels)
# e.g. [0, 0, 1] with factor 0.3 and 3 classes becomes [0.1, 0.1, 0.8]
label_smoother(np.array([0, 0, 1]), 0.3, 3)
y_true = [0, 1, 1]
y_pred = [0.8, 0.99, 0.01]
tf.keras.losses.binary_crossentropy(
y_true, y_pred,
from_logits=False,
label_smoothing=0.1
)
###Output
_____no_output_____ |
csharp-101/03-Searching Strings.ipynb | ###Markdown
Searching StringsWatch the full [C# 101 video](https://www.youtube.com/watch?v=JL30gSE3WaQ&list=PLdo4fOcmZ0oVxKLQCHpiUWun7vlJJvUiN&index=4) for this module. ContainsDoes your string contain another string within it? You can use `Contains` to find out!The `Contains` method returns a *boolean*. That's a type represented by the keyword `bool` that can hold two values: `true` or `false`. In this case, the method returns `true` when the sought string is found, and `false` when it's not found.> Run the following code.>> What else would or wouldn't be contained?>> Does case matter?>> Can you store the return value of the `Contains` method?> Remember the type of the result is a `bool`.
###Code
string songLyrics = "You say goodbye, and I say hello";
Console.WriteLine(songLyrics.Contains("goodbye"));
Console.WriteLine(songLyrics.Contains("greetings"));
###Output
True
False
###Markdown
StartsWith and EndsWith`StartsWith` and `EndsWith` are methods similar to `Contains`, but more specific. They tell you if a string starts with or ends with the string you're checking. It has the same structure as `Contains`, that is: `bigstring.StartsWith(substring)`> Now you try!> In the following code, try searching the line to see if it starts with "you" or "I".> Next, see if the code ends with "hello" or "goodbye".
###Code
string songLyrics = "You say goodbye, and I say hello";
###Output
_____no_output_____
###Markdown
PlaygroundPlay around with what you've learned! Here's some starting ideas:> How many lines say hello?> Which lines start with "You"?> Which lines end with "no"?> Think back to the previous module. Can you make some lines all uppercase and some lines all lowercase?> If you change case, how does that affect `Contains`?
###Code
Console.WriteLine("Playground");
String line1 = "You say yes, I say no";
String line2 = "You say stop and I say go, go, go";
String line3 = "Oh, no";
String line4 = "You say goodbye and I say hello";
String line5 = "Hello, hello";
String line6 = "I don't know why you say goodbye, I say hello";
###Output
Playground
|
notebooks/gridstack-plus.ipynb | ###Markdown
qgrid
###Code
np.random.seed(0)
n = 200
x = np.linspace(0.0, 10.0, n)
y = np.cumsum(np.random.randn(n))
df = pd.DataFrame({'x': x, 'y':y})
tableOut = qgrid.QgridWidget(df=df, show_toolbar=True)
tableOut
###Output
_____no_output_____
###Markdown
Gridstack test notebook
###Code
import ipywidgets as widgets
from IPython.display import display
import numpy as np
import pandas as pd
import qgrid
print("hello world; a button should appear to the right --->")
widgets.Button(description='a button')
###Output
_____no_output_____
###Markdown
some more markdownhello world<--- a number should appear to the left
###Code
1 + 2 + 3
###Output
_____no_output_____
###Markdown
other tests font-awesome
###Code
import ipywidgets as widgets
display(widgets.Button(description='search', icon='search'))
display(widgets.Button(description='retweet', icon='retweet', button_style='success'))
display(widgets.Button(description='filter', icon='filter', button_style='danger'))
###Output
_____no_output_____ |
Course1/Week4/01W4Assignment.ipynb | ###Markdown
Assignment 4 - Naive Machine Translation and LSHYou will now implement your first machine translation system and then you will see how locality sensitive hashing works. Let's get started by importing the required functions!If you are running this notebook on your local computer, don't forget to download the twitter samples and stopwords from nltk.```nltk.download('stopwords')nltk.download('twitter_samples')``` **NOTE**: The `Exercise xx` numbers in this assignment **_are inconsistent_** with the `UNQ_Cx` numbers. This assignment covers the following topics:- [1. The word embeddings data for English and French words](#1) - [1.1 Generate embedding and transform matrices](#1-1) - [Exercise 1](#ex-01)- [2. Translations](#2) - [2.1 Translation as linear transformation of embeddings](#2-1) - [Exercise 2](#ex-02) - [Exercise 3](#ex-03) - [Exercise 4](#ex-04) - [2.2 Testing the translation](#2-2) - [Exercise 5](#ex-05) - [Exercise 6](#ex-06) - [3. LSH and document search](#3) - [3.1 Getting the document embeddings](#3-1) - [Exercise 7](#ex-07) - [Exercise 8](#ex-08) - [3.2 Looking up the tweets](#3-2) - [3.3 Finding the most similar tweets with LSH](#3-3) - [3.4 Getting the hash number for a vector](#3-4) - [Exercise 9](#ex-09) - [3.5 Creating a hash table](#3-5) - [Exercise 10](#ex-10) - [3.6 Creating all hash tables](#3-6) - [Exercise 11](#ex-11)
###Code
import pdb
import pickle
import string
import time
import gensim
import matplotlib.pyplot as plt
import nltk
import numpy as np
import scipy
import sklearn
from gensim.models import KeyedVectors
from nltk.corpus import stopwords, twitter_samples
from nltk.tokenize import TweetTokenizer
from utils import (cosine_similarity, get_dict,
process_tweet)
from os import getcwd
# add folder, tmp2, from our local workspace containing pre-downloaded corpora files to nltk's data path
filePath = f"{getcwd()}/../tmp2/"
nltk.data.path.append(filePath)
###Output
_____no_output_____
###Markdown
1. The word embeddings data for English and French wordsWrite a program that translates English to French. The dataThe full dataset for English embeddings is about 3.64 gigabytes, and the French embeddings are about 629 megabytes. To prevent the Coursera workspace from crashing, we've extracted a subset of the embeddings for the words that you'll use in this assignment.If you want to run this on your local computer and use the full dataset, you can download the* English embeddings from Google code archive word2vec[look for GoogleNews-vectors-negative300.bin.gz](https://code.google.com/archive/p/word2vec/) * You'll need to unzip the file first.* and the French embeddings from[cross_lingual_text_classification](https://github.com/vjstark/crosslingual_text_classification). * in the terminal, type (in one line) `curl -o ./wiki.multi.fr.vec https://dl.fbaipublicfiles.com/arrival/vectors/wiki.multi.fr.vec` Then copy-paste the code below and run it.
```python
# Use this code to download and process the full dataset on your local computer
from gensim.models import KeyedVectors

en_embeddings = KeyedVectors.load_word2vec_format('./GoogleNews-vectors-negative300.bin', binary = True)
fr_embeddings = KeyedVectors.load_word2vec_format('./wiki.multi.fr.vec')

# loading the english to french dictionaries
en_fr_train = get_dict('en-fr.train.txt')
print('The length of the english to french training dictionary is', len(en_fr_train))
en_fr_test = get_dict('en-fr.test.txt')
print('The length of the english to french test dictionary is', len(en_fr_train))

english_set = set(en_embeddings.vocab)
french_set = set(fr_embeddings.vocab)
en_embeddings_subset = {}
fr_embeddings_subset = {}
french_words = set(en_fr_train.values())

for en_word in en_fr_train.keys():
    fr_word = en_fr_train[en_word]
    if fr_word in french_set and en_word in english_set:
        en_embeddings_subset[en_word] = en_embeddings[en_word]
        fr_embeddings_subset[fr_word] = fr_embeddings[fr_word]

for en_word in en_fr_test.keys():
    fr_word = en_fr_test[en_word]
    if fr_word in french_set and en_word in english_set:
        en_embeddings_subset[en_word] = en_embeddings[en_word]
        fr_embeddings_subset[fr_word] = fr_embeddings[fr_word]

pickle.dump( en_embeddings_subset, open( "en_embeddings.p", "wb" ) )
pickle.dump( fr_embeddings_subset, open( "fr_embeddings.p", "wb" ) )
```
The subset of dataTo do the assignment on the Coursera workspace, we'll use the subset of word embeddings.
###Code
en_embeddings_subset = pickle.load(open("en_embeddings.p", "rb"))
fr_embeddings_subset = pickle.load(open("fr_embeddings.p", "rb"))
###Output
_____no_output_____
###Markdown
Look at the data* en_embeddings_subset: the key is an English word, and the value is a 300-dimensional array, which is the embedding for that word.```'the': array([ 0.08007812, 0.10498047, 0.04980469, 0.0534668 , -0.06738281, ....```* fr_embeddings_subset: the key is a French word, and the value is a 300-dimensional array, which is the embedding for that word.```'la': array([-6.18250e-03, -9.43867e-04, -8.82648e-03, 3.24623e-02,...``` Load two dictionaries mapping the English to French words* A training dictionary* and a testing dictionary.
###Code
# loading the english to french dictionaries
en_fr_train = get_dict('en-fr.train.txt')
print('The length of the English to French training dictionary is', len(en_fr_train))
en_fr_test = get_dict('en-fr.test.txt')
print('The length of the English to French test dictionary is', len(en_fr_train))
###Output
The length of the English to French training dictionary is 5000
The length of the English to French test dictionary is 5000
###Markdown
Looking at the English French dictionary* `en_fr_train` is a dictionary where the key is the English word and the valueis the French translation of that English word.```{'the': 'la', 'and': 'et', 'was': 'รฉtait', 'for': 'pour',```* `en_fr_test` is similar to `en_fr_train`, but is a test set. We won't look at ituntil we get to testing. 1.1 Generate embedding and transform matrices Exercise 01: Translating English dictionary to French by using embeddingsYou will now implement a function `get_matrices`, which takes the loaded dataand returns matrices `X` and `Y`.Inputs:- `en_fr` : English to French dictionary- `en_embeddings` : English to embeddings dictionary- `fr_embeddings` : French to embeddings dictionaryReturns:- Matrix `X` and matrix `Y`, where each row in X is the word embedding for anenglish word, and the same row in Y is the word embedding for the Frenchversion of that English word. Figure 2 Use the `en_fr` dictionary to ensure that the ith row in the `X` matrixcorresponds to the ith row in the `Y` matrix. **Instructions**: Complete the function `get_matrices()`:* Iterate over English words in `en_fr` dictionary.* Check if the word have both English and French embedding. Hints Sets are useful data structures that can be used to check if an item is a member of a group. You can get words which are embedded into the language by using keys method. Keep vectors in `X` and `Y` sorted in list. You can use np.vstack() to merge them into the numpy matrix. numpy.vstack stacks the items in a list as rows in a matrix.
###Code
# UNQ_C1 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_matrices(en_fr, french_vecs, english_vecs):
"""
Input:
en_fr: English to French dictionary
french_vecs: French words to their corresponding word embeddings.
english_vecs: English words to their corresponding word embeddings.
Output:
        X: a matrix where each row is the embedding of an English word.
        Y: a matrix where each row is the embedding of the corresponding French word.
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# X_l and Y_l are lists of the english and french word embeddings
X_l = list()
Y_l = list()
# get the english words (the keys in the dictionary) and store in a set()
english_set = set(english_vecs.keys())
# get the french words (keys in the dictionary) and store in a set()
french_set = set(french_vecs.keys())
# store the french words that are part of the english-french dictionary (these are the values of the dictionary)
french_words = set(en_fr.values())
# loop through all english, french word pairs in the english french dictionary
for en_word, fr_word in en_fr.items():
# check that the french word has an embedding and that the english word has an embedding
if fr_word in french_set and en_word in english_set:
# get the english embedding
en_vec = english_vecs[en_word]
# get the french embedding
fr_vec = french_vecs[fr_word]
# add the english embedding to the list
X_l.append(en_vec)
# add the french embedding to the list
Y_l.append(fr_vec)
# stack the vectors of X_l into a matrix X
X = np.vstack(X_l)
# stack the vectors of Y_l into a matrix Y
Y = np.vstack(Y_l)
### END CODE HERE ###
return X, Y
###Output
_____no_output_____
###Markdown
Now we will use function `get_matrices()` to obtain sets `X_train` and `Y_train`of English and French word embeddings into the corresponding vector space models.
###Code
# UNQ_C2 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# getting the training set:
X_train, Y_train = get_matrices(
en_fr_train, fr_embeddings_subset, en_embeddings_subset)
###Output
_____no_output_____
###Markdown
2. Translations Figure 1 Write a program that translates English words to French words using word embeddings and vector space models. 2.1 Translation as linear transformation of embeddingsGiven dictionaries of English and French word embeddings you will create a transformation matrix `R`* Given an English word embedding, $\mathbf{e}$, you can multiply $\mathbf{eR}$ to get a new word embedding $\mathbf{f}$. * Both $\mathbf{e}$ and $\mathbf{f}$ are [row vectors](https://en.wikipedia.org/wiki/Row_and_column_vectors).* You can then compute the nearest neighbors to `f` in the french embeddings and recommend the word that is most similar to the transformed word embedding. Describing translation as the minimization problemFind a matrix `R` that minimizes the following equation. $$\arg \min _{\mathbf{R}}\| \mathbf{X R} - \mathbf{Y}\|_{F}\tag{1} $$ Frobenius normThe Frobenius norm of a matrix $A$ (assuming it is of dimension $m,n$) is defined as the square root of the sum of the absolute squares of its elements:$$\|\mathbf{A}\|_{F} \equiv \sqrt{\sum_{i=1}^{m} \sum_{j=1}^{n}\left|a_{i j}\right|^{2}}\tag{2}$$ Actual loss functionIn real-world applications, the Frobenius norm loss:$$\| \mathbf{XR} - \mathbf{Y}\|_{F}$$is often replaced by its squared value divided by $m$:$$ \frac{1}{m} \| \mathbf{X R} - \mathbf{Y} \|_{F}^{2}$$where $m$ is the number of examples (rows in $\mathbf{X}$).* The same R is found when using this loss function versus the original Frobenius norm.* The reason for taking the square is that it's easier to compute the gradient of the squared Frobenius.* The reason for dividing by $m$ is that we're more interested in the average loss per embedding than the loss for the entire training set. * The loss for the entire training set increases with more words (training examples), so taking the average helps us to track the average loss regardless of the size of the training set. [Optional] Detailed explanation why we use norm squared instead of the norm: Click for optional details The norm is always nonnegative (we're summing up absolute values), and so is the square. When we take the square of all non-negative (positive or zero) numbers, the order of the data is preserved. For example, if 3 > 2, 3^2 > 2^2 Using the norm or squared norm in gradient descent results in the same location of the minimum. Squaring cancels the square root in the Frobenius norm formula. Because of the chain rule, we would have to do more calculations if we had a square root in our expression for summation. Dividing the function value by a positive number doesn't change the optimum of the function, for the same reason as described above. We're interested in transforming English embeddings into French ones. Thus, it is more important to measure average loss per embedding than the loss for the entire dictionary (which increases as the number of words in the dictionary increases). Exercise 02: Implementing translation mechanism described in this section. Step 1: Computing the loss* The loss function will be the squared Frobenius norm of the difference between the matrix and its approximation, divided by the number of training examples $m$.* Its formula is:$$ L(X, Y, R)=\frac{1}{m}\sum_{i=1}^{m} \sum_{j=1}^{n}\left( a_{i j} \right)^{2}$$where $a_{i j}$ is the value in the $i$th row and $j$th column of the matrix $\mathbf{XR}-\mathbf{Y}$.
Instructions: complete the `compute_loss()` function* Compute the approximation of `Y` by matrix multiplying `X` and `R`* Compute difference `XR - Y`* Compute the squared Frobenius norm of the difference and divide it by $m$. Hints Useful functions: Numpy dot , Numpy sum, Numpy square, Numpy norm Be careful about which operation is elementwise and which operation is a matrix multiplication. Try to use matrix operations instead of the numpy norm function. If you choose to use norm function, take care of extra arguments and that it's returning loss squared, and not the loss itself.
###Code
# UNQ_C3 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_loss(X, Y, R):
'''
Inputs:
        X: a matrix of dimension (m,n) where the rows are the English embeddings.
        Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
        R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
    Outputs:
        L: a scalar - the value of the loss function for given X, Y and R.
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# m is the number of rows in X
m = X.shape[0]
# diff is XR - Y
diff = np.dot(X,R) -Y
# diff_squared is the element-wise square of the difference
diff_squared = np.square(diff)
# sum_diff_squared is the sum of the squared elements
sum_diff_squared = np.sum(diff_squared)
# loss i the sum_diff_squard divided by the number of examples (m)
loss = sum_diff_squared / m
### END CODE HERE ###
return loss
###Output
_____no_output_____
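###Markdown
A quick sanity check that is not part of the graded exercise: for a small random example, the element-wise sum of squares divided by $m$ matches numpy's built-in Frobenius norm, and `compute_loss` agrees with both. The shapes and seed below are arbitrary illustration values.
###Code
# Optional check: compute_loss vs. an explicit Frobenius-norm computation.
np.random.seed(1)
A = np.random.rand(4, 3)
B = np.random.rand(4, 3)
R_eye = np.eye(3)
manual = np.sum(np.square(np.dot(A, R_eye) - B)) / A.shape[0]
via_norm = np.linalg.norm(np.dot(A, R_eye) - B) ** 2 / A.shape[0]
print(np.isclose(manual, via_norm), np.isclose(compute_loss(A, B, R_eye), manual))
###Output
_____no_output_____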
###Markdown
Exercise 03 Step 2: Computing the gradient of the loss with respect to the transform matrix R* Calculate the gradient of the loss with respect to transform matrix `R`.* The gradient is a matrix that encodes how much a small change in `R` affects the change in the loss function.* The gradient gives us the direction in which we should decrease `R` to minimize the loss.* $m$ is the number of training examples (number of rows in $X$).* The formula for the gradient of the loss function $L(X,Y,R)$ is:$$\frac{d}{dR}L(X,Y,R)=\frac{d}{dR}\Big(\frac{1}{m}\| X R -Y\|_{F}^{2}\Big) = \frac{2}{m}X^{T} (X R - Y)$$**Instructions**: Complete the `compute_gradient` function below. Hints Transposing in numpy Finding out the dimensions of matrices in numpy Remember to use numpy.dot for matrix multiplication
###Code
# UNQ_C4 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def compute_gradient(X, Y, R):
'''
Inputs:
        X: a matrix of dimension (m,n) where the rows are the English embeddings.
        Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
R: a matrix of dimension (n,n) - transformation matrix from English to French vector space embeddings.
Outputs:
g: a matrix of dimension (n,n) - gradient of the loss function L for given X, Y and R.
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# m is the number of rows in X
m = X.shape[0]
# gradient is X^T(XR - Y) * 2/m
gradient = np.dot(X.T,(np.dot(X,R) - Y)) * 2 / m
### END CODE HERE ###
return gradient
###Output
_____no_output_____
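###Markdown
As an optional numerical check that is not part of the graded exercise, the analytic gradient can be compared against a central finite difference of `compute_loss` for one entry of $R$. The shapes, seed, and epsilon below are arbitrary illustration values.
###Code
# Optional check: compare compute_gradient to a finite difference of compute_loss.
np.random.seed(2)
Xs = np.random.rand(5, 3)
Ys = np.random.rand(5, 3)
Rs = np.random.rand(3, 3)
eps = 1e-6
analytic = compute_gradient(Xs, Ys, Rs)
R_plus = Rs.copy()
R_plus[0, 0] += eps
R_minus = Rs.copy()
R_minus[0, 0] -= eps
numeric_00 = (compute_loss(Xs, Ys, R_plus) - compute_loss(Xs, Ys, R_minus)) / (2 * eps)
print(np.isclose(analytic[0, 0], numeric_00))
###Output
_____no_output_____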
###Markdown
Step 3: Finding the optimal R with gradient descent algorithm Gradient descent[Gradient descent](https://ml-cheatsheet.readthedocs.io/en/latest/gradient_descent.html) is an iterative algorithm which is used in searching for the optimum of the function. * Earlier, we've mentioned that the gradient of the loss with respect to the matrix encodes how much a tiny change in some coordinate of that matrix affect the change of loss function.* Gradient descent uses that information to iteratively change matrix `R` until we reach a point where the loss is minimized. Training with a fixed number of iterationsMost of the time we iterate for a fixed number of training steps rather than iterating until the loss falls below a threshold. OPTIONAL: explanation for fixed number of iterations click here for detailed discussion You cannot rely on training loss getting low -- what you really want is the validation loss to go down, or validation accuracy to go up. And indeed - in some cases people train until validation accuracy reaches a threshold, or -- commonly known as "early stopping" -- until the validation accuracy starts to go down, which is a sign of over-fitting. Why not always do "early stopping"? Well, mostly because well-regularized models on larger data-sets never stop improving. Especially in NLP, you can often continue training for months and the model will continue getting slightly and slightly better. This is also the reason why it's hard to just stop at a threshold -- unless there's an external customer setting the threshold, why stop, where do you put the threshold? Stopping after a certain number of steps has the advantage that you know how long your training will take - so you can keep some sanity and not train for months. You can then try to get the best performance within this time budget. Another advantage is that you can fix your learning rate schedule -- e.g., lower the learning rate at 10% before finish, and then again more at 1% before finishing. Such learning rate schedules help a lot, but are harder to do if you don't know how long you're training. Pseudocode:1. Calculate gradient $g$ of the loss with respect to the matrix $R$.2. Update $R$ with the formula:$$R_{\text{new}}= R_{\text{old}}-\alpha g$$Where $\alpha$ is the learning rate, which is a scalar. Learning rate* The learning rate or "step size" $\alpha$ is a coefficient which decides how much we want to change $R$ in each step.* If we change $R$ too much, we could skip the optimum by taking too large of a step.* If we make only small changes to $R$, we will need many steps to reach the optimum.* Learning rate $\alpha$ is used to control those changes.* Values of $\alpha$ are chosen depending on the problem, and we'll use `learning_rate`$=0.0003$ as the default value for our algorithm. Exercise 04 Instructions: Implement `align_embeddings()` Hints Use the 'compute_gradient()' function to get the gradient in each step
###Code
# UNQ_C5 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def align_embeddings(X, Y, train_steps=100, learning_rate=0.0003):
'''
Inputs:
        X: a matrix of dimension (m,n) where the rows are the English embeddings.
        Y: a matrix of dimension (m,n) where the rows correspond to the French embeddings.
train_steps: positive int - describes how many steps will gradient descent algorithm do.
learning_rate: positive float - describes how big steps will gradient descent algorithm do.
Outputs:
R: a matrix of dimension (n,n) - the projection matrix that minimizes the F norm ||X R -Y||^2
'''
np.random.seed(129)
# the number of columns in X is the number of dimensions for a word vector (e.g. 300)
# R is a square matrix with length equal to the number of dimensions in th word embedding
R = np.random.rand(X.shape[1], X.shape[1])
for i in range(train_steps):
if i % 25 == 0:
print(f"loss at iteration {i} is: {compute_loss(X, Y, R):.4f}")
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# use the function that you defined to compute the gradient
gradient = compute_gradient(X,Y,R)
# update R by subtracting the learning rate times gradient
R -= learning_rate * gradient
### END CODE HERE ###
return R
# UNQ_C6 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Testing your implementation.
np.random.seed(129)
m = 10
n = 5
X = np.random.rand(m, n)
Y = np.random.rand(m, n) * .1
R = align_embeddings(X, Y)
###Output
loss at iteration 0 is: 3.7242
loss at iteration 25 is: 3.6283
loss at iteration 50 is: 3.5350
loss at iteration 75 is: 3.4442
###Markdown
**Expected Output:**```loss at iteration 0 is: 3.7242loss at iteration 25 is: 3.6283loss at iteration 50 is: 3.5350loss at iteration 75 is: 3.4442``` Calculate transformation matrix RUsing those the training set, find the transformation matrix $\mathbf{R}$ by calling the function `align_embeddings()`.**NOTE:** The code cell below will take a few minutes to fully execute (~3 mins)
###Code
# UNQ_C7 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
R_train = align_embeddings(X_train, Y_train, train_steps=400, learning_rate=0.8)
###Output
loss at iteration 0 is: 963.0146
loss at iteration 25 is: 97.8292
loss at iteration 50 is: 26.8329
loss at iteration 75 is: 9.7893
loss at iteration 100 is: 4.3776
loss at iteration 125 is: 2.3281
loss at iteration 150 is: 1.4480
loss at iteration 175 is: 1.0338
loss at iteration 200 is: 0.8251
loss at iteration 225 is: 0.7145
loss at iteration 250 is: 0.6534
loss at iteration 275 is: 0.6185
loss at iteration 300 is: 0.5981
loss at iteration 325 is: 0.5858
loss at iteration 350 is: 0.5782
loss at iteration 375 is: 0.5735
###Markdown
Expected Output```loss at iteration 0 is: 963.0146loss at iteration 25 is: 97.8292loss at iteration 50 is: 26.8329loss at iteration 75 is: 9.7893loss at iteration 100 is: 4.3776loss at iteration 125 is: 2.3281loss at iteration 150 is: 1.4480loss at iteration 175 is: 1.0338loss at iteration 200 is: 0.8251loss at iteration 225 is: 0.7145loss at iteration 250 is: 0.6534loss at iteration 275 is: 0.6185loss at iteration 300 is: 0.5981loss at iteration 325 is: 0.5858loss at iteration 350 is: 0.5782loss at iteration 375 is: 0.5735``` 2.2 Testing the translation k-Nearest neighbors algorithm[k-Nearest neighbors algorithm](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) * k-NN is a method which takes a vector as input and finds the other vectors in the dataset that are closest to it. * The 'k' is the number of "nearest neighbors" to find (e.g. k=2 finds the closest two neighbors). Searching for the translation embeddingSince we're approximating the translation function from English to French embeddings by a linear transformation matrix $\mathbf{R}$, most of the time we won't get the exact embedding of a French word when we transform embedding $\mathbf{e}$ of some particular English word into the French embedding space. * This is where $k$-NN becomes really useful! By using $1$-NN with $\mathbf{eR}$ as input, we can search for an embedding $\mathbf{f}$ (as a row) in the matrix $\mathbf{Y}$ which is the closest to the transformed vector $\mathbf{eR}$ Cosine similarityCosine similarity between vectors $u$ and $v$ calculated as the cosine of the angle between them.The formula is $$\cos(u,v)=\frac{u\cdot v}{\left\|u\right\|\left\|v\right\|}$$* $\cos(u,v)$ = $1$ when $u$ and $v$ lie on the same line and have the same direction.* $\cos(u,v)$ is $-1$ when they have exactly opposite directions.* $\cos(u,v)$ is $0$ when the vectors are orthogonal (perpendicular) to each other. Note: Distance and similarity are pretty much opposite things.* We can obtain distance metric from cosine similarity, but the cosine similarity can't be used directly as the distance metric. * When the cosine similarity increases (towards $1$), the "distance" between the two vectors decreases (towards $0$). * We can define the cosine distance between $u$ and $v$ as$$d_{\text{cos}}(u,v)=1-\cos(u,v)$$ **Exercise 05**: Complete the function `nearest_neighbor()`Inputs:* Vector `v`,* A set of possible nearest neighbors `candidates`* `k` nearest neighbors to find.* The distance metric should be based on cosine similarity.* `cosine_similarity` function is already implemented and imported for you. It's arguments are two vectors and it returns the cosine of the angle between them.* Iterate over rows in `candidates`, and save the result of similarities between current row and vector `v` in a python list. Take care that similarities are in the same order as row vectors of `candidates`.* Now you can use [numpy argsort]( https://docs.scipy.org/doc/numpy/reference/generated/numpy.argsort.htmlnumpy.argsort) to sort the indices for the rows of `candidates`. Hints numpy.argsort sorts values from most negative to most positive (smallest to largest) The candidates that are nearest to 'v' should have the highest cosine similarity To get the last element of a list 'tmp', the notation is tmp[-1:]
###Code
# UNQ_C8 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def nearest_neighbor(v, candidates, k=1):
"""
Input:
- v, the vector you are going find the nearest neighbor for
- candidates: a set of vectors where we will find the neighbors
- k: top k nearest neighbors to find
Output:
- k_idx: the indices of the top k closest vectors in sorted form
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
similarity_l = []
# for each candidate vector...
for row in candidates:
# get the cosine similarity
cos_similarity = cosine_similarity(v,row)
# append the similarity to the list
similarity_l.append(cos_similarity)
# sort the similarity list and get the indices of the sorted list
sorted_ids = np.argsort(similarity_l)[::-1]
# get the indices of the k most similar candidate vectors
k_idx = sorted_ids[0 : k][::-1]
### END CODE HERE ###
return k_idx
# UNQ_C9 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Test your implementation:
v = np.array([1, 0, 1])
candidates = np.array([[1, 0, 5], [-2, 5, 3], [2, 0, 1], [6, -9, 5], [9, 9, 9]])
print(candidates[nearest_neighbor(v, candidates, 3)])
###Output
[[9 9 9]
[1 0 5]
[2 0 1]]
###Markdown
**Expected Output**:`[[9 9 9] [1 0 5] [2 0 1]]` Test your translation and compute its accuracy**Exercise 06**: Complete the function `test_vocabulary` which takes in the English embedding matrix $X$, the French embedding matrix $Y$ and the $R$ matrix and returns the accuracy of translations from $X$ to $Y$ by $R$.* Iterate over transformed English word embeddings and check if the closest French word vector belongs to the French word that is the actual translation.* Obtain an index of the closest French embedding by using `nearest_neighbor` (with argument `k=1`), and compare it to the index of the English embedding you have just transformed.* Keep track of the number of times you get the correct translation.* Calculate accuracy as $$\text{accuracy}=\frac{\#(\text{correct predictions})}{\#(\text{total predictions})}$$
###Code
# UNQ_C10 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def test_vocabulary(X, Y, R):
'''
Input:
        X: a matrix where the rows are the English embeddings.
        Y: a matrix where the rows correspond to the French embeddings.
R: the transform matrix which translates word embeddings from
English to French word vector space.
Output:
        accuracy: the fraction of English words whose nearest French embedding is the correct translation
'''
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# The prediction is X times R
pred = np.dot(X,R)
# initialize the number correct to zero
num_correct = 0
# loop through each row in pred (each transformed embedding)
for i in range(len(pred)):
# get the index of the nearest neighbor of pred at row 'i'; also pass in the candidates in Y
pred_idx = nearest_neighbor(pred[i],Y, k=1)
        # if the index of the nearest neighbor equals the index i...
if pred_idx == i:
# increment the number correct by 1.
num_correct += 1
# accuracy is the number correct divided by the number of rows in 'pred' (also number of rows in X)
accuracy = num_correct / X.shape[0]
### END CODE HERE ###
return accuracy
###Output
_____no_output_____
###Markdown
Let's see how your translation mechanism is working on the unseen data:
###Code
X_val, Y_val = get_matrices(en_fr_test, fr_embeddings_subset, en_embeddings_subset)
# UNQ_C11 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
acc = test_vocabulary(X_val, Y_val, R_train) # this might take a minute or two
print(f"accuracy on test set is {acc:.3f}")
###Output
accuracy on test set is 0.557
###Markdown
**Expected Output**:```0.557```You managed to translate words from one language to another language without ever seeing them, with almost 56% accuracy, by using some basic linear algebra and learning a mapping of words from one language to another! 3. LSH and document searchIn this part of the assignment, you will implement a more efficient version of k-nearest neighbors using locality sensitive hashing.You will then apply this to document search.* Process the tweets and represent each tweet as a vector (represent a document with a vector embedding).* Use locality sensitive hashing and k nearest neighbors to find tweets that are similar to a given tweet.
###Code
# get the positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
all_tweets = all_positive_tweets + all_negative_tweets
###Output
_____no_output_____
###Markdown
3.1 Getting the document embeddings Bag-of-words (BOW) document modelsText documents are sequences of words.* The ordering of words makes a difference. For example, the sentences "Apple pie is better than pepperoni pizza." and "Pepperoni pizza is better than apple pie" have opposite meanings due to the word ordering.* However, for some applications, ignoring the order of words can allow us to train an efficient and still effective model.* This approach is called the Bag-of-words document model. Document embeddings* A document embedding is created by summing up the embeddings of all words in the document.* If we don't know the embedding of some word, we can ignore that word. **Exercise 07**: Complete the `get_document_embedding()` function.* The function `get_document_embedding()` encodes an entire document as a "document" embedding.* It takes in a document (as a string) and a dictionary, `en_embeddings`* It processes the document, and looks up the corresponding embedding of each word.* It then sums them up and returns the sum of all word vectors of that processed tweet. Hints You can handle missing words more easily by using the `get()` method of the python dictionary instead of the bracket notation (i.e. "[ ]"). See more about it here The default value for a missing word should be the zero vector. Numpy will broadcast a simple 0 scalar into a vector of zeros during the summation. Alternatively, skip the addition if a word is not in the dictionary. You can use your `process_tweet()` function which allows you to process the tweet. The function just takes in a tweet and returns a list of words.
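To illustrate the `.get()` hint above, here is a tiny sketch (the 3-dimensional embedding is made up for illustration, not the real 300-dimensional vectors) showing how a missing word contributes nothing to the sum thanks to broadcasting of the scalar default 0:
```python
import numpy as np

emb = {"happy": np.array([0.1, 0.2, 0.3])}   # toy embedding dictionary
total = np.zeros(3)
for w in ["happy", "unknownword"]:
    total += emb.get(w, 0)                   # the missing word adds scalar 0
print(total)                                 # [0.1 0.2 0.3]
```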
###Code
# UNQ_C12 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_document_embedding(tweet, en_embeddings):
'''
Input:
- tweet: a string
- en_embeddings: a dictionary of word embeddings
Output:
- doc_embedding: sum of all word embeddings in the tweet
'''
doc_embedding = np.zeros(300)
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# process the document into a list of words (process the tweet)
processed_doc = process_tweet(tweet)
for word in processed_doc:
# add the word embedding to the running total for the document embedding
doc_embedding += en_embeddings.get(word,0)
### END CODE HERE ###
return doc_embedding
# UNQ_C13 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# testing your function
custom_tweet = "RT @Twitter @chapagain Hello There! Have a great day. :) #good #morning http://chapagain.com.np"
tweet_embedding = get_document_embedding(custom_tweet, en_embeddings_subset)
tweet_embedding[-5:]
###Output
_____no_output_____
###Markdown
**Expected output**:```array([-0.00268555, -0.15378189, -0.55761719, -0.07216644, -0.32263184])``` Exercise 08 Store all document vectors into a dictionaryNow, let's store all the tweet embeddings into a dictionary.Implement `get_document_vecs()`
###Code
# UNQ_C14 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def get_document_vecs(all_docs, en_embeddings):
'''
Input:
- all_docs: list of strings - all tweets in our dataset.
- en_embeddings: dictionary with words as the keys and their embeddings as the values.
Output:
- document_vec_matrix: matrix of tweet embeddings.
- ind2Doc_dict: dictionary with indices of tweets in vecs as keys and their embeddings as the values.
'''
# the dictionary's key is an index (integer) that identifies a specific tweet
# the value is the document embedding for that document
ind2Doc_dict = {}
# this is list that will store the document vectors
document_vec_l = []
for i, doc in enumerate(all_docs):
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# get the document embedding of the tweet
doc_embedding = get_document_embedding(doc, en_embeddings)
# save the document embedding into the ind2Tweet dictionary at index i
ind2Doc_dict[i] = doc_embedding
# append the document embedding to the list of document vectors
document_vec_l.append(ind2Doc_dict[i])
### END CODE HERE ###
# convert the list of document vectors into a 2D array (each row is a document vector)
document_vec_matrix = np.vstack(document_vec_l)
return document_vec_matrix, ind2Doc_dict
document_vecs, ind2Tweet = get_document_vecs(all_tweets, en_embeddings_subset)
# UNQ_C15 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
print(f"length of dictionary {len(ind2Tweet)}")
print(f"shape of document_vecs {document_vecs.shape}")
###Output
length of dictionary 10000
shape of document_vecs (10000, 300)
###Markdown
Expected Output```length of dictionary 10000shape of document_vecs (10000, 300)``` 3.2 Looking up the tweetsNow you have a matrix of dimension (m,d) where `m` is the number of tweets (10,000) and `d` is the dimension of the embeddings (300). Now you will input a tweet, and use cosine similarity to see which tweet in our corpus is similar to your tweet.
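The cell below relies on `cosine_similarity` accepting a whole matrix of document vectors at once; a hedged sketch of such a vectorized version (assuming one document vector per row) could look like this:
```python
import numpy as np

def cosine_similarity_rows(doc_matrix, vec):
    # one similarity score per row of doc_matrix against the single vector vec
    dots = np.dot(doc_matrix, vec)                                   # shape (m,)
    norms = np.linalg.norm(doc_matrix, axis=1) * np.linalg.norm(vec)
    return dots / norms
```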
###Code
my_tweet = 'i am sad'
process_tweet(my_tweet)
tweet_embedding = get_document_embedding(my_tweet, en_embeddings_subset)
# UNQ_C16 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# this gives you a similar tweet as your input.
# this implementation is vectorized...
idx = np.argmax(cosine_similarity(document_vecs, tweet_embedding))
print(all_tweets[idx])
###Output
@zoeeylim sad sad sad kid :( it's ok I help you watch the match HAHAHAHAHA
###Markdown
Expected Output```@zoeeylim sad sad sad kid :( it's ok I help you watch the match HAHAHAHAHA``` 3.3 Finding the most similar tweets with LSHYou will now implement locality sensitive hashing (LSH) to identify the most similar tweet.* Instead of looking at all 10,000 vectors, you can just search a subset to find its nearest neighbors.Let's say your data points are plotted like this: Figure 3 You can divide the vector space into regions and search within one region for nearest neighbors of a given vector. Figure 4
###Code
N_VECS = len(all_tweets) # This many vectors.
N_DIMS = len(ind2Tweet[1]) # Vector dimensionality.
print(f"Number of vectors is {N_VECS} and each has {N_DIMS} dimensions.")
###Output
Number of vectors is 10000 and each has 300 dimensions.
###Markdown
Choosing the number of planes* Each plane divides the space into $2$ parts.* So $n$ planes divide the space into $2^{n}$ hash buckets.* We want to organize 10,000 document vectors into buckets so that every bucket has about $~16$ vectors.* For that we need $\frac{10000}{16}=625$ buckets.* We're interested in $n$, the number of planes, so that $2^{n} = 625$. Now, we can calculate $n=\log_{2}625 = 9.29 \approx 10$.
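As a quick sanity check of this arithmetic (purely illustrative, not part of the assignment code):
```python
import numpy as np

n_buckets = 10000 / 16                # 625.0 buckets for ~16 vectors each
n_planes = np.log2(n_buckets)         # ~9.29
print(n_buckets, n_planes, int(np.ceil(n_planes)))   # 625.0 9.287712379549449 10
```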
###Code
# The number of planes. We use log2(625) to have ~16 vectors/bucket.
N_PLANES = 10
# Number of times to repeat the hashing to improve the search.
N_UNIVERSES = 25
###Output
_____no_output_____
###Markdown
3.4 Getting the hash number for a vectorFor each vector, we need to get a unique number associated with that vector in order to assign it to a "hash bucket". Hyperplanes in vector spaces* In $3$-dimensional vector space, the hyperplane is a regular plane. In $2$ dimensional vector space, the hyperplane is a line.* Generally, the hyperplane is a subspace which has dimension $1$ lower than the original vector space.* A hyperplane is uniquely defined by its normal vector.* The normal vector $n$ of the plane $\pi$ is the vector to which all vectors in the plane $\pi$ are orthogonal (perpendicular in the $3$ dimensional case). Using Hyperplanes to split the vector spaceWe can use a hyperplane to split the vector space into $2$ parts.* All vectors whose dot product with a plane's normal vector is positive are on one side of the plane.* All vectors whose dot product with the plane's normal vector is negative are on the other side of the plane. Encoding hash buckets* For a vector, we can take its dot product with all the planes, then encode this information to assign the vector to a single hash bucket.* When the vector is pointing to the opposite side of the hyperplane from the normal vector, encode it by 0.* Otherwise, if the vector is on the same side as the normal vector, encode it by 1.* If you calculate the dot product with each plane in the same order for every vector, you've encoded each vector's unique hash ID as a binary number, like [0, 1, 1, ... 0]. Exercise 09: Implementing hash bucketsWe've initialized the hash table `hashes` for you. It is a list of `N_UNIVERSES` matrices, each of which describes its own hash table. Each matrix has `N_DIMS` rows and `N_PLANES` columns. Every column of that matrix is a `N_DIMS`-dimensional normal vector for one of the `N_PLANES` hyperplanes which are used for creating the buckets of the particular hash table.*Exercise*: Your task is to complete the function `hash_value_of_vector` which places vector `v` in the correct hash bucket.* First multiply your vector `v` with a corresponding plane. This will give you a vector of dimension $(1,\text{N_planes})$.* You will then convert every element in that vector to 0 or 1.* You create a hash vector by doing the following: if the element is negative, it becomes a 0, otherwise you change it to a 1.* You then compute the unique number for the vector by iterating over `N_PLANES`* Then you multiply $2^i$ times the corresponding bit (0 or 1).* You will then store that sum in the variable `hash_value`.**Instructions:** Create a hash for the vector in the function below. Use this formula:$$ hash = \sum_{i=0}^{N-1} \left( 2^{i} \times h_{i} \right) $$ Create the sets of planes* Create multiple (25) sets of planes (the planes that divide up the region).* You can think of these as 25 separate ways of dividing up the vector space with a different set of planes.* Each element of this list contains a matrix with 300 rows (the word vectors have 300 dimensions), and 10 columns (there are 10 planes in each "universe").
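As a small worked example of the hash formula with a toy sign vector (not tied to the real planes):
```python
import numpy as np

h = np.array([0, 1, 1, 0, 1])                              # example sign bits h_i
hash_value = sum((2 ** i) * h[i] for i in range(len(h)))   # 2 + 4 + 16 = 22
# the same thing written as a dot product with powers of two
assert hash_value == int(np.dot(h, 2 ** np.arange(len(h)))) == 22
```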
###Code
np.random.seed(0)
planes_l = [np.random.normal(size=(N_DIMS, N_PLANES))
for _ in range(N_UNIVERSES)]
###Output
_____no_output_____
###Markdown
Hints numpy.squeeze() removes unused dimensions from an array; for instance, it converts a (10,1) 2D array into a (10,) 1D array
###Code
# UNQ_C17 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
def hash_value_of_vector(v, planes):
"""Create a hash for a vector; hash_id says which random hash to use.
Input:
- v: vector of tweet. It's dimension is (1, N_DIMS)
- planes: matrix of dimension (N_DIMS, N_PLANES) - the set of planes that divide up the region
Output:
- res: a number which is used as a hash for your vector
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# for the set of planes,
# calculate the dot product between the vector and the matrix containing the planes
# remember that planes has shape (300, 10)
# The dot product will have the shape (1,10)
dot_product = np.dot(v,planes)
# get the sign of the dot product (1,10) shaped vector
sign_of_dot_product = np.sign(dot_product)
    # set h to be false (equivalent to 0 when used in operations) if the sign is negative,
    # and true (equivalent to 1) if the sign is positive; (1,10) shaped vector
    h = sign_of_dot_product >= 0
# remove extra un-used dimensions (convert this from a 2D to a 1D array)
h = np.squeeze(h)
# initialize the hash value to 0
hash_value = 0
n_planes = planes.shape[1]
for i in range(n_planes):
# increment the hash value by 2^i * h_i
hash_value += (2 ** i ) * h[i]
### END CODE HERE ###
# cast hash_value as an integer
hash_value = int(hash_value)
return hash_value
# UNQ_C18 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
np.random.seed(0)
idx = 0
planes = planes_l[idx] # get one 'universe' of planes to test the function
vec = np.random.rand(1, 300)
print(f" The hash value for this vector,",
f"and the set of planes at index {idx},",
f"is {hash_value_of_vector(vec, planes)}")
###Output
The hash value for this vector, and the set of planes at index 0, is 768
###Markdown
Expected Output```The hash value for this vector, and the set of planes at index 0, is 768``` 3.5 Creating a hash table Exercise 10Given that you have a unique number for each vector (or tweet), You now want to create a hash table. You need a hash table, so that given a hash_id, you can quickly look up the corresponding vectors. This allows you to reduce your search by a significant amount of time. We have given you the `make_hash_table` function, which maps the tweet vectors to a bucket and stores the vector there. It returns the `hash_table` and the `id_table`. The `id_table` allows you know which vector in a certain bucket corresponds to what tweet. Hints a dictionary comprehension, similar to a list comprehension, looks like this: `{i:0 for i in range(10)}`, where the key is 'i' and the value is zero for all key-value pairs.
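A quick note on the dictionary-comprehension hint: it creates an independent empty list per bucket, unlike `dict.fromkeys`, which would make every key share one list — a small sketch:
```python
buckets = {i: [] for i in range(4)}
buckets[0].append("a")
print(buckets)                      # {0: ['a'], 1: [], 2: [], 3: []}

shared = dict.fromkeys(range(4), [])
shared[0].append("a")
print(shared)                       # every key shows ['a'] because the list is shared
```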
###Code
# UNQ_C19 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# This is the code used to create a hash table: feel free to read over it
def make_hash_table(vecs, planes):
"""
Input:
- vecs: list of vectors to be hashed.
- planes: the matrix of planes in a single "universe", with shape (embedding dimensions, number of planes).
Output:
- hash_table: dictionary - keys are hashes, values are lists of vectors (hash buckets)
- id_table: dictionary - keys are hashes, values are list of vectors id's
(it's used to know which tweet corresponds to the hashed vector)
"""
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# number of planes is the number of columns in the planes matrix
num_of_planes = planes.shape[1]
# number of buckets is 2^(number of planes)
num_buckets = 2 ** num_of_planes
# create the hash table as a dictionary.
# Keys are integers (0,1,2.. number of buckets)
# Values are empty lists
hash_table = {i : [] for i in range(num_buckets)}
# create the id table as a dictionary.
# Keys are integers (0,1,2... number of buckets)
# Values are empty lists
id_table = {i : [] for i in range(num_buckets)}
# for each vector in 'vecs'
for i, v in enumerate(vecs):
# calculate the hash value for the vector
h = hash_value_of_vector(v, planes)
# store the vector into hash_table at key h,
# by appending the vector v to the list at key h
hash_table[h].append(v)
# store the vector's index 'i' (each document is given a unique integer 0,1,2...)
# the key is the h, and the 'i' is appended to the list at key h
id_table[h].append(i)
### END CODE HERE ###
return hash_table, id_table
# UNQ_C20 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
np.random.seed(0)
planes = planes_l[0] # get one 'universe' of planes to test the function
vec = np.random.rand(1, 300)
tmp_hash_table, tmp_id_table = make_hash_table(document_vecs, planes)
print(f"The hash table at key 0 has {len(tmp_hash_table[0])} document vectors")
print(f"The id table at key 0 has {len(tmp_id_table[0])}")
print(f"The first 5 document indices stored at key 0 of are {tmp_id_table[0][0:5]}")
###Output
The hash table at key 0 has 3 document vectors
The id table at key 0 has 3
The first 5 document indices stored at key 0 of are [3276, 3281, 3282]
###Markdown
Expected output```The hash table at key 0 has 3 document vectorsThe id table at key 0 has 3The first 5 document indices stored at key 0 of are [3276, 3281, 3282]``` 3.6 Creating all hash tablesYou can now hash your vectors and store them in a hash table thatwould allow you to quickly look up and search for similar vectors.Run the cell below to create the hashes. By doing so, you end up havingseveral tables which have all the vectors. Given a vector, you thenidentify the buckets in all the tables. You can then iterate over thebuckets and consider much fewer vectors. The more buckets you use, themore accurate your lookup will be, but also the longer it will take.
###Code
# Creating the hashtables
hash_tables = []
id_tables = []
for universe_id in range(N_UNIVERSES): # there are 25 hashes
print('working on hash universe #:', universe_id)
planes = planes_l[universe_id]
hash_table, id_table = make_hash_table(document_vecs, planes)
hash_tables.append(hash_table)
id_tables.append(id_table)
###Output
working on hash universe #: 0
working on hash universe #: 1
working on hash universe #: 2
working on hash universe #: 3
working on hash universe #: 4
working on hash universe #: 5
working on hash universe #: 6
working on hash universe #: 7
working on hash universe #: 8
working on hash universe #: 9
working on hash universe #: 10
working on hash universe #: 11
working on hash universe #: 12
working on hash universe #: 13
working on hash universe #: 14
working on hash universe #: 15
working on hash universe #: 16
working on hash universe #: 17
working on hash universe #: 18
working on hash universe #: 19
working on hash universe #: 20
working on hash universe #: 21
working on hash universe #: 22
working on hash universe #: 23
working on hash universe #: 24
###Markdown
Approximate K-NN Exercise 11Implement approximate K nearest neighbors using locality sensitive hashing,to search for documents that are similar to a given document at theindex `doc_id`. Inputs* `doc_id` is the index into the document list `all_tweets`.* `v` is the document vector for the tweet in `all_tweets` at index `doc_id`.* `planes_l` is the list of planes (the global variable created earlier).* `k` is the number of nearest neighbors to search for.* `num_universes_to_use`: to save time, we can use fewer than the totalnumber of available universes. By default, it's set to `N_UNIVERSES`,which is $25$ for this assignment.The `approximate_knn` function finds a subset of candidate vectors thatare in the same "hash bucket" as the input vector 'v'. Then it performsthe usual k-nearest neighbors search on this subset (instead of searchingthrough all 10,000 tweets). Hints There are many dictionaries used in this function. Try to print out planes_l, hash_tables, id_tables to understand how they are structured, what the keys represent, and what the values contain. To remove an item from a list, use `.remove()` To append to a list, use `.append()` To add to a set, use `.add()`
###Code
# UNQ_C21 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# This is the code used to do the fast nearest neighbor search. Feel free to go over it
def approximate_knn(doc_id, v, planes_l, k=1, num_universes_to_use=N_UNIVERSES):
"""Search for k-NN using hashes."""
assert num_universes_to_use <= N_UNIVERSES
# Vectors that will be checked as possible nearest neighbor
vecs_to_consider_l = list()
# list of document IDs
ids_to_consider_l = list()
# create a set for ids to consider, for faster checking if a document ID already exists in the set
ids_to_consider_set = set()
# loop through the universes of planes
for universe_id in range(num_universes_to_use):
# get the set of planes from the planes_l list, for this particular universe_id
planes = planes_l[universe_id]
# get the hash value of the vector for this set of planes
hash_value = hash_value_of_vector(v, planes)
# get the hash table for this particular universe_id
hash_table = hash_tables[universe_id]
# get the list of document vectors for this hash table, where the key is the hash_value
document_vectors_l = hash_table[hash_value]
# get the id_table for this particular universe_id
id_table = id_tables[universe_id]
# get the subset of documents to consider as nearest neighbors from this id_table dictionary
new_ids_to_consider = id_table[hash_value]
### START CODE HERE (REPLACE INSTANCES OF 'None' with your code) ###
# remove the id of the document that we're searching
if doc_id in new_ids_to_consider:
new_ids_to_consider.remove(doc_id)
print(f"removed doc_id {doc_id} of input vector from new_ids_to_search")
# loop through the subset of document vectors to consider
for i, new_id in enumerate(new_ids_to_consider):
# if the document ID is not yet in the set ids_to_consider...
if new_id not in ids_to_consider_set:
# access document_vectors_l list at index i to get the embedding
# then append it to the list of vectors to consider as possible nearest neighbors
document_vector_at_i = document_vectors_l[i]
vecs_to_consider_l.append(document_vector_at_i)
# append the new_id (the index for the document) to the list of ids to consider
ids_to_consider_l.append(new_id)
                # also add the new_id to the set of ids to consider
                # (the set makes the membership check above fast; note that set.add
                # returns None, so there is nothing meaningful to compare it against)
                ids_to_consider_set.add(new_id)
### END CODE HERE ###
# Now run k-NN on the smaller set of vecs-to-consider.
print("Fast considering %d vecs" % len(vecs_to_consider_l))
# convert the vecs to consider set to a list, then to a numpy array
vecs_to_consider_arr = np.array(vecs_to_consider_l)
# call nearest neighbors on the reduced list of candidate vectors
nearest_neighbor_idx_l = nearest_neighbor(v, vecs_to_consider_arr, k=k)
# Use the nearest neighbor index list as indices into the ids to consider
# create a list of nearest neighbors by the document ids
nearest_neighbor_ids = [ids_to_consider_l[idx]
for idx in nearest_neighbor_idx_l]
return nearest_neighbor_ids
#document_vecs, ind2Tweet
doc_id = 0
doc_to_search = all_tweets[doc_id]
vec_to_search = document_vecs[doc_id]
# UNQ_C22 (UNIQUE CELL IDENTIFIER, DO NOT EDIT)
# You do not have to input any code in this cell, but it is relevant to grading, so please do not change anything
# Sample
nearest_neighbor_ids = approximate_knn(
doc_id, vec_to_search, planes_l, k=3, num_universes_to_use=5)
print(f"Nearest neighbors for document {doc_id}")
print(f"Document contents: {doc_to_search}")
print("")
for neighbor_id in nearest_neighbor_ids:
print(f"Nearest neighbor at document id {neighbor_id}")
print(f"document contents: {all_tweets[neighbor_id]}")
###Output
Nearest neighbors for document 0
Document contents: #FollowFriday @France_Inte @PKuchly57 @Milipol_Paris for being top engaged members in my community this week :)
Nearest neighbor at document id 2140
document contents: @PopsRamjet come one, every now and then is not so bad :)
Nearest neighbor at document id 701
document contents: With the top cutie of Bohol :) https://t.co/Jh7F6U46UB
Nearest neighbor at document id 51
document contents: #FollowFriday @France_Espana @reglisse_menthe @CCI_inter for being top engaged members in my community this week :)
|
programming/google_spreadsheet/pygspread.ipynb | ###Markdown
pygsheets- http://pygsheets.readthedocs.io/en/latest/- https://github.com/nithinmurali/pygsheets- $ pip3 install pygsheets get oauth 2.0 client id- http://pygsheets.readthedocs.io/en/latest/authorizing.html#oauth-credentials (how to authenticate) Variables used- gc : object that can access Google Drive once authentication is complete- sh : object for the whole spreadsheet file- sheet : variable holding a worksheet- cell : variable holding a cell
###Code
import pygsheets
###Output
_____no_output_____
###Markdown
access- Connect to the spreadsheet file- Call the authorize function with the JSON file downloaded during OAuth 2.0 setup passed through the outh_file keyword parameter.- Running the code below opens an OAuth consent web page; you must allow access there before the Google Spreadsheet API can be used.
###Code
gc = pygsheets.authorize(outh_file='client_secret.json')
###Output
_____no_output_____
###Markdown
open sheet- Create a new spreadsheet in Google Drive.- Open the spreadsheet file by its name.- Pass the spreadsheet name as a parameter to the open function to open a spreadsheet file stored in Google Drive, as shown below.- From the opened spreadsheet file sh, the first worksheet can be fetched with sh.sheet1.- The first worksheet is always fetched first; other worksheets can be accessed with the selecting features shown later.
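Besides opening by title, pygsheets can also open a spreadsheet by its key or its full URL; a short sketch (the key and URL below are placeholders, not real documents):
```python
# open by spreadsheet key or by full URL (placeholder values shown)
sh = gc.open_by_key('1aBcD_placeholder_key')
sh = gc.open_by_url('https://docs.google.com/spreadsheets/d/1aBcD_placeholder_key/')
```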
###Code
sh = gc.open('email') # open the spreadsheet file (sh : object for the whole spreadsheet)
sheet1 = sh.sheet1 # access a worksheet (sheet1 : object for the first worksheet)
sheet1
###Output
_____no_output_____
###Markdown
create sheet- Create a worksheet- Pass the new sheet name and the row and column sizes as parameters to the add_worksheet function to create a new worksheet.
###Code
# create a new worksheet named new_sheet with 20 rows and 5 columns and store it in the variable sheet2
sheet2 = sh.add_worksheet("new_sheet", rows=20, cols=5)
sheet2
###Output
_____no_output_____
###Markdown
copy sheet- Copy a worksheet- When add_worksheet is called with a worksheet passed through the src_worksheet parameter, the new worksheet is created as a copy of the sheet set in src_worksheet.
###Code
# copy sheet1 to create a new worksheet titled email_copied and store it in the variable sheet3
sheet3 = sh.add_worksheet("email_copied", src_worksheet=sheet1)
sheet3
###Output
_____no_output_____
###Markdown
delete sheet- Delete a worksheet- Pass the worksheet object to delete as the parameter of del_worksheet and that worksheet is removed.
###Code
# delete the sheet referenced by the sheet3 variable (sh[2])
sh.del_worksheet(sh[2])
###Output
_____no_output_____
###Markdown
selecting sheet- How to select and get the object for one desired worksheet from the sh object, which holds all the worksheets.- A worksheet can be fetched by its title or by its position.
###Code
# get all worksheets as a list
sheet_list = sh.worksheets()
print(sheet_list)
# get a worksheet by its title
new_sheet = sh.worksheet_by_title("new_sheet")
print(new_sheet)
# get a worksheet by index
sheet0 = sh.worksheet("index", 0)
print(sheet0)
# check whether it is the same as sheet1, the first worksheet stored earlier
sheet0 == sheet1
# get a worksheet by offset
sheet0 = sh[0]
print(sheet0)
# check whether it is the same as sheet1, the first worksheet stored earlier
sheet0 == sheet1
###Output
_____no_output_____
###Markdown
get values
###Code
import pandas as pd  # needed for the DataFrame below
# get all data as a list of records (dictionary type)
pd.DataFrame(sheet1.get_all_records())
# get all data as a matrix (list type)
all_data_sheet1 = sheet1.get_all_values(returnas='matrix')
all_data_sheet1
# get data as a matrix for a specified cell range
some_data_sheet1 = sheet1.get_values(start=(2,2), end=(3,3), returnas='matrix')
some_data_sheet1
# the data of a specific cell can be fetched with "sheet[row][col]"
value = sheet1[3][2]
value
# find a string
cell_list = sh[0].find("[email protected]")
cell_list
# find cells containing a specific string and replace it with another string
cell_list = sh[0].find("[email protected]", replace="[email protected]")
cell_list
# export to a csv file
sheet1.export(pygsheets.ExportType.CSV, filename="sheet1.csv")
###Output
sheet1.csv
###Markdown
update & insert
###Code
# update the range A1:C4 with the data in some_data_sheet1
sh[1].update_cells(crange='A1:C4', values=some_data_sheet1)
# get all data from the second worksheet, at position sh[1]
all_data_sheet2 = sh[1].get_all_values()
all_data_sheet2
# insert 2 rows below row 4 (the data goes into rows 5 and 6)
sh[1].insert_rows(row=4, number=2, values=all_data_sheet2)
# reset the number of rows and columns of the worksheet
sh[1].rows = 7
sh[1].cols = 2
# rows can be read one at a time in a loop
for row in sh[1]:
    print(row)
# update the worksheet title
sh[1].title = "NewSheet"
# find the last row with data and append new data below it
sh[1].append_table(values=["์ด๋ฏผ์ฑ","[email protected]"])
###Output
_____no_output_____
###Markdown
delete
###Code
# clear all contents of the worksheet
sh[1].clear()
###Output
_____no_output_____
###Markdown
change to pandas- A Google Sheet can be converted into a DataFrame of pandas, the Python package for data analysis.
###Code
import pandas as pd
sheet1
df = pd.DataFrame(columns=["์๋ฒ","์ด๋ฆ","์ด๋ฉ์ผ"])
sheet1.set_dataframe(df,(1,1)) # use (1,1) so the data is written starting from the cell at position (1,1)
df = sheet1.get_as_df()
df
# save to a csv file
df.to_csv("email.csv", index=False)
# read the csv file back
df = pd.read_csv("email.csv")
df
###Output
_____no_output_____
###Markdown
Cell
###Code
# create a cell_test worksheet by copying sheet1
test_sheet = sh.add_worksheet("cell_test", src_worksheet=sheet1)
test_sheet
# get the object for a specific cell
b2 = test_sheet.cell('B2')
# check the cell value
print(b2.value)
# point b2 at the 3rd column (b2 now refers to the data in column 3)
b2.col = 3
# check the cell value
print(b2.value)
# change the data at the position of b2 to "[email protected]"
b2.value = "[email protected]"
b2.value
# update the data at position C2 to '[email protected]'
test_sheet.update_cell('C2', '[email protected]')
# get the list of cells from A1 to C4
cell_list = test_sheet.range('A1:C4')
print(cell_list)
# get the list of cells from A1 to C4
cell_list = test_sheet.get_values('A1','C4', returnas='cells')
print(cell_list)
# get the list of cells in the second row
cell_list = test_sheet.get_row(2, returnas='cells')
print(cell_list)
%%time
cell = test_sheet.cell('C2')
# add a note
cell.note = "this is email data."
# change the cell background color (Red, Green, Blue, Alpha)
cell.color = (1.0,1.0,0.0,1.0)
# change the text format
cell.text_format['fontSize'] = 12
cell.text_format['bold'] = True
# sync the changes
cell.update()
###Output
CPU times: user 353 ms, sys: 20.3 ms, total: 374 ms
Wall time: 9.3 s
###Markdown
share
###Code
# add
sh.share("[email protected]")
# remove
sh.remove_permissions("[email protected]")
###Output
_____no_output_____
###Markdown
all clear
###Code
sh.del_worksheet(sh[1])
sh.del_worksheet(sh[1])
###Output
_____no_output_____
###Markdown
Load the iris data from seaborn and put it into a Google spreadsheet
###Code
import seaborn as sns
iris = sns.load_dataset("iris")
iris.tail()
# create a new worksheet
iris_sheet = sh.add_worksheet("iris")
iris_sheet.set_dataframe(iris, 'A1', copy_index=True) # (df, cell_start)
###Output
_____no_output_____ |
notebooks/TensorBayes_v3.2.ipynb | ###Markdown
tensorboard \ --logdir ~/Dropbox/Cours/tensorbayes \ --port 6006 \ --debugger_port 6064
###Code
sess.run(tf.global_variables_initializer())
sess.run(ta_beta.eval(session=sess), feed_dict={ind: 0, Xj: x[:,0].reshape(N,1)})
x
# Number of Gibbs sampling iterations
num_iter = 5000
with tf.Session() as sess:
# Initialize variable
sess.run(tf.global_variables_initializer())
# Gibbs sampler iterations
for i in range(num_iter):
print("Gibbs sampling iteration: ", i)
sess.run(emu_up)
#sess.run(ny_reset)
index = np.random.permutation(M)
for marker in index:
current_col = x[:,[marker]]
feed = {ind: marker, Xj: current_col}
sess.run(up_grp, feed_dict=feed)
sess.run(nz_up)
sess.run(emu_up)
sess.run(eps_up)
sess.run(s2b_up)
sess.run(s2e_up)
# Print operations
print(sess.run(print_dict))
# End of Gibbs sampling
print(sess.run(Ebeta), beta_true)
total_time = time.clock()-start_time
print("Total time: " + str(total_time) + "s")
###Output
_____no_output_____ |
notebooks/Documentation_Database_Structure.ipynb | ###Markdown
The Database of Stonktastic*** Database StructureStonktastic uses a relational database created with SQLite3. The database consists of 5 different tables in a star schema. Why SQLiteThe project was built using SQLite as we wanted some of the following features:- **Lightweight** : SQLite databases require very little overhead and maintenance. SQLite also connects to Python easily with several common and easy-to-use libraries- **No installation** : We wanted a database that didn't require large amounts of setup and maintenance.- **Cheaper to run** : When running on a cloud server, the SQLite database does not need a separate database server to run. This is preferable for a low-traffic website. Schema:
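A minimal sketch of how such a SQLite database is typically opened from Python (the file name here is an assumption for illustration, not necessarily the project's actual database file):
```python
import sqlite3

conn = sqlite3.connect("stonktastic.db")     # hypothetical database file name
cur = conn.cursor()
cur.execute("SELECT name FROM sqlite_master WHERE type='table'")
print(cur.fetchall())                        # lists the tables that make up the star schema
conn.close()
```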
###Code
from IPython.display import Image
Image(filename="StockDatabase.jpg")
###Output
_____no_output_____ |
Classification/Support Vector Machine/LinearSVC_Normalize_QuantileTransformer.ipynb | ###Markdown
LinearSVC with Normalize & Quantile Transformer This code template is for classification analysis using the LinearSVC classifier, where the rescaling method used is Normalizer and the feature transformation is done via QuantileTransformer. Required Packages
###Code
import warnings
import numpy as np
import pandas as pd
import seaborn as se
import matplotlib.pyplot as plt
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from imblearn.over_sampling import RandomOverSampler
from sklearn.preprocessing import LabelEncoder, Normalizer, QuantileTransformer
from sklearn.metrics import classification_report, plot_confusion_matrix
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
InitializationFilepath of CSV file
###Code
#filepath
file_path= ""
###Output
_____no_output_____
###Markdown
List of features which are required for model training.
###Code
#x_values
features=[]
###Output
_____no_output_____
###Markdown
Target feature for prediction.
###Code
#y_value
target=''
###Output
_____no_output_____
###Markdown
Data FetchingPandas is an open-source, BSD-licensed library providing high-performance, easy-to-use data manipulation and data analysis tools.We will use panda's library to read the CSV file using its storage path.And we use the head function to display the initial row or entry.
###Code
df=pd.read_csv(file_path)
df.head()
###Output
_____no_output_____
###Markdown
Feature SelectionsIt is the process of reducing the number of input variables when developing a predictive model. Used to reduce the number of input variables to both reduce the computational cost of modelling and, in some cases, to improve the performance of the model.We will assign all the required input features to X and target/outcome to Y.
###Code
X=df[features]
Y=df[target]
###Output
_____no_output_____
###Markdown
Data PreprocessingSince the majority of the machine learning models in the Sklearn library don't handle string category data and null values, we have to explicitly remove or replace null values. The snippet below has functions which remove null values if any exist, and convert the string class data in the dataset by encoding it into integer classes.
###Code
def NullClearner(df):
if(isinstance(df, pd.Series) and (df.dtype in ["float64","int64"])):
df.fillna(df.mean(),inplace=True)
return df
elif(isinstance(df, pd.Series)):
df.fillna(df.mode()[0],inplace=True)
return df
else:return df
def EncodeX(df):
return pd.get_dummies(df)
###Output
_____no_output_____
###Markdown
Calling preprocessing functions on the feature and target set.
###Code
x=X.columns.to_list()
for i in x:
X[i]=NullClearner(X[i])
X=EncodeX(X)
Y=NullClearner(Y)
X.head()
###Output
_____no_output_____
###Markdown
Correlation MapIn order to check the correlation between the features, we will plot a correlation matrix. It is effective in summarizing a large amount of data where the goal is to see patterns.
###Code
f,ax = plt.subplots(figsize=(18, 18))
matrix = np.triu(X.corr())
se.heatmap(X.corr(), annot=True, linewidths=.5, fmt= '.1f',ax=ax, mask=matrix)
plt.show()
###Output
_____no_output_____
###Markdown
Data SplittingThe train-test split is a procedure for evaluating the performance of an algorithm. The procedure involves taking a dataset and dividing it into two subsets. The first subset is utilized to fit/train the model. The second subset is used for prediction. The main motive is to estimate the performance of the model on new data.
###Code
x_train,x_test,y_train,y_test=train_test_split(X,Y,test_size=0.2,random_state=123)
###Output
_____no_output_____
###Markdown
Data RescalingNormalizer normalizes samples (rows) individually to unit norm.Each sample with at least one non zero component is rescaled independently of other samples so that its norm (l1, l2 or inf) equals one.We will fit an object of Normalizer to train data then transform the same data via fit_transform(X_train) method, following which we will transform test data via transform(X_test) method.
###Code
normalizer = Normalizer()
x_train = normalizer.fit_transform(x_train)
x_test = normalizer.transform(x_test)
###Output
_____no_output_____
###Markdown
Quantile TransformerThis method transforms the features to follow a uniform or a normal distribution. Therefore, for a given feature, this transformation tends to spread out the most frequent values. It also reduces the impact of (marginal) outliers: this is therefore a robust preprocessing scheme.Transform features using quantiles information. Linear Support Vector Classification.Similar to SVC with parameter kernel=โlinearโ, but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.This class supports both dense and sparse input and the multiclass support is handled according to a one-vs-the-rest scheme.Model Tuning Parameters:**penalty: {โl1โ, โl2โ}, default=โl2โ** ->Specifies the norm used in the penalization. The โl2โ penalty is the standard used in SVC. The โl1โ leads to coef_ vectors that are sparse.**loss: {โhingeโ, โsquared_hingeโ}, default=โsquared_hingeโ** ->Specifies the loss function. โhingeโ is the standard SVM loss (used e.g. by the SVC class) while โsquared_hingeโ is the square of the hinge loss. The combination of penalty='l1' and loss='hinge' is not supported.**dual: bool, default=True** ->Select the algorithm to either solve the dual or primal optimization problem. Prefer dual=False when n_samples > n_features.**tol: float, default=1e-4** ->Tolerance for stopping criteria.**C: float, default=1.0** ->Regularization parameter. The strength of the regularization is inversely proportional to C. Must be strictly positive.**multi_class: {โovrโ, โcrammer_singerโ}, default=โovrโ** ->Determines the multi-class strategy if y contains more than two classes. "ovr" trains n_classes one-vs-rest classifiers, while "crammer_singer" optimizes a joint objective over all classes. While crammer_singer is interesting from a theoretical perspective as it is consistent, it is seldom used in practice as it rarely leads to better accuracy and is more expensive to compute. If "crammer_singer" is chosen, the options loss, penalty and dual will be ignored.**fit_intercept: bool, default=True** ->Whether to calculate the intercept for this model. If set to false, no intercept will be used in calculations (i.e. data is expected to be already centered).**intercept_scaling: float, default=1** ->When self.fit_intercept is True, instance vector x becomes [x, self.intercept_scaling], i.e. a โsyntheticโ feature with constant value equals to intercept_scaling is appended to the instance vector. The intercept becomes intercept_scaling * synthetic feature weight Note! the synthetic feature weight is subject to l1/l2 regularization as all other features. To lessen the effect of regularization on synthetic feature weight (and therefore on the intercept) intercept_scaling has to be increased.**class_weight: dict or โbalancedโ, default=None** ->Set the parameter C of class i to class_weight[i]*C for SVC. If not given, all classes are supposed to have weight one. The โbalancedโ mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)).**verbose: int, default=0** ->Enable verbose output. 
Note that this setting takes advantage of a per-process runtime setting in liblinear that, if enabled, may not work properly in a multithreaded context.**random_state: int, RandomState instance or None, default=None** ->Controls the pseudo random number generation for shuffling the data for the dual coordinate descent (if dual=True). When dual=False the underlying implementation of LinearSVC is not random and random_state has no effect on the results. Pass an int for reproducible output across multiple function calls. See Glossary.**max_iter: int, default=1000** ->The maximum number of iterations to be run.
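For reference, a hedged sketch of how a few of these tuning parameters could be passed into the pipeline used below (the values shown are just the documented defaults, not tuned recommendations):
```python
model = make_pipeline(
    QuantileTransformer(output_distribution='uniform'),
    LinearSVC(penalty='l2', loss='squared_hinge', C=1.0, max_iter=1000)
)
```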
###Code
model=make_pipeline(QuantileTransformer(), LinearSVC())
model.fit(x_train,y_train)
###Output
_____no_output_____
###Markdown
Model AccuracyWe will use the trained model to make predictions on the test set, then use the predicted values to measure the accuracy of our model.score: For a classifier, the score function returns the mean accuracy of the predictions on the given test data and labels.
###Code
print("Accuracy score {:.2f} %\n".format(model.score(x_test,y_test)*100))
###Output
Accuracy score 78.75 %
###Markdown
> **Confusion Matrix**: The confusion matrix below summarizes the classifier's performance on the test set, showing for each true class how many samples were predicted correctly and how many were assigned to the other class.
###Code
plot_confusion_matrix(model,x_test,y_test,cmap=plt.cm.Blues)
###Output
_____no_output_____
###Markdown
Classification ReportThe classification report summarizes the quality of the predictions on the test set, listing the precision, recall, F1-score and support for each class.
###Code
print(classification_report(y_test,model.predict(x_test)))
###Output
precision recall f1-score support
0 0.81 0.86 0.83 50
1 0.74 0.67 0.70 30
accuracy 0.79 80
macro avg 0.78 0.76 0.77 80
weighted avg 0.78 0.79 0.79 80
|