path | concatenated_notebook
---|---
Simple-Web-Scraper-Python.ipynb | ###Markdown
How to: Scrape the Web with Python + requests + BeautifulSoup Before you replicate the following code, make sure you have Python and all dependencies installed. - To install package manager brew: `/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"` - To install Python3: `brew install python3` - To install Jupyter and use Notebooks: `pip3 install jupyter` - To install requests: `pip3 install requests` - To install BeautifulSoup: `pip3 install bs4` Documentation: - Python: https://www.python.org/doc/ - requests: http://docs.python-requests.org/en/master/ - BeautifulSoup: https://www.crummy.com/software/BeautifulSoup/bs4/doc/ Import all the needed dependencies
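If the installation went through, a minimal request-and-parse round trip is a quick way to confirm everything is wired up. This is only a sanity-check sketch; the URL (example.com) is a placeholder and not part of the scraper below.

```python
import requests
from bs4 import BeautifulSoup

# Minimal sanity check: fetch a page and print its status code and <title>.
resp = requests.get('http://example.com', timeout=10)
soup = BeautifulSoup(resp.text, 'html.parser')
print(resp.status_code, soup.title.text)
```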
###Code
import requests
from bs4 import BeautifulSoup
###Output
_____no_output_____
###Markdown
Grab HTML source code Send GET request
###Code
url = 'http://www.imfdb.org/wiki/Category:Movie'
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',
'Connection' : 'keep-alive'
}
proxies = {
# Include your proxies if needed
# 'http':'...',
# 'https':'...'
}
response = requests.get(url, headers=headers, proxies=proxies)
response
###Output
_____no_output_____
###Markdown
Save the response
###Code
text = response.text
text
###Output
_____no_output_____
###Markdown
Parse the response with BeautifulSoup
###Code
souped = BeautifulSoup(text, "html.parser")
souped
###Output
_____no_output_____
###Markdown
Find the `<div>` for movie pages
###Code
movie_pages = souped.find('div', attrs={'id':'mw-pages'})
movie_pages
###Output
_____no_output_____
###Markdown
Grab all links to movie pages
###Code
bullets = movie_pages.find_all('li')
bullets
urls = [] # Initiate an empty list
for bullet in bullets: # simple for loop
url = 'http://www.imfdb.org' + bullet.a['href'] # local scope variable
print(url) # console.log in JavaScript
urls.append(url)
urls
###Output
_____no_output_____
###Markdown
Find the link to the next page Conveniently enough, it's the very last `<a>` in the movie_pages `<div>`
###Code
movie_pages
movie_pages.find_all('a')
# This is a list
type(movie_pages.find_all('a'))
next_page = movie_pages.find_all('a')[-1]
next_page
next_page.text
next_page['href']
next_page_url = 'http://www.imfdb.org' + next_page['href']
next_page_url
###Output
_____no_output_____
###Markdown
Bind that into one piece of code to extract 5k pages/links
###Code
urls = []
def scrape_the_web(url): # Python function with one parameter
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',
'Connection' : 'keep-alive'
}
proxies = {
# Don't forget your proxies if you need any
}
response = requests.get(url, headers=headers, proxies=proxies)
souped = BeautifulSoup(response.text, "html.parser")
movie_pages = souped.find('div', attrs={'id':'mw-pages'})
bullets = movie_pages.find_all('li')
for bullet in bullets:
url = 'http://www.imfdb.org' + bullet.a['href']
urls.append(url)
next_page = movie_pages.find_all('a')[-1]
next_page_text = next_page.text
if next_page_text == "next 200":
next_page_url = 'http://www.imfdb.org' + next_page['href']
print(next_page_url)
scrape_the_web(next_page_url)
else:
pass
url = 'http://www.imfdb.org/wiki/Category:Movie'
scrape_the_web(url)
len(urls)
urls[-1]
###Output
_____no_output_____
###Markdown
Now that we've got every link, let's extract firearm information from each page
###Code
url = 'http://www.imfdb.org/wiki/American_Graffiti'
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',
'Connection' : 'keep-alive'
}
proxies = {
# Don't forget your proxies if you need any
}
response = requests.get(url, headers=headers, proxies=proxies)
souped = BeautifulSoup(response.text, "html.parser")
souped
souped.find_all('span', attrs={'class':'mw-headline'})
# list comprehension
[span.text for span in souped.find_all('span', attrs={'class':'mw-headline'})]
[span.next.next.next.text for span in souped.find_all('span', attrs={'class':'mw-headline'})]
###Output
_____no_output_____
###Markdown
Let's try with another movie
###Code
url = 'http://www.imfdb.org/wiki/And_All_Will_Be_Quiet_(Potem_nastapi_cisza)'
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',
'Connection' : 'keep-alive'
}
proxies = {
# Don't forget your proxies if you need any
}
response = requests.get(url, headers=headers, proxies=proxies)
souped = BeautifulSoup(response.text, "html.parser")
print([span.text for span in souped.find_all('span', attrs={'class':'mw-headline'})])
print([span.next.next.next.text for span in souped.find_all('span', attrs={'class':'mw-headline'})])
###Output
[' Pistols ', ' Tokarev TT-33 ', 'Luger P08', ' Submachine Guns ', 'PPSh-41', ' MP40 ', ' Machine Guns ', 'Degtyaryov DP-28', ' MG34 ', 'Goryunov SG-43 Machine Gun', ' Maxim ', ' Rifles ', ' Mosin Nagant M44 Carbine ', ' Mosin Nagant M38 Carbine ', ' Karabiner 98k ', ' Hand Grenades ', ' F-1 hand grenade ', ' Model 24 Stielhandgranate ', ' Others ', ' SPSh Flare Pistol ', ' PTRD-41 ', ' 7.5 cm Pak 40 ', ' 45mm anti-tank gun M1937 (53-K) ', ' 76 mm divisional gun M1942 (ZiS-3)', ' SU-76M ', ' T-34 ']
[' Tokarev TT-33 ', 'Various characters are seen with a Tokarev TT-33 pistol.\n', 'Some German NCO and officers carry a Luger P08 pistol.\n', ' PPSh-41', 'Polish infantrymen are mainly armed with PPSh-41 submachine guns.\n', 'MP40 is submachine gun used by German infantrymen.\n', ' Degtyaryov DP-28', 'Polish soldiers mainly use Degtyarev DP-28 machine guns.\n', 'MG34 machine guns are widely used by German soldiers.\n', 'Polish soldiers are also occasionally seen with Goryunov SG-43 machine guns.\n', 'Polish troops are equipped with a Maxim M1910/30 machine guns.\n', ' Mosin Nagant M44 Carbine ', 'Some Polish soldiers are armed with a Mosin Nagant M44 carbine.\n', 'But most Polish infantrymen carry older type M38 carbines.\n', 'The Kar98k carry a few German soldiers.\n', ' F-1 hand grenade ', 'Polish infantrymen carry F-1 hand grenades and also Model 24 Stielhandgranates.\n', ' Model 24 Stielhandgranate "Potato Masher" high-explosive fragmentation hand grenade', ' SPSh Flare Pistol ', 'Lt. Kolski (Marek Perepeczko) gives instruction to the firing a rocket from SPSh Flare Pistol.\n', 'Polish troops are equipped with PTRD-41 anti-tank rifles.\n', 'The popular weapon of the German Army is a 7.5 cm Pak 40 anti tank gun.\n', 'Soviet troops are equipped with 45 mm anti-tank gun M1937 (53-K)s.\n', 'Polish artillery use against German tanks a 76 mm divisional gun M1942 (ZiS-3).\n', 'On the battlefield appears also several Polish SU-76M self-propelled guns.\n', 'The Polish army in the USSR had in service with the Soviet tanks T-34.\n']
###Markdown
Remove the extra spaces, or any special characters
###Code
print([span.next.next.next.text.strip() for span in souped.find_all('span', attrs={'class':'mw-headline'})])
###Output
['Tokarev TT-33', 'Various characters are seen with a Tokarev TT-33 pistol.', 'Some German NCO and officers carry a Luger P08 pistol.', 'PPSh-41', 'Polish infantrymen are mainly armed with PPSh-41 submachine guns.', 'MP40 is submachine gun used by German infantrymen.', 'Degtyaryov DP-28', 'Polish soldiers mainly use Degtyarev DP-28 machine guns.', 'MG34 machine guns are widely used by German soldiers.', 'Polish soldiers are also occasionally seen with Goryunov SG-43 machine guns.', 'Polish troops are equipped with a Maxim M1910/30 machine guns.', 'Mosin Nagant M44 Carbine', 'Some Polish soldiers are armed with a Mosin Nagant M44 carbine.', 'But most Polish infantrymen carry older type M38 carbines.', 'The Kar98k carry a few German soldiers.', 'F-1 hand grenade', 'Polish infantrymen carry F-1 hand grenades and also Model 24 Stielhandgranates.', 'Model 24 Stielhandgranate "Potato Masher" high-explosive fragmentation hand grenade', 'SPSh Flare Pistol', 'Lt. Kolski (Marek Perepeczko) gives instruction to the firing a rocket from SPSh Flare Pistol.', 'Polish troops are equipped with PTRD-41 anti-tank rifles.', 'The popular weapon of the German Army is a 7.5 cm Pak 40 anti tank gun.', 'Soviet troops are equipped with 45 mm anti-tank gun M1937 (53-K)s.', 'Polish artillery use against German tanks a 76 mm divisional gun M1942 (ZiS-3).', 'On the battlefield appears also several Polish SU-76M self-propelled guns.', 'The Polish army in the USSR had in service with the Soviet tanks T-34.']
###Markdown
Bind it all into one piece of code
###Code
len(urls)
headers = {
'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Mobile Safari/537.36',
'Connection' : 'keep-alive'
}
proxies = {
# Don't forget your proxies if you need any
}
every_gun_in_every_movie = []
uncaught_movies = []
for url in urls:
movie_title = url.split('wiki/')[1]
response = requests.get(url, headers=headers, proxies=proxies)
souped = BeautifulSoup(response.text, "html.parser")
try:
guns_depicted = [p.span.text.strip() for p in souped.find_all('h2') if p.span and p.span['class'][0] == 'mw-headline']
scene_descriptions = [p.span.parent.find_next('p').text.strip() for p in souped.find_all('h2') if p.span and p.span['class'][0] == 'mw-headline']
    except Exception:
        uncaught_movies.append(url)
        continue  # skip pages that don't have the expected structure
for gun, description in zip(guns_depicted,scene_descriptions):
empty_dictionary = {} # Python dictionaries
empty_dictionary['movie_title'] = movie_title
empty_dictionary['gun_used'] = gun
empty_dictionary['scene_description'] = description
every_gun_in_every_movie.append(empty_dictionary)
len(every_gun_in_every_movie)
len(uncaught_movies)
###Output
_____no_output_____
###Markdown
And since we're at it: `pip3 install pandas`
###Code
import pandas as pd
df = pd.DataFrame(every_gun_in_every_movie)
df
df.movie_title.value_counts().head(8)
df.gun_used.value_counts().head(8)
df.to_csv("every_gun_in_every_movie.csv", index=False)
from matplotlib import pyplot as plt
%matplotlib inline
df.movie_title.value_counts().head(8).plot(kind='bar')
plt.style.use('ggplot')
df.movie_title.value_counts().head(8).plot(kind='bar', figsize=(10,8))
plt.savefig('every_gun_in_every_movie.svg')
###Output
_____no_output_____ |
Misc Notebooks/r8_small-Copy1.ipynb | ###Markdown
Image recon
###Code
# Import required libraries
from image_util import *
import skimage.filters
from matplotlib import pyplot as plt
import cairocffi as cairo
import math, random
import numpy as np
import pandas as pd
from IPython.display import Image
from scipy.interpolate import interp1d
import astra
%matplotlib inline
def r8_to_sino(readings):
sino = []
for e in range(8):
start = e*8 + (e+2)%8
end = e*8 + (e+6)%8
if end-start == 4:
sino.append(readings[start : end])
else:
r = readings[start : (e+1)*8]
for p in readings[e*8 : end]:
r.append(p)
sino.append(r)
return np.asarray(sino)
nviews = 8
ndetectors = 4
nvdetectors = 8
IMSIZE = 50
R = IMSIZE/2
D = IMSIZE/2
# Transforming from a round fan-beam to a fan-flat projection (See diagram)
beta = np.linspace(math.pi/8, 7*math.pi/8, ndetectors)
alpha = np.asarray([R*math.sin(b-math.pi/2)/(R**2 + D**2)**0.5 for b in beta])
tau = np.asarray([(R+D)*math.tan(a) for a in alpha])
tau_new = np.linspace(-(max(tau)/2), max(tau)/2, nvdetectors)
vol_geom = astra.create_vol_geom(IMSIZE, IMSIZE)
angles = np.linspace(0,2*math.pi,nviews);
d_size = (tau[-1]-tau[0])/nvdetectors
proj_geom= astra.create_proj_geom('fanflat', d_size, nvdetectors, angles, D, R);
proj_id = astra.create_projector('line_fanflat', proj_geom, vol_geom)
base = read_av()
np.asarray(base).reshape(8,8)
%%time
for i in range(1):
print(i)
r2 = read_av()
readings = (np.asarray(base)-np.asarray(r2))# - base
readings = r8_to_sino(readings.tolist()) # Get important ones and reorder
readings2 = []
for r in readings:
f = interp1d(tau, r, kind='cubic') # Can change to linear
readings2.append(f(tau_new))
sinogram_id = astra.data2d.create('-sino', proj_geom, np.asarray(readings2))
# Plotting sinogram - new (transformed) set of readings
plt.figure(num=None, figsize=(16, 10), dpi=80, facecolor='w', edgecolor='k')
ax1 = plt.subplot(1, 3, 1)
ax1.imshow(readings2) #<< Set title
# Doing the reconstruction, in this case with FBP
rec_id = astra.data2d.create('-vol', vol_geom)
cfg = astra.astra_dict('FBP')
cfg['ReconstructionDataId'] = rec_id
cfg['ProjectionDataId'] = sinogram_id
cfg['ProjectorId'] = proj_id
# Create the algorithm object from the configuration structure
alg_id = astra.algorithm.create(cfg)
astra.algorithm.run(alg_id, 1)
# Get the result
rec = astra.data2d.get(rec_id)
ax2 = plt.subplot(1, 3, 2)
ax2.imshow(rec)
norm_rec = rec/(np.amax(np.abs(rec)))
blurred = skimage.filters.gaussian(norm_rec, 3)
ax3 = plt.subplot(1, 3, 3)
ax3.imshow(blurred)
plt.savefig('r8s'+str(i) + '.png')
print(max(np.asarray(readings2).flatten()))
# Clean up.
astra.algorithm.delete(alg_id)
astra.data2d.delete(rec_id)
astra.data2d.delete(sinogram_id)
astra.projector.delete(proj_id)
np.linspace(math.pi/8, 7*math.pi/8, ndetectors)
np.linspace(0, math.pi, ndetectors)
r = []
y = []
y2 = []
for i in range(50):
r.append(np.asarray(read_all()[0]).flatten())
y.append(0)
y2.append(0)
for i in range(50):
r.append(np.asarray(read_all()[0]).flatten())
y.append(2)
y2.append(2)
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
X_train, X_test, y_train, y_test = train_test_split(r, y)
regr = RandomForestClassifier(max_depth=5, random_state=0)
regr.fit(X_train, y_train)
regr.score(X_test, y_test)
regr.predict([np.asarray(read_all()[0]).flatten()])
df = pd.read_csv('r8_small_rotation.csv')
r = df[[str(i) for i in range(64)]]
y = df['Y']
from sklearn.neural_network import MLPClassifier, MLPRegressor
from sklearn.preprocessing import StandardScaler  # needed for the scaler created below
X_train, X_test, y_train, y_test = train_test_split(r, y)
scaler = StandardScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
mlpc = MLPClassifier(hidden_layer_sizes=(20, 20, 20), max_iter=400)
mlpc.fit(X_train, y_train)
print(mlpc.score(X_test, y_test))
from IPython.display import clear_output
import time  # for the polling delay in the loop below
av = 0
while True:
read = [np.asarray(read_all()[0]).flatten()]
read = scaler.transform(read)
print(mlpc.predict(read))
time.sleep(0.1)
clear_output(wait=True)
ser.read_all()
import pandas as pd
df1 = pd.DataFrame(r)
df1.head()
df1['Y'] = y
df1.head()
df1.to_csv('r8_small_rotation.csv', index=False)
###Output
_____no_output_____ |
model1-train-resnet34/2020-03-29-resnet34-experiments.ipynb | ###Markdown
ResNet34 - Experiments Welcome to lesson 1! For those of you who are using a Jupyter Notebook for the first time, you can learn about this useful tool in a tutorial we prepared specially for you; click `File`->`Open` now and click `00_notebook_tutorial.ipynb`. In this lesson we will build our first image classifier from scratch, and see if we can achieve world-class results. Let's dive in! Every notebook starts with the following three lines; they ensure that any edits to libraries you make are reloaded here automatically, and also that any charts or images displayed are shown in this notebook.
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
We import all the necessary packages. We are going to work with the [fastai V1 library](http://www.fast.ai/2018/10/02/fastai-ai/) which sits on top of [Pytorch 1.0](https://hackernoon.com/pytorch-1-0-468332ba5163). The fastai library provides many useful functions that enable us to quickly and easily build neural networks and train our models.
###Code
from fastai.vision import *
from fastai.metrics import error_rate
###Output
_____no_output_____
###Markdown
If you're using a computer with an unusually small GPU, you may get an out of memory error when running this notebook. If this happens, click Kernel->Restart, uncomment the 2nd line below to use a smaller *batch size* (you'll learn all about what this means during the course), and try again.
###Code
bs = 64
# bs = 16 # uncomment this line if you run out of memory even after clicking Kernel->Restart
###Output
_____no_output_____
###Markdown
Looking at the data We are going to use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf) which features 12 cat breeds and 25 dog breeds. Our model will need to learn to differentiate between these 37 distinct categories. According to their paper, the best accuracy they could get in 2012 was 59.21%, using a complex model that was specific to pet detection, with separate "Image", "Head", and "Body" models for the pet photos. Let's see how accurate we can be using deep learning! We are going to use the `untar_data` function to which we must pass a URL as an argument and which will download and extract the data.
###Code
help(untar_data)
#path = untar_data(URLs.PETS); path
path = Path(r'/home/ec2-user/SageMaker/classify-streetview/images')
path
path.ls()
#path_anno = path/'annotations'
path_img = path
###Output
_____no_output_____
###Markdown
The first thing we do when we approach a problem is to take a look at the data. We _always_ need to understand very well what the problem is and what the data looks like before we can figure out how to solve it. Taking a look at the data means understanding how the data directories are structured, what the labels are and what some sample images look like. The main difference between the handling of image classification datasets is the way labels are stored. In this particular dataset, labels are stored in the filenames themselves. We will need to extract them to be able to classify the images into the correct categories. Fortunately, the fastai library has a handy function made exactly for this, `ImageDataBunch.from_name_re` gets the labels from the filenames using a [regular expression](https://docs.python.org/3.6/library/re.html).
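As a quick illustration of the regex-based labelling described above (the filename below is a hypothetical pets-style example; the street-view images used in this notebook are organised into class folders instead):

```python
import re

# Hypothetical pets-style filename; this project's images are labelled by folder instead.
fname = 'images/great_pyrenees_173.jpg'
pat = r'/([^/]+)_\d+.jpg$'
print(re.search(pat, fname).group(1))  # -> great_pyrenees
```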
###Code
fnames = get_image_files(path_img)
fnames[:5]
tfms = get_transforms(do_flip=False)
#data = ImageDataBunch.from_folder(path_img, ds_tfms=tfms, size=224)
#np.random.seed(2)
#pat = r'/([^/]+)_\d+.jpg$'
#data = ImageDataBunch.from_name_re(path_img, fnames, pat, ds_tfms=get_transforms(), size=224, bs=bs
# ).normalize(imagenet_stats)
# https://docs.fast.ai/vision.data.html#ImageDataBunch.from_folder
data = ImageDataBunch.from_folder(path, ds_tfms = tfms, size = 224, bs=bs)
data.show_batch(rows=3, figsize=(7,6))
print(data.classes)
len(data.classes),data.c
###Output
['0_missing', '1_null', '2_obstacle', '3_present', '4_surface_prob']
###Markdown
Training: resnet34 Now we will start training our model. We will use a [convolutional neural network](http://cs231n.github.io/convolutional-networks/) backbone and a fully connected head with a single hidden layer as a classifier. Don't know what these things mean? Not to worry, we will dive deeper in the coming lessons. For the moment you need to know that we are building a model which will take images as input and will output the predicted probability for each of the categories (in this case, it will have 37 outputs). We will train for 4 epochs (4 cycles through all our data).
###Code
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.model
learn.fit_one_cycle(4)
learn.save('stage-1')
###Output
_____no_output_____
###Markdown
Results Let's see what results we have got. We will first see which were the categories that the model most confused with one another. We will try to see if what the model predicted was reasonable or not. In this case the mistakes look reasonable (none of the mistakes seems obviously naive). This is an indicator that our classifier is working correctly. Furthermore, when we plot the confusion matrix, we can see that the distribution is heavily skewed: the model makes the same mistakes over and over again but it rarely confuses other categories. This suggests that it just finds it difficult to distinguish some specific categories between each other; this is normal behaviour.
###Code
interp = ClassificationInterpretation.from_learner(learn)
losses,idxs = interp.top_losses()
len(data.valid_ds)==len(losses)==len(idxs)
interp.plot_top_losses(9, figsize=(15,11))
doc(interp.plot_top_losses)
interp.plot_confusion_matrix(figsize=(4,4), dpi=60)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Unfreezing, fine-tuning, and learning rates Since our model is working as we expect it to, we will *unfreeze* our model and train some more.
###Code
learn.unfreeze()
learn.fit_one_cycle(1)
learn.load('stage-1');
learn.lr_find()
learn.recorder.plot()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-6,1e-4))
###Output
_____no_output_____
###Markdown
That's a pretty accurate model! Training: resnet50 Now we will train in the same way as before but with one caveat: instead of using resnet34 as our backbone we will use resnet50 (resnet34 is a 34 layer residual network while resnet50 has 50 layers. It will be explained later in the course and you can learn the details in the [resnet paper](https://arxiv.org/pdf/1512.03385.pdf)). Basically, resnet50 usually performs better because it is a deeper network with more parameters. Let's see if we can achieve a higher performance here. To help it along, let us use larger images too, since that way the network can see more detail. We reduce the batch size a bit since otherwise this larger network will require more GPU memory.
###Code
# `pat` is not defined for this folder-organised dataset (the regex loader above is commented out),
# so build the larger-image DataBunch from the folder structure instead.
data = ImageDataBunch.from_folder(path_img, ds_tfms=get_transforms(),
                                  size=299, bs=bs//2).normalize(imagenet_stats)
learn = cnn_learner(data, models.resnet50, metrics=error_rate)
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(8)
learn.save('stage-1-50')
###Output
_____no_output_____
###Markdown
It's astonishing that it's possible to recognize pet breeds so accurately! Let's see if full fine-tuning helps:
###Code
learn.unfreeze()
learn.fit_one_cycle(3, max_lr=slice(1e-6,1e-4))
###Output
Total time: 03:27
epoch train_loss valid_loss error_rate
1 0.097319 0.155017 0.048038 (01:10)
2 0.074885 0.144853 0.044655 (01:08)
3 0.063509 0.144917 0.043978 (01:08)
###Markdown
If it doesn't, you can always go back to your previous model.
###Code
learn.load('stage-1-50');
interp = ClassificationInterpretation.from_learner(learn)
interp.most_confused(min_val=2)
###Output
_____no_output_____
###Markdown
Other data formats
###Code
path = untar_data(URLs.MNIST_SAMPLE); path
tfms = get_transforms(do_flip=False)
data = ImageDataBunch.from_folder(path, ds_tfms=tfms, size=26)
data.show_batch(rows=3, figsize=(5,5))
learn = cnn_learner(data, models.resnet18, metrics=accuracy)
learn.fit(2)
df = pd.read_csv(path/'labels.csv')
df.head()
data = ImageDataBunch.from_csv(path, ds_tfms=tfms, size=28)
data.show_batch(rows=3, figsize=(5,5))
data.classes
data = ImageDataBunch.from_df(path, df, ds_tfms=tfms, size=24)
data.classes
fn_paths = [path/name for name in df['name']]; fn_paths[:2]
pat = r"/(\d)/\d+\.png$"
data = ImageDataBunch.from_name_re(path, fn_paths, pat=pat, ds_tfms=tfms, size=24)
data.classes
data = ImageDataBunch.from_name_func(path, fn_paths, ds_tfms=tfms, size=24,
label_func = lambda x: '3' if '/3/' in str(x) else '7')
data.classes
labels = [('3' if '/3/' in str(x) else '7') for x in fn_paths]
labels[:5]
data = ImageDataBunch.from_lists(path, fn_paths, labels=labels, ds_tfms=tfms, size=24)
data.classes
###Output
_____no_output_____ |
BERT_distyll.ipynb | ###Markdown
BERT Distillation A fine-tuned BERT model achieves very good quality on many NLP tasks. However, it cannot always be used in practice because the model is very large and fairly slow. Several ways around this limitation have been devised. One of them is `knowledge distillation`. The idea is as follows: we take two models - our BERT fine-tuned for a specific task (the teacher model) and a model with a simpler architecture (the student model). The student learns to imitate the teacher's behaviour: BERT's logits are fed to the student during its training. As the teacher we will use the previously trained model that classifies the names of building-supply products. Libraries
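For reference, the classic formulation of knowledge distillation (Hinton et al.) trains the student on temperature-softened teacher probabilities; this notebook instead regresses the student directly on the raw teacher logits with CatBoost. A minimal sketch of the soft-target transform, assuming plain NumPy arrays of teacher logits:

```python
import numpy as np

def soft_targets(logits, T=2.0):
    # Temperature-scaled softmax: larger T gives softer class probabilities,
    # exposing more of the teacher's "dark knowledge" to the student.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)
```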
###Code
pip install transformers catboost
import os
import random
import numpy as np
import pandas as pd
import torch
from transformers import AutoConfig, AutoModelForSequenceClassification
from transformers import AutoTokenizer
from torch.utils.data import TensorDataset, DataLoader, SequentialSampler
from catboost import Pool, CatBoostRegressor
from sklearn.metrics import classification_report
from tqdm.notebook import tqdm
SEED = 22
os.environ['PYTHONHASHSEED'] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device.type)
if device.type == 'cuda':
print(torch.cuda.get_device_name(0))
###Output
cuda
Tesla P100-PCIE-16GB
###Markdown
Loading the tokenizer, model, and configuration
###Code
# config
config = AutoConfig.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model')
# tokenizer
tokenizer = AutoTokenizer.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', pad_to_max_length=True)
# model
model = AutoModelForSequenceClassification.from_pretrained('/content/drive/My Drive/colab_data/leroymerlin/model/BERT_model', config=config)
###Output
_____no_output_____
###Markdown
Data preparation
###Code
category_index = {'Водоснабжение': 8,
'Декор': 12,
'Инструменты': 4,
'Краски': 11,
'Кухни': 15,
'Напольные покрытия': 5,
'Окна и двери': 2,
'Освещение': 13,
'Плитка': 6,
'Сад': 9,
'Сантехника': 7,
'Скобяные изделия': 10,
'Столярные изделия': 1,
'Стройматериалы': 0,
'Хранение': 14,
'Электротовары': 3}
category_index_inverted = dict(map(reversed, category_index.items()))
df = pd.read_csv('/content/drive/My Drive/colab_data/leroymerlin/to_classifier.csv')
sentences = df.name.values
labels = [category_index[i] for i in df.category_1.values]
tokens = [tokenizer.encode(
sent,
add_special_tokens=True,
max_length=24,
pad_to_max_length='right') for sent in sentences]
tokens_tensor = torch.tensor(tokens)
#labels_tensor = torch.tensor(labels)
BATCH_SIZE = 400
#full_dataset = TensorDataset(tokens_tensor, labels_tensor)
sampler = SequentialSampler(tokens_tensor)
dataloader = DataLoader(tokens_tensor, sampler=sampler, batch_size=BATCH_SIZE)
###Output
_____no_output_____
###Markdown
Getting the BERT logits
###Code
train_logits = []
with torch.no_grad():
model.to(device)
for batch in tqdm(dataloader):
batch = batch.to(device)
outputs = model(batch)
logits = outputs[0].detach().cpu().numpy()
train_logits.extend(logits)
#train_logits = np.vstack(train_logits)
###Output
_____no_output_____
###Markdown
Training the student Now let's take a multi-output regression model from CatBoost and pass it all the logits we obtained.
###Code
data_pool = Pool(tokens, train_logits)
distilled_model = CatBoostRegressor(iterations=2000,
depth=4,
learning_rate=.1,
loss_function='MultiRMSE',
verbose=200)
distilled_model.fit(data_pool)
###Output
0: learn: 11.6947874 total: 275ms remaining: 9m 9s
200: learn: 9.0435970 total: 47s remaining: 7m
400: learn: 8.2920608 total: 1m 32s remaining: 6m 10s
600: learn: 7.7736947 total: 2m 18s remaining: 5m 22s
800: learn: 7.3674586 total: 3m 4s remaining: 4m 36s
1000: learn: 7.0166625 total: 3m 51s remaining: 3m 51s
1200: learn: 6.7202548 total: 4m 38s remaining: 3m 5s
1400: learn: 6.4602129 total: 5m 25s remaining: 2m 19s
1600: learn: 6.2248947 total: 6m 12s remaining: 1m 32s
1800: learn: 6.0164036 total: 7m remaining: 46.4s
1999: learn: 5.8322141 total: 7m 46s remaining: 0us
###Markdown
Comparing the quality of the models
###Code
category_index_inverted = dict(map(reversed, category_index.items()))
###Output
_____no_output_____
###Markdown
BERT metrics:
###Code
print(classification_report(labels, np.argmax(train_logits, axis=1), target_names=category_index_inverted.values()))
###Output
precision recall f1-score support
Водоснабжение 0.94 0.88 0.91 13377
Декор 1.00 0.40 0.57 2716
Инструменты 1.00 0.40 0.58 540
Краски 0.97 0.81 0.88 20397
Кухни 0.96 0.91 0.93 29920
Напольные покрытия 1.00 0.56 0.72 2555
Окна и двери 1.00 0.61 0.76 2440
Освещение 0.98 0.92 0.95 30560
Плитка 0.97 0.96 0.97 23922
Сад 0.95 0.98 0.96 49518
Сантехника 0.97 0.74 0.84 24245
Скобяные изделия 0.85 0.93 0.89 15280
Столярные изделия 0.58 0.95 0.72 30329
Стройматериалы 0.98 0.67 0.80 8532
Хранение 0.97 0.77 0.86 6237
Электротовары 0.96 0.87 0.92 4019
accuracy 0.89 264587
macro avg 0.94 0.77 0.83 264587
weighted avg 0.91 0.89 0.89 264587
###Markdown
Student model metrics:
###Code
tokens_pool = Pool(tokens)
distilled_predicted_logits = distilled_model.predict(tokens_pool, prediction_type='RawFormulaVal') # Probability
print(classification_report(labels, np.argmax(distilled_predicted_logits, axis=1), target_names=category_index_inverted.values()))
###Output
precision recall f1-score support
Водоснабжение 0.90 0.53 0.67 13377
Декор 0.99 0.30 0.46 2716
Инструменты 0.00 0.00 0.00 540
Краски 0.97 0.61 0.75 20397
Кухни 0.85 0.77 0.81 29920
Напольные покрытия 1.00 0.28 0.44 2555
Окна и двери 0.96 0.30 0.45 2440
Освещение 0.92 0.82 0.87 30560
Плитка 0.94 0.86 0.90 23922
Сад 0.85 0.86 0.86 49518
Сантехника 0.91 0.55 0.68 24245
Скобяные изделия 0.61 0.78 0.69 15280
Столярные изделия 0.40 0.92 0.56 30329
Стройматериалы 0.80 0.64 0.71 8532
Хранение 0.93 0.50 0.65 6237
Электротовары 0.88 0.24 0.38 4019
accuracy 0.74 264587
macro avg 0.81 0.56 0.62 264587
weighted avg 0.82 0.74 0.75 264587
|
pykonal_eq/quake_de.ipynb | ###Markdown
Development of EDT residual
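The equal-differential-time (EDT) residual developed in the cells below compares, for every pair of arrivals $(i, j)$, the observed arrival-time difference with the travel-time difference predicted at a trial location $x$:

$$ r_{ij}(x) = \left(t_i^{\mathrm{obs}} - t_j^{\mathrm{obs}}\right) - \left(T_i(x) - T_j(x)\right) $$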
###Code
import itertools
def edt(self, coords):
arrivals = self.arrivals.set_index("handle")
pairs = list(itertools.product(arrivals.index, arrivals.index))
r = [
(arrivals.loc[handle1, "time"] - arrivals.loc[handle2, "time"] )
- (self._tt[handle1].value(coords[:3], null=np.inf) - self._tt[handle2].value(coords[:3], null=np.inf))
for handle1, handle2 in pairs
]
return (r)
%%time
event = EVENTS.iloc[1]
locator.arrivals = ARRIVALS.set_index("event_id").loc[event["event_id"]]
locator._tt = {handle: locator.tti.read(handle) for handle in locator.arrivals.index}
def residuals(self, coords, bootstrap=False):
arrivals = locator.arrivals
pairs = np.array(list(itertools.product(arrivals.index, arrivals.index)))
ota = locator.arrivals.loc[pairs[:, 0], "time"].values
otb = locator.arrivals.loc[pairs[:, 1], "time"].values
tts = {handle: self._tt[handle].value(coords[:3], null=np.inf) for handle in arrivals.index}
tta = np.array([tts[handle] for handle in pairs[:, 0]])
ttb = np.array([tts[handle] for handle in pairs[:, 1]])
return ((ota - otb) - (tta - ttb))
np.array([*nodes[i, j], 0])
nodes = locator.tti.nodes[-5]
# edt_norm = np.zeros(nodes.shape[:-1])
# for i in range(edt_norm.shape[0]):
# for j in range(edt_norm.shape[1]):
# edt_norm[i, j] = np.linalg.norm(residuals(locator, nodes[i, j]))
l2_norm = np.zeros(nodes.shape[:-1])
for i in range(l2_norm.shape[0]):
for j in range(l2_norm.shape[1]):
l2_norm[i, j] = locator.norm(np.array([*nodes[i, j], loc0.x[-1]]))
plt.close("all")
fig, axes = plt.subplots(ncols=2, figsize=(12, 6))
qmesh = axes[0].pcolormesh(l2_norm)
fig.colorbar(qmesh, ax=axes[0])
qmesh = axes[1].pcolormesh(edt_norm)
fig.colorbar(qmesh, ax=axes[1])
%%time
loc0 = locator.differential_evolution(order=2, bootstrap=False)
loc0.x
%%time
boots = np.empty((0, 4))
for i in range(100):
locator.bootstrap_sample(loc0.x)
loc = locator.differential_evolution(order=2, bootstrap=True)
boots = np.vstack([boots, loc.x])
np.degrees(np.std(boots[:, 1])) * 111
plt.close("all")
fig, ax = plt.subplots()
ax.hist(boots[:, 0], bins=32)
###Output
_____no_output_____ |
notebooks/BertModel.ipynb | ###Markdown
Load model
###Code
bert_model.train()
###Output
_____no_output_____
###Markdown
Original text
###Code
print(input_text)
###Output
b'Dollar gains on Greenspan speech\n\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\n\nAnd Alan Greenspan highlighted the US government\'s willingness to curb spending and rising household savings as factors which may help to reduce it. In late trading in New York, the dollar reached $1.2871 against the euro, from $1.2974 on Thursday. Market concerns about the deficit has hit the greenback in recent months. On Friday, Federal Reserve chairman Mr Greenspan\'s speech in London ahead of the meeting of G7 finance ministers sent the dollar higher after it had earlier tumbled on the back of worse-than-expected US jobs data. "I think the chairman\'s taking a much more sanguine view on the current account deficit than he\'s taken for some time," said Robert Sinche, head of currency strategy at Bank of America in New York. "He\'s taking a longer-term view, laying out a set of conditions under which the current account deficit can improve this year and next."\n\nWorries about the deficit concerns about China do, however, remain. China\'s currency remains pegged to the dollar and the US currency\'s sharp falls in recent months have therefore made Chinese export prices highly competitive. But calls for a shift in Beijing\'s policy have fallen on deaf ears, despite recent comments in a major Chinese newspaper that the "time is ripe" for a loosening of the peg. The G7 meeting is thought unlikely to produce any meaningful movement in Chinese policy. In the meantime, the US Federal Reserve\'s decision on 2 February to boost interest rates by a quarter of a point - the sixth such move in as many months - has opened up a differential with European rates. The half-point window, some believe, could be enough to keep US assets looking more attractive, and could help prop up the dollar. The recent falls have partly been the result of big budget deficits, as well as the US\'s yawning current account gap, both of which need to be funded by the buying of US bonds and assets by foreign firms and governments. The White House will announce its budget on Monday, and many commentators believe the deficit will remain at close to half a trillion dollars.\n'
###Markdown
Actual summarized text from dataset
###Code
actual_summary = data[DATA_SUMMARIZED][TEST_INDEX]
print(actual_summary)
###Output
b'The dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.China\'s currency remains pegged to the dollar and the US currency\'s sharp falls in recent months have therefore made Chinese export prices highly competitive.Market concerns about the deficit has hit the greenback in recent months."I think the chairman\'s taking a much more sanguine view on the current account deficit than he\'s taken for some time," said Robert Sinche, head of currency strategy at Bank of America in New York.The recent falls have partly been the result of big budget deficits, as well as the US\'s yawning current account gap, both of which need to be funded by the buying of US bonds and assets by foreign firms and governments."He\'s taking a longer-term view, laying out a set of conditions under which the current account deficit can improve this year and next."'
###Markdown
Summarized text
###Code
summary = bert_model.predict(input_text)
print(summary)
###Output
b'Dollar gains on Greenspan speech\n\nThe dollar has hit its highest level against the euro in almost three months after the Federal Reserve head said the US trade deficit is set to stabilise.\n\nAnd Alan Greenspan highlighted the US government\'s willingness to curb spending and rising household savings as factors which may help to reduce it. I think the chairman\'s taking a much more sanguine view on the current account deficit than he\'s taken for some time," said Robert Sinche, head of currency strategy at Bank of America in New York. " But calls for a shift in Beijing\'s policy have fallen on deaf ears, despite recent comments in a major Chinese newspaper that the "time is ripe" for a loosening of the peg.
###Markdown
Evaluation
###Code
reference_data = data[DATA_ORIGINAL].sample(n=10, random_state=42)
reference_data
# candidate_data = reference_data.apply(lambda x: bert_model.predict(x))
candidate_data = reference_data.map(bert_model.predict)
candidate_data
###Output
_____no_output_____
###Markdown
Score for 10 samples
###Code
precision, recall, f1 = bert_model.evaluation(preds=candidate_data, refs=reference_data)
print(f'Precision: {precision}')
print(f'Recall: {recall}')
print(f'F1: {f1}')
###Output
calculating scores...
computing bert embedding.
###Markdown
Score for 100 samples
###Code
precision, recall, f1 = bert_model.evaluation(preds=preds, refs=refs)
print(f'Precision: {precision}')
print(f'Recall: {recall}')
print(f'F1: {f1}')
###Output
calculating scores...
computing bert embedding.
|
naive_bayes/Naive_Bayes_Basic.ipynb | ###Markdown
Load the Dataset
###Code
import pandas as pd
import numpy as np
import random
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
iris = pd.read_csv(url,
header=None,
names = ['sepal_length',
'sepal_width',
'petal_length',
'petal_width',
'species'])
## Definitions of classifying classes
classes = list(pd.unique(iris.species))
numClasses = len(classes)
###Output
_____no_output_____
###Markdown
Extracting the Feature Matrix
###Code
X = np.matrix(iris.iloc[:, 0:4])
X = X.astype(float)  # np.float is deprecated; plain float keeps this running on newer NumPy
m, n = X.shape
###Output
_____no_output_____
###Markdown
Extracting the response
###Code
y = np.asarray(iris.species)
###Output
_____no_output_____
###Markdown
Extracting features for different classes
###Code
CLS = []
for each in classes:
CLS.append(np.matrix(iris[iris.species == each].iloc[:, 0:4]))
len(CLS)
###Output
_____no_output_____
###Markdown
The real meat Calculating the mean and variance of each feature for each class
###Code
pArray = []
def calculate_mean_and_variance(CLS, n, numClasses):
for i in range(numClasses):
pArray.append([])
for x in range(n):
mean = np.mean(CLS[i][:, x])
var = np.var(CLS[i][:, x])
pArray[i].append([mean, var])
calculate_mean_and_variance(CLS, n, numClasses)
for each in pArray:
print(each, end='\n\n')
###Output
[[5.0060000000000002, 0.12176400000000002], [3.4180000000000001, 0.14227600000000001], [1.464, 0.029504000000000002], [0.24399999999999999, 0.011264000000000003]]
[[5.9359999999999999, 0.261104], [2.7700000000000005, 0.096500000000000016], [4.2599999999999998, 0.21640000000000004], [1.3259999999999998, 0.038323999999999997]]
[[6.5879999999999983, 0.39625600000000011], [2.9740000000000002, 0.10192399999999999], [5.5520000000000005, 0.29849600000000004], [2.0260000000000002, 0.07392399999999999]]
###Markdown
Choosing training dataset (Random Choosing)
###Code
# Choosing 70% of the dataset for training Randomly
random_index = random.sample(range(m), int(m * 0.7))
def probability(mean, var, x):
    # Gaussian probability density of x for a class feature with the given mean and variance
    # (pArray stores variances, computed with np.var above)
    return (1.0 / np.sqrt(2 * np.pi * var)) * np.exp(-((x - mean) ** 2) / (2 * var))
###Output
_____no_output_____
###Markdown
Creating the actual Baysean Classifier
###Code
def classify_baysean():
    correct_predictions = 0
    for index in random_index:
        x = X[index, :]
        result = []
        for eachClass in range(numClasses):
            prior = 1 / numClasses
            joint = prior
            # Multiply the Gaussian likelihood of every feature for this class
            # (pArray is indexed by class, then by feature: [mean, variance])
            for feature in range(n):
                mean, var = pArray[eachClass][feature]
                joint *= probability(mean, var, x[0, feature])
            result.append(joint)
        predicted = classes[int(np.argmax(result))]
        if predicted == y[index]:
            correct_predictions += 1
    print('Accuracy:', correct_predictions / len(random_index))

classify_baysean()
x = X[49,:][:, 0]
x
import math
mean = pArray[0][0][0]
stdev = pArray[0][0][1]
exponent = math.exp(-(math.pow(x-mean,2)/(2*stdev)))
exponent
a = [[1, 2, 3, 4], [5, 6, 7, 8]]
for attribute in zip(*a):
print(attribute)
df = pd.DataFrame(np.random.randn(100, 2))
df
msk = np.random.rand(len(df)) < 0.8
msk
###Output
_____no_output_____ |
DP/Gamblers Problem Solution.ipynb | ###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
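The value-iteration update implemented below is the Bellman optimality backup specialised to this problem, where a stake $a$ is limited by the current capital and the distance to the goal:

$$ V(s) \leftarrow \max_{1 \le a \le \min(s,\,100-s)} \Big[\, p_h \big(r_{s+a} + \gamma V(s+a)\big) + (1-p_h)\big(r_{s-a} + \gamma V(s-a)\big) \Big] $$

with $\gamma$ the discount factor (1.0 here) and rewards $r_s$ equal to 1 at $s=100$ and 0 otherwise.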
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming) Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value for all action in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals to the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Final Policy (action stake) vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Final Policy (action stake) vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book.A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}.The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1.The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming)Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value for all action in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals to the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Final Policy (action stake) vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Final Policy (action stake) vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book.A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}.The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1.The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming)Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value for all action in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals to the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
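###Markdown
The exercise above also asks for p_h = 0.55. A minimal sketch of that second run, reusing the function just defined (the policy_55 / v_55 names are only illustrative):
###Code
# Illustrative second run for the p_h = 0.55 case requested by the exercise.
policy_55, v_55 = value_iteration_for_gamblers(0.55)
print("Optimized Policy (p_h = 0.55):")
print(policy_55)
print("")
print("Optimized Value Function (p_h = 0.55):")
print(v_55)
###Output
_____no_output_____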
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
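###Markdown
If you prefer the two panels in a single figure (closer to how Figure 4.3 is laid out in the book), here is a small sketch using plt.subplots, reusing the v and policy computed above:
###Code
# Optional: draw the value estimates and the greedy stakes as two panels of one figure.
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 8))
ax1.plot(range(100), v[:100])
ax1.set_xlabel('Capital')
ax1.set_ylabel('Value Estimates')
ax1.set_title('Value Estimates vs State (Capital)')
ax2.bar(range(100), policy, align='center', alpha=0.5)
ax2.set_xlabel('Capital')
ax2.set_ylabel('Final policy (stake)')
ax2.set_title('Capital vs Final Policy')
fig.tight_layout()
plt.show()
###Output
_____no_output_____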
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros([100, 100])
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s, best_action] = 1.0
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[[0. 0. 0. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
...
[0. 0. 0. ... 0. 0. 0.]
[0. 0. 1. ... 0. 0. 0.]
[0. 1. 0. ... 0. 0. 0.]]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
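###Markdown
In this variant the returned policy is a 100x100 one-hot array (rows are capital levels, columns are stakes) rather than a vector of stakes, so to read off the stake chosen in each state you first collapse it with an argmax over the action axis. A minimal sketch (the same conversion is used for the bar plot further down):
###Code
# Sketch: recover the stake per capital level from the one-hot policy array.
stakes = np.argmax(policy, axis=1)
print(stakes)
###Output
_____no_output_____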
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
# (policy is a 100x100 one-hot array in this variant; take the argmax over the action axis to get the stake per state)
y = np.argmax(policy, axis=1)
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.55)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
v.shape
###Output
_____no_output_____
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
s = 50
print("Optimized Value Function: v({})={}".format(s, v[s]))
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function: v(50)=0.25
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
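###Markdown
A quick sanity check on the number printed above (a sketch, assuming the previous cell has been run): with p_h = 0.25 the policy stakes the full 50 at capital 50 (see the policy printout), so v(50) equals the single-flip win probability p_h; similarly v(25) = p_h^2 (two bold wins in a row) and v(75) = p_h + (1 − p_h)·p_h (win now, or fall back to 50 and win from there), matching the value function above.
###Code
# Illustrative check of the bold-play identities against the computed values.
print(v[50], 0.25)
print(v[25], 0.25 ** 2)
print(v[75], 0.25 + 0.75 * 0.25)
###Output
_____no_output_____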
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
    # Initialize the policy here; this variant fills it in greedily during the sweeps
    # below instead of extracting it after convergence.
    policy = np.zeros(100)
    while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
policy[s] = np.argmax(A)
# Check if we can stop
if delta < theta:
break
    # The deterministic greedy policy was already built up during the sweeps above,
    # so no separate extraction pass over the converged value function is needed here.
return policy, V
policy, v = value_iteration_for_gamblers(0.55)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1.]
Optimized Value Function:
[0. 0.17907988 0.3256451 0.44562338 0.54386112 0.62432055
0.69024101 0.74427075 0.78857479 0.82492306 0.8547625 0.87927601
0.89943065 0.916017 0.92968144 0.94095243 0.95026207 0.95796371
0.96434629 0.96964617 0.97405667 0.97773597 0.98081353 0.98339531
0.9855681 0.98740299 0.98895822 0.9902816 0.99141231 0.99238257
0.99321882 0.99394285 0.99457258 0.99512281 0.99560577 0.99603155
0.99640856 0.99674375 0.99704294 0.99731096 0.9975519 0.99776916
0.99796564 0.99814377 0.99830564 0.99845302 0.99858745 0.99871025
0.99882255 0.99892537 0.99901957 0.99910594 0.99918515 0.99925782
0.99932449 0.99938566 0.99944178 0.99949324 0.99954041 0.99958363
0.9996232 0.99965942 0.99969253 0.99972279 0.99975041 0.99977559
0.99979853 0.99981941 0.99983838 0.99985561 0.99987123 0.99988537
0.99989815 0.9999097 0.99992011 0.99992947 0.99993789 0.99994544
0.9999522 0.99995825 0.99996364 0.99996844 0.99997271 0.99997649
0.99997983 0.99998278 0.99998538 0.99998766 0.99998965 0.99999139
0.9999929 0.99999421 0.99999534 0.99999631 0.99999714 0.99999785
0.99999845 0.99999895 0.99999937 0.99999972 0. ]
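###Markdown
For p_h = 0.55 the computed policy stakes a single dollar in every state, so the capital follows a unit-step random walk with favourable drift. A rough cross-check (a sketch, assuming the cell above has been run) is the classical gambler's-ruin win probability for that walk; the computed values sit slightly below it because value iteration stops once the per-sweep change drops under theta = 0.0001 rather than at exact convergence.
###Code
# Compare a few computed values with the closed-form unit-stake win probability
# (classical gambler's ruin for a walk that moves +1 w.p. 0.55 and -1 w.p. 0.45).
p, q = 0.55, 0.45
def win_prob(s):
    return (1 - (q / p) ** s) / (1 - (q / p) ** 100)
for s in (1, 10, 50, 99):
    print(s, v[s], win_prob(s))
###Output
_____no_output_____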
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Value Estimates vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Value Estimates vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3 (Gambler’s Problem) from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
        Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
            Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
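###Markdown
Before the full algorithm, here is a minimal sketch (not part of the original exercise) of the ingredients described above: the reward vector with +1 only at the goal, and a single Bellman backup for one state/stake pair. The choice s = 50, a = 50 is just for illustration; with p_h = 0.25, staking everything at capital 50 wins with probability 0.25, which matches the value of state 50 in the output further below.
###Code
# Sketch of the MDP ingredients described above (illustration only).
import numpy as np
p_h = 0.25
rewards = np.zeros(101) # reward is 0 everywhere ...
rewards[100] = 1 # ... except +1 on reaching the goal of 100
V = np.zeros(101) # all-zero initial value estimates
s, a = 50, 50 # capital 50, stake everything
backup = p_h * (rewards[s + a] + V[s + a]) + (1 - p_h) * (rewards[s - a] + V[s - a])
print(backup) # 0.25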
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.0001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value of all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 15. 16. 17.
18. 6. 20. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 11. 10. 9. 42. 7. 44. 5. 46. 47. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 11. 10. 9. 17. 7. 19. 5. 21.
22. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.24792480e-05 2.89916992e-04 6.95257448e-04
1.16010383e-03 1.76906586e-03 2.78102979e-03 4.03504074e-03
4.66214120e-03 5.59997559e-03 7.08471239e-03 9.03964043e-03
1.11241192e-02 1.56793594e-02 1.61464431e-02 1.69517994e-02
1.86512806e-02 1.98249817e-02 2.24047303e-02 2.73845196e-02
2.83388495e-02 3.04937363e-02 3.61633897e-02 3.84953022e-02
4.44964767e-02 6.25000000e-02 6.27174377e-02 6.33700779e-02
6.45857723e-02 6.59966059e-02 6.78135343e-02 7.08430894e-02
7.46098323e-02 7.64884604e-02 7.93035477e-02 8.37541372e-02
8.96225423e-02 9.58723575e-02 1.09538078e-01 1.10939329e-01
1.13360151e-01 1.18457374e-01 1.21977661e-01 1.29716907e-01
1.44653559e-01 1.47520113e-01 1.53983246e-01 1.70990169e-01
1.77987434e-01 1.95990576e-01 2.50000000e-01 2.50217438e-01
2.50870078e-01 2.52085772e-01 2.53496606e-01 2.55313534e-01
2.58343089e-01 2.62109832e-01 2.63988460e-01 2.66803548e-01
2.71254137e-01 2.77122542e-01 2.83372357e-01 2.97038078e-01
2.98439329e-01 3.00860151e-01 3.05957374e-01 3.09477661e-01
3.17216907e-01 3.32153559e-01 3.35020113e-01 3.41483246e-01
3.58490169e-01 3.65487434e-01 3.83490576e-01 4.37500000e-01
4.38152558e-01 4.40122454e-01 4.43757317e-01 4.47991345e-01
4.53440603e-01 4.62529268e-01 4.73829497e-01 4.79468031e-01
4.87912680e-01 5.01265085e-01 5.18867627e-01 5.37617932e-01
5.78614419e-01 5.82817988e-01 5.90080452e-01 6.05372123e-01
6.15934510e-01 6.39150720e-01 6.83960814e-01 6.92560339e-01
7.11950883e-01 7.62970611e-01 7.83963162e-01 8.37972371e-01
0.00000000e+00]
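###Markdown
The printed policy is only one member of a family of (near-)optimal policies: whenever several stakes tie, or differ by less than the convergence tolerance, np.argmax keeps the first (smallest) one, which is why this policy need not match Figure 4.3 exactly. The sketch below is not part of the original notebook; it assumes `v` from the run above is still in scope and uses a hypothetical helper `near_optimal_stakes` to list every stake whose one-step value is within a tolerance of the best.
###Code
# Sketch: list all stakes that are near-optimal for a given capital, using the
# converged value function `v` from the cell above (p_h = 0.25 assumed).
import numpy as np
def near_optimal_stakes(s, v, p_h=0.25, tol=1e-5): # tol should be comparable to the theta used above
    stakes = np.arange(1, min(s, 100 - s) + 1)
    rewards = np.zeros(101)
    rewards[100] = 1
    q = p_h * (rewards[s + stakes] + v[s + stakes]) + (1 - p_h) * (rewards[s - stakes] + v[s - stakes])
    return stakes[np.isclose(q, q.max(), atol=tol)]
print(near_optimal_stakes(15, v))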
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____
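###Markdown
Figure 4.3 in the book stacks the value estimates and the final policy as two panels of a single figure. The following optional sketch (not part of the original notebook; it reuses `v` and `policy` from the cells above) does the same with matplotlib subplots.
###Code
# Optional: both plots in one figure, roughly mirroring the layout of Figure 4.3.
import matplotlib.pyplot as plt
fig, (ax_v, ax_pi) = plt.subplots(2, 1, figsize=(7, 8))
ax_v.plot(range(100), v[:100]) # upper panel: value estimates
ax_v.set_xlabel('Capital')
ax_v.set_ylabel('Value Estimates')
ax_pi.bar(range(100), policy, align='center', alpha=0.5) # lower panel: final policy
ax_pi.set_xlabel('Capital')
ax_pi.set_ylabel('Final policy (stake)')
fig.tight_layout()
plt.show()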
###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book. A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}. The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1. The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming): Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.00000001, discount_factor=1.0):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value for all actions in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
###Output
_____no_output_____
###Markdown
P(heads) = 0.25
###Code
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11. 10. 16. 17.
18. 6. 5. 21. 3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 12. 39. 40. 9. 42. 7. 6. 45. 46. 3. 48. 49. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 14. 15. 9. 17. 7. 6. 20. 21.
3. 23. 24. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0.00000000e+00 7.28611644e-05 2.91444657e-04 6.95264534e-04
1.16577863e-03 1.77125497e-03 2.78105813e-03 4.03661189e-03
4.66311452e-03 5.60141614e-03 7.08501986e-03 9.04088722e-03
1.11242326e-02 1.56796459e-02 1.61464484e-02 1.69534412e-02
1.86524589e-02 1.98260621e-02 2.24056654e-02 2.73847344e-02
2.83400809e-02 3.04945466e-02 3.61635508e-02 3.84959099e-02
4.44969325e-02 6.25000000e-02 6.27185835e-02 6.33743340e-02
6.45857936e-02 6.59973359e-02 6.78137649e-02 7.08431744e-02
7.46098363e-02 7.64893442e-02 7.93042491e-02 8.37550607e-02
8.96226631e-02 9.58726993e-02 1.09538938e-01 1.10939345e-01
1.13360324e-01 1.18457377e-01 1.21978187e-01 1.29716997e-01
1.44654203e-01 1.47520243e-01 1.53983640e-01 1.70990652e-01
1.77987730e-01 1.95990798e-01 2.50000000e-01 2.50218583e-01
2.50874334e-01 2.52085794e-01 2.53497336e-01 2.55313765e-01
2.58343174e-01 2.62109836e-01 2.63989344e-01 2.66804249e-01
2.71255061e-01 2.77122663e-01 2.83372699e-01 2.97038938e-01
2.98439345e-01 3.00860324e-01 3.05957377e-01 3.09478187e-01
3.17216997e-01 3.32154203e-01 3.35020243e-01 3.41483640e-01
3.58490652e-01 3.65487730e-01 3.83490798e-01 4.37500000e-01
4.38155750e-01 4.40123002e-01 4.43757381e-01 4.47992008e-01
4.53441296e-01 4.62529525e-01 4.73829509e-01 4.79468033e-01
4.87912748e-01 5.01265182e-01 5.18867989e-01 5.37618098e-01
5.78616813e-01 5.82818036e-01 5.90080972e-01 6.05372132e-01
6.15934561e-01 6.39150992e-01 6.83962610e-01 6.92560729e-01
7.11950921e-01 7.62971957e-01 7.83963191e-01 8.37972393e-01
0.00000000e+00]
###Markdown
Show results graphically
###Code
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show();
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show();
###Output
_____no_output_____
###Markdown
P(heads) = 0.4
###Code
policy, v = value_iteration_for_gamblers(0.4)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 8.
7. 19. 20. 4. 22. 2. 1. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10.
11. 12. 38. 39. 40. 41. 8. 43. 44. 45. 4. 47. 2. 1. 50. 1. 2. 3.
4. 5. 6. 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 8. 18. 19. 20. 4.
22. 2. 26. 25. 1. 2. 3. 4. 5. 6. 7. 8. 9. 10. 11. 13. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0. 0.00206562 0.00516406 0.00922547 0.01291015 0.0173854
0.02306368 0.02781411 0.03227539 0.03768507 0.0434635 0.05035447
0.05765919 0.06523937 0.06953528 0.07443124 0.08068847 0.08661104
0.09421268 0.10314362 0.10865874 0.11596663 0.12588617 0.13357998
0.14414799 0.16 0.16309844 0.16774609 0.17383821 0.17936523
0.1860781 0.19459552 0.20172117 0.20841308 0.21652761 0.22519525
0.2355317 0.24648879 0.25785906 0.26430292 0.27164686 0.2810327
0.28991657 0.30131902 0.31471544 0.32298812 0.33394994 0.34882926
0.36036996 0.37622198 0.4 0.40309844 0.40774609 0.41383821
0.41936523 0.4260781 0.43459552 0.44172117 0.44841308 0.45652761
0.46519525 0.4755317 0.48648879 0.49785906 0.50430292 0.51164686
0.5210327 0.52991657 0.54131902 0.55471544 0.56298812 0.57394994
0.58882926 0.60036996 0.61622198 0.64 0.64464766 0.65161914
0.66075731 0.66904785 0.67911715 0.69189327 0.70258175 0.71261962
0.72479141 0.73779287 0.75329756 0.76973319 0.78678859 0.79645439
0.80747029 0.82154905 0.83487485 0.85197853 0.87207316 0.88448217
0.90092491 0.92324389 0.94055495 0.96433297 0. ]
###Markdown
Show results graphically
###Code
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show();
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show();
###Output
_____no_output_____
###Markdown
P(heads) = 0.55
###Code
policy, v = value_iteration_for_gamblers(0.55)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1.
1. 1. 1. 1.]
Optimized Value Function:
[0. 0.18181794 0.33057808 0.45229093 0.55187418 0.63335139
0.70001457 0.75455719 0.79918297 0.83569499 0.86556846 0.89001041
0.91000837 0.92637035 0.93975744 0.95071052 0.95967213 0.96700437
0.97300349 0.97791186 0.9819278 0.98521359 0.98790196 0.99010154
0.99190121 0.99337366 0.9945784 0.99556411 0.99637059 0.99703045
0.99757034 0.99801206 0.99837348 0.99866919 0.99891113 0.99910909
0.99927105 0.99940357 0.999512 0.99960071 0.9996733 0.99973269
0.99978128 0.99982104 0.99985357 0.99988019 0.99990197 0.99991978
0.99993436 0.99994629 0.99995605 0.99996404 0.99997058 0.99997592
0.9999803 0.99998388 0.99998681 0.99998921 0.99999117 0.99999277
0.99999409 0.99999516 0.99999604 0.99999676 0.99999735 0.99999783
0.99999822 0.99999855 0.99999881 0.99999903 0.9999992 0.99999935
0.99999947 0.99999956 0.99999964 0.99999971 0.99999976 0.99999981
0.99999984 0.99999987 0.99999989 0.99999991 0.99999993 0.99999994
0.99999995 0.99999996 0.99999997 0.99999998 0.99999998 0.99999998
0.99999999 0.99999999 0.99999999 0.99999999 1. 1.
1. 1. 1. 1. 0. ]
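###Markdown
With a favorable coin (p_h = 0.55) the computed policy stakes a single dollar at every capital level and the values climb towards 1. As a cross-check (a sketch, not part of the original notebook; it assumes `v` still holds the p_h = 0.55 result), the classical gambler's-ruin formula for unit bets should reproduce these values almost exactly.
###Code
# Cross-check against the gambler's-ruin formula for unit stakes:
# P(reach 100 before 0 | start at s) = (1 - (q/p)**s) / (1 - (q/p)**100), for p != q.
import numpy as np
p, q = 0.55, 0.45
s = np.arange(101)
p_win = (1 - (q / p) ** s) / (1 - (q / p) ** 100)
print(np.max(np.abs(v[1:100] - p_win[1:100]))) # should be tiny (limited by theta)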
###Markdown
Show results graphically
###Code
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show();
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show();
###Output
_____no_output_____
###Markdown
Adding a discount factor (discount_factor = 0.8)
###Code
policy, v = value_iteration_for_gamblers(0.55, discount_factor=0.8)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
# Plotting Capital vs Value Estimates
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Capital vs Value Estimates')
# function to show the plot
plt.show();
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show();
###Output
_____no_output_____
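###Markdown
With discount_factor < 1 the state values are no longer probabilities of winning but expected discounted returns, so they are never higher, and usually noticeably lower, than the undiscounted values. A quick comparison (sketch only; it reuses value_iteration_for_gamblers defined above) makes the effect visible.
###Code
# Sketch: compare undiscounted and discounted values at a few capital levels.
_, v_gamma_10 = value_iteration_for_gamblers(0.55, discount_factor=1.0)
_, v_gamma_08 = value_iteration_for_gamblers(0.55, discount_factor=0.8)
for s in (10, 50, 90):
    print(s, round(float(v_gamma_10[s]), 4), round(float(v_gamma_08[s]), 4))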
###Markdown
This is Example 4.3. Gambler’s Problem from Sutton's book.A gambler has the opportunity to make bets on the outcomes of a sequence of coin flips. If the coin comes up heads, he wins as many dollars as he has staked on that flip; if it is tails, he loses his stake. The game ends when the gambler wins by reaching his goal of $100, or loses by running out of money. On each flip, the gambler must decide what portion of his capital to stake, in integer numbers of dollars. This problem can be formulated as an undiscounted, episodic, finite MDP. The state is the gambler’s capital, s ∈ {1, 2, . . . , 99}.The actions are stakes, a ∈ {0, 1, . . . , min(s, 100 − s)}. The reward is zero on all transitions except those on which the gambler reaches his goal, when it is +1.The state-value function then gives the probability of winning from each state. A policy is a mapping from levels of capital to stakes. The optimal policy maximizes the probability of reaching the goal. Let p_h denote the probability of the coin coming up heads. If p_h is known, then the entire problem is known and it can be solved, for instance, by value iteration.
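###Markdown
In value-iteration terms, the one-step backup implemented below scores each stake $a$ as $A(a) = p_h\,[r(s+a) + \gamma\,V(s+a)] + (1-p_h)\,[r(s-a) + \gamma\,V(s-a)]$ and then updates $V(s) \leftarrow \max_a A(a)$; this is exactly the quantity computed inside the `one_step_lookahead` helper further down, with $\gamma$ the `discount_factor` and $r$ the `rewards` vector.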
###Code
import numpy as np
import sys
import matplotlib.pyplot as plt
if "../" not in sys.path:
sys.path.append("../")
###Output
_____no_output_____
###Markdown
Exercise 4.9 (programming)Implement value iteration for the gambler’s problem and solve it for p_h = 0.25 and p_h = 0.55.
###Code
def value_iteration_for_gamblers(p_h, theta=0.1, discount_factor=0.5):
"""
Args:
p_h: Probability of the coin coming up heads
"""
# The reward is zero on all transitions except those on which the gambler reaches his goal,
# when it is +1.
rewards = np.zeros(101)
rewards[100] = 1
# We introduce two dummy states corresponding to termination with capital of 0 and 100
V = np.zeros(101)
def one_step_lookahead(s, V, rewards):
"""
Helper function to calculate the value for all action in a given state.
Args:
s: The gambler’s capital. Integer.
V: The vector that contains values at each state.
rewards: The reward vector.
Returns:
A vector containing the expected value of each action.
Its length equals to the number of actions.
"""
A = np.zeros(101)
stakes = range(1, min(s, 100-s)+1) # Your minimum bet is 1, maximum bet is min(s, 100-s).
for a in stakes:
# rewards[s+a], rewards[s-a] are immediate rewards.
# V[s+a], V[s-a] are values of the next states.
# This is the core of the Bellman equation: The expected value of your action is
# the sum of immediate rewards and the value of the next state.
A[a] = p_h * (rewards[s+a] + V[s+a]*discount_factor) + (1-p_h) * (rewards[s-a] + V[s-a]*discount_factor)
return A
while True:
# Stopping condition
delta = 0
# Update each state...
for s in range(1, 100):
# Do a one-step lookahead to find the best action
A = one_step_lookahead(s, V, rewards)
# print(s,A,V) # if you want to debug.
best_action_value = np.max(A)
# Calculate delta across all states seen so far
delta = max(delta, np.abs(best_action_value - V[s]))
# Update the value function. Ref: Sutton book eq. 4.10.
V[s] = best_action_value
# Check if we can stop
if delta < theta:
break
# Create a deterministic policy using the optimal value function
policy = np.zeros(100)
for s in range(1, 100):
# One step lookahead to find the best action for this state
A = one_step_lookahead(s, V, rewards)
best_action = np.argmax(A)
# Always take the best action
policy[s] = best_action
return policy, V
policy, v = value_iteration_for_gamblers(0.25)
print("Optimized Policy:")
print(policy)
print("")
print("Optimized Value Function:")
print(v)
print("")
###Output
Optimized Policy:
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 12. 11. 10. 9. 8.
7. 19. 18. 17. 22. 21. 23. 25. 24. 23. 22. 21. 20. 19. 31. 30. 29. 34.
36. 37. 12. 11. 10. 41. 40. 43. 6. 5. 45. 3. 48. 49. 50. 49. 48. 47.
46. 45. 44. 43. 42. 41. 40. 39. 38. 37. 36. 35. 34. 33. 32. 31. 30. 29.
28. 27. 26. 25. 24. 23. 22. 21. 20. 19. 18. 17. 16. 15. 14. 13. 12. 11.
10. 9. 8. 7. 6. 5. 4. 3. 2. 1.]
Optimized Value Function:
[0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0.03125 0.03125 0.03125 0.03125 0.03125
0.03125 0.03125 0.03125 0.03125 0.03125 0.03125
0.03125 0.03125 0.04296875 0.04296875 0.04296875 0.04296875
0.04296875 0.04296875 0.04736328 0.04736328 0.04736328 0.04901123
0.04901123 0.04962921 0.25 0.25 0.25 0.25
0.25 0.25 0.25 0.25 0.25 0.25
0.25 0.25 0.25 0.26171875 0.26171875 0.26171875
0.26171875 0.26171875 0.26171875 0.26611328 0.26611328 0.26611328
0.26776123 0.26776123 0.26837921 0.34375 0.34375 0.34375
0.34375 0.34375 0.34375 0.34375 0.34814453 0.34814453
0.34814453 0.34979248 0.35041046 0.3506422 0.37890625 0.37890625
0.37890625 0.3805542 0.3805542 0.38140392 0.39208984 0.39208984
0.39270782 0.39703369 0.39726543 0.39897454 0. ]
###Markdown
Show your results graphically, as in Figure 4.3.
###Code
# Plotting Final Policy (action stake) vs State (Capital)
# x axis values
x = range(100)
# corresponding y axis values
y = v[:100]
# plotting the points
plt.plot(x, y)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Value Estimates')
# giving a title to the graph
plt.title('Final Policy (action stake) vs State (Capital)')
# function to show the plot
plt.show()
# Plotting Capital vs Final Policy
# x axis values
x = range(100)
# corresponding y axis values
y = policy
# plotting the bars
plt.bar(x, y, align='center', alpha=0.5)
# naming the x axis
plt.xlabel('Capital')
# naming the y axis
plt.ylabel('Final policy (stake)')
# giving a title to the graph
plt.title('Capital vs Final Policy')
# function to show the plot
plt.show()
###Output
_____no_output_____ |
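###Markdown
The exercise also asks for p_h = 0.55. A minimal additional run (not executed here) could reuse the same function as follows; the variable names `policy_55` and `v_55` are arbitrary.
###Code
# Same value iteration, now for a coin that lands heads with probability 0.55
policy_55, v_55 = value_iteration_for_gamblers(0.55)
print("Optimized Policy (p_h = 0.55):")
print(policy_55)
print("")
print("Optimized Value Function (p_h = 0.55):")
print(v_55)
###Output
_____no_output_____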
python/.ipynb_checkpoints/examples-checkpoint.ipynb | ###Markdown
Python Basics Lecture 01. Practice Problems, by Ike Lee. Example : 1 Compute the volume of a rectangular cuboid
###Code
# set up the variables
length = 5
height = 5
width = 20
volume = length*width*height
print('직육면체의 부피 : %d'%volume)
length = 10 # just reassign the variable
volume = length*width*height
print('직육면체의 부피 : %d'%volume)
###Output
_____no_output_____
###Markdown
Example : 2 Use a for loop to find the female dog. Use `and` to match both conditions.
###Code
suspects = [['낙타', '포유류','암컷'], ['상어','어류','숫컷'], ['푸들','개','암컷']]
for suspect in suspects:
if suspect[1] == '개' and suspect[2] =='암컷':
print('범인은', suspect[0], '입니다')
###Output
_____no_output_____
###Markdown
Example : 3 Computing annual interest ``` An account with a 3% annual interest rate is opened on July 2, 2017 and 3,000,000 won is deposited. Write a program that computes and prints the account balance on July 2, 2018. Use one variable for the deposit and one for the annual rate. Then change the program to read the deposit and the annual rate as input and print the total. Language : python3. Input description : the deposit and annual rate are entered as follows =============================== Deposit (won), annual rate (%):: 4000, 3 Output description : print the total after one year as follows =============================== 4120.0 Sample input : 4000, 3 Sample output : 4120.0```
###Code
money, ratio = eval(input('입금액(원), 연이율(%)::'))
print(money*(1+(1/100)*ratio))
###Output
_____no_output_____
###Markdown
Example : 4 Compute the area of a triangle ```For a triangle whose three sides are 3, 4 and 5, the area is computed as follows: x = (3 + 4 + 5)/2, and the area is the positive square root of x(x-3)(x-4)(x-5). Language : python3. Input description : enter the three side lengths as follows ====================== Side lengths of the triangle (comma separated): 3,4,5 Output description : print the area of the triangle as follows ====================== 6.0 Sample input : 3,4,5 Sample output : 6.0```
###Code
a,b,c = eval(input())
x = (a+b+c)/2
area = (x*(x-a)*(x-b)*(x-c))**(0.5)
print(area)
###Output
_____no_output_____
###Markdown
Example : 5 Use a for loop to find the female dog. Use `and` to match both conditions.
###Code
suspects = [['낙타', '포유류','암컷'], ['상어','어류','숫컷'], ['푸들','개','암컷']]
for suspect in suspects:
if suspect[1] == '개' and suspect[2] =='암컷':
print('범인은', suspect[0], '입니다')
###Output
_____no_output_____
###Markdown
Example : 6 Fill in the blank so that two cards are drawn without duplicates.
###Code
import random
cities = ['서울','부산','울산','인천' ]
print(random.sample(cities, 2))
###Output
_____no_output_____
###Markdown
Example : 7 Pick one of the following at random! annimals = zebra (얼룩말), ox (황소), frog (개구리), sparrow (참새).
###Code
# list []
import random
annimals = ['얼룩말','황소', '개구리', '참새']
print(random.choice(annimals))
###Output
_____no_output_____
###Markdown
Example : 8 Use def to write a function that greets each person! Expected output: '가브리엘 님 안녕하세요?' and '엘리스 님 안녕하세요?'
###Code
def welcome(name):
print(name,'님 안녕하세요?')
welcome('가브리엘')
welcome('엘리스')
###Output
_____no_output_____
###Markdown
Example : 9 Print the grade for a given score. Cheolsu's score is 75; show which grade it is. Grade A: 80 < score <= 100, Grade B: 60 < score <= 80, Grade C: 40 < score <= 60
###Code
score =75
if 80< score <=100:
print('학점은 A 입니다')
if 60< score <=80:
print('학점은 B 입니다')
if 40< score <=60:
print('학점은 C 입니다')
###Output
_____no_output_____
###Markdown
Example : 10 Use variables to compute the total sales. Order 1: 2 coffees, 4 teas, 5 lemon teas; Order 2: 1 coffee, 1 tea, 5 lemon teas; Order 3: 2 coffees, 3 teas, 1 lemon tea
###Code
coffee =4000
tea = 3000
lemon =200
order1 = (coffee*2 + tea*4 + lemon*5)
order2 = (coffee*1 + tea*1 + lemon*5)
order3 = (coffee*2 + tea*3 + lemon*1)
print(order1+order2+order3)
###Output
_____no_output_____
###Markdown
Example : 11 A race runs for 5 laps. Use a while loop to count the laps; print which lap it is on every iteration, and once 5 laps are done finish with a closing message.
###Code
count = 0
while count <5:
count =count +1
print(count, "번째 바퀴입니다.")
print('경주가 종료되었습니다!')
###Output
_____no_output_____
###Markdown
Example : 12 Guess the right answer: what is the capital of the USA? Make the user answer from the choices 런던 (London), 오타와 (Ottawa), 파리 (Paris), 뉴욕 (New York). If a wrong answer is given, say which country's capital that city is.
###Code
while True:
answer = input('런던,오타와,파리,뉴욕 중 미국이 수도는 어디일까요?')
if answer == '뉴욕':
print('정답입니다. 뉴욕은 미국의 수도 입니다')
break
elif answer == '오타와':
print('오타와는 캐나다의 수도 입니다')
elif answer == '파리':
print('파리는 프랑스의 수도 입니다')
elif answer == '런던':
print('런던은 영국의 수도 입니다')
else:
print('보기에서 골라주세요')
###Output
_____no_output_____
###Markdown
Example : 13 Exchange the items. Cheolsu bought fluorescent lamps (형광등) at the store, but LED bulbs (LED 전구) are more energy efficient, so he wants to exchange them. Replace the 3 fluorescent lamps with 3 LED bulbs: 형광등, 형광등, 형광등 ==> LED 전구, LED 전구, LED 전구
###Code
전구 = ['형광등', '형광등', '형광등']
for i in range(3):
전구[i] = 'LED 전구'
print(전구)
###Output
_____no_output_____
###Markdown
Example : 14 Repetition: greet the 10 monkeys at the zoo. Use a for loop to write code that greets all 10 at once.
###Code
for num in range(10):
print ('안녕 원숭이', num)
my_str ='My name is %s' % 'Lion'
print(my_str)
'%d %d' % (1,2)
'%f %f' % (1,2)
###Output
_____no_output_____
###Markdown
print Options
###Code
print('집단지성', end='/')
print('집단지성', end='통합하자')
###Output
_____no_output_____ |
Week 12 - Abstraction Practice.ipynb | ###Markdown
What is Abstraction in OOP Abstraction is the concept in object-oriented programming of "showing" only essential attributes and "hiding" unnecessary information. The main purpose of abstraction is to hide unnecessary details from the users: it selects data from a larger pool to show only the relevant details of the object, which helps reduce programming complexity and effort. It is one of the most important concepts of OOP. Abstraction in Python Abstraction in Python means hiding the implementation logic from the client while exposing only the particular application. It hides the irrelevant data specified in the project, reducing complexity and improving efficiency. Abstraction is achieved in Python using abstract classes and their methods. What is an Abstract Class? An abstract class is a type of class in OOP that declares one or more abstract methods. These classes can have abstract methods as well as concrete methods; a normal class cannot have abstract methods. An abstract class is a class that contains at least one abstract method. What are Abstract Methods? An abstract method is a method that has just the method definition but no implementation. A method without a body is known as an abstract method. It must be declared in an abstract class, and it is never final, because the abstract methods must eventually be implemented by the subclasses. When to use Abstract Methods & Abstract Classes? Abstract methods are mostly declared where two or more subclasses do the same thing in different ways, through different implementations; the subclasses extend the same abstract class and offer different implementations of the abstract methods. Abstract classes help describe generic types of behaviour and the object-oriented class hierarchy, while letting the subclasses supply the implementation details. Difference between Abstraction and Encapsulation: abstraction solves issues at the design level, encapsulation at the implementation level; abstraction is about hiding unwanted details while showing the most essential information, encapsulation means binding code and data into a single unit; data abstraction focuses on what the information object must contain, encapsulation hides the internal details (the mechanics) of how an object does something, for security reasons. Advantages of Abstraction: the main benefit of using abstraction in programming is that it allows you to group several related classes as siblings, and it helps to reduce the complexity of the design and implementation process of software. How Abstract Base classes work: By default, Python does not provide abstract classes. Python comes with a module that provides the base for defining Abstract Base Classes (ABC), and that module is named abc. ABC works by decorating methods of the base class as abstract and then registering concrete classes as implementations of the abstract base. A method becomes abstract when decorated with the keyword @abstractmethod. Syntax Abstract class syntax is declared as:
###Code
from abc import ABC
# declaration
class classname(ABC):
def pau(self):
pass
###Output
_____no_output_____
###Markdown
Abstract method Syntax is declared as
###Code
def abstractmethod_name():
pass
###Output
_____no_output_____
###Markdown
Few things to be noted in Python: In Python, an abstract class can hold both abstract methods and normal methods. Second, an abstract class is not instantiated (no objects are created from it directly). Third, the derived classes provide the implementations of the methods declared in the abstract base classes.
###Code
from ABC import abc
# here abc and ABC are case-sensitive; swapping them like this raises ModuleNotFoundError, since there is no module named ABC
###Output
_____no_output_____
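###Markdown
As a quick illustration of the note above that an abstract class is not instantiated, a minimal sketch (the class name `Demo` is illustrative only): creating an instance of a class that still has an unimplemented @abstractmethod raises a TypeError.
###Code
from abc import ABC, abstractmethod
class Demo(ABC):
    @abstractmethod
    def action(self):
        pass
try:
    Demo()  # instantiating an abstract class raises TypeError
except TypeError as err:
    print(err)
###Output
_____no_output_____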
###Markdown
Code I:
###Code
from abc import ABC, abstractmethod
# Abstract Class
class product(ABC):
    # Normal Method
    def item_list(self, rate):
        print("amount submitted : ", rate)
    # Abstract Method
    @abstractmethod
    def product(self, rate):
        pass
###Output
_____no_output_____
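###Markdown
A possible concrete subclass for the `product` base class above, sketched to show how the abstract `product` method gets an implementation; the `mobile` class and the sample rates are illustrative only.
###Code
class mobile(product):
    # concrete implementation of the abstract method declared in product
    def product(self, rate):
        print("product rate : ", rate)
m = mobile()
m.item_list(100)   # concrete method inherited from the abstract base class
m.product(200)     # overriding implementation of the abstract method
###Output
_____no_output_____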
###Markdown
Code II: A program to generate the volume of geometric shapes
###Code
from abc import ABC
class geometric(ABC):
def volume(self):
#abstract method
pass
class Rect(geometric):
length = 4
width = 6
height = 6
def volume(self):
return self.length * self.width *self.height
class Sphere(geometric):
radius = 8
def volume(self):
return 1.3 * 3.14 * self.radius * self.radius *self.radius
class Cube(geometric):
Edge = 5
def volume(self):
return self.Edge * self.Edge *self.Edge
class Triangle_3D:
length = 5
width = 4
def volume(self):
return 0.5 * self.length * self.width
rr = Rect()
ss = Sphere()
cc = Cube()
tt = Triangle_3D()
print("Volume of a rectangle:", rr.volume())
print("Volume of a circle:", ss.volume())
print("Volume of a square:", cc.volume())
print("Volume of a triangle:", tt.volume())
###Output
Volume of a rectangle: 144
Volume of a circle: 2089.9840000000004
Volume of a square: 125
Volume of a triangle: 10.0
###Markdown
Code III: A program to generate different invoices
###Code
from abc import ABC, abstractmethod
class Bill(ABC):
def final_bill(self, pay):
print('Purchase of the product: ', pay)
@abstractmethod
def Invoice(self, pay):
pass
class Paycheque(Bill):
def Invoice(self, pay):
print('paycheque of: ', pay)
class CardPayment(Bill):
def Invoice(self, pay):
print('pay through card of: ', pay)
aa = Paycheque()
aa.Invoice(6500)
aa.final_bill(6500)
print(isinstance(aa, Paycheque))
aa = CardPayment()
aa.Invoice(2600)
aa.final_bill(2600)
print(isinstance(aa, CardPayment))
###Output
_____no_output_____
###Markdown
Code IV: Python program showing abstract base class work
###Code
from abc import ABC, abstractmethod
class Animal(ABC):
@abstractmethod
def move(self):
pass
class Human(Animal):
def move(self):
print("I can walk and run")
class Snake(Animal):
def move(self):
print("I can crawl")
class Dog(Animal):
def move(self):
print("I can bark")
class Lion(Animal):
def move(self):
print("I can roar")
# Object Instantiation
r = Human()
r.move()
k = Snake()
k.move()
d = Dog()
d.move()
m = Lion()
m.move()
###Output
I can walk and run
I can crawl
I can bark
I can roar
###Markdown
Concrete Methods in Abstract Base Classes : Concrete (normal) classes contain only concrete (normal) methods whereas abstract classes may contain both concrete methods and abstract methods. The concrete class provides an implementation of abstract methods, the abstract base class can also provide an implementation by invoking the methods via super(). Code V:Python program invoking a method using super()
###Code
from abc import ABC, abstractmethod
class R(ABC):
def rk(self):
print("Abstract Base Class")
class K(R):
def rk(self):
super().rk()
print("subclass")
# Object instantiation
r = K()
r.rk()
###Output
Abstract Base Class
subclass
###Markdown
Code VI:
###Code
from abc import ABC, abstractmethod
class Bank(ABC):
    def branch(self, Naira):
        print("Fees submitted : ", Naira)
    @abstractmethod
    def Bank(Naira):
        pass
class private(Bank):
    def Bank(Naira):
        print("Total Naira Value here: ", Naira)
class public(Bank):
    def Bank(Naira):
        print("Total Naira Value here:", Naira)
private.Bank(5000)
public.Bank(2000)
a = public()
#a.branch(3500)
###Output
_____no_output_____
###Markdown
Class Project I Develop a python OOP program that creates an abstract base class called coup_de_ecriva. The base class will have one abstract method called Fan_Page and four subclassses namely; FC_Cirok, Madiba_FC, Blue_Jay_FC and TSG_Walker. The program will receive as input the name of the club the user supports and instantiate an object that will invoke the Fan_Page method in the subclass that prints Welcome to "club name".Hint:The subclasses will use Single Inheritance to inherit the abstract base class.
###Code
from abc import ABC, abstractmethod
class coup_de_escriva(ABC):
@abstractmethod
def Fan_page(self):
pass
class FC_Cirok(coup_de_escriva):
def Fan_page(self):
print(str(input("Enter your name")))
print(str(input("Which club do you support?")))
print("WELCOME TO CIROK FC!")
class Madiba_FC(coup_de_escriva):
def Fan_page(self):
print(str(input("Enter your name")))
print(str(input("Which club do you support?")))
print("WELCOME TO MADIBA FC!")
class Blue_Jay_FC(coup_de_escriva):
def Fan_page(self):
print(str(input("Enter your name")))
print(str(input("Which club do you support?")))
print("WELCOME TO THE BLUES!")
class TSG_Walkers(coup_de_escriva):
def Fan_page(self):
print(str(input("Enter your name")))
print(str(input("Which club do you support?")))
print("WELCOME TO TSG WALKERS FC!")
a = FC_Cirok()
a.Fan_page()
b = Madiba_FC()
b.Fan_page()
c = Blue_Jay_FC()
c.Fan_page()
d = TSG_Walkers()
d.Fan_page()
###Output
Enter your name Chima
Chima
Which club do you support? Cirok
Cirok
WELCOME TO CIROK FC!
Enter your name Toju
Toju
Which club do you support? Madiba
Madiba
WELCOME TO MADIBA FC!
Enter your name Daniel
Daniel
Which club do you support? Bluejays
Bluejays
WELCOME TO THE BLUES!
Enter your name Murewa
Murewa
Which club do you support? TSG
TSG
WELCOME TO TSG WALKERS FC!
###Markdown
Class Project II The Service Unit of PAU has contacted you to develop a program to manage some of the External Food Vendors. With your knowledge in python OOP develop a program to manage the PAU External Food Vendors. The program receives as input the vendor of interest and displays the menu of that vendor. The external vendors are Faith Hostel, Cooperative Hostel, and Student Centre. Find below the menus: Cooperative Cafeteria (Main Meal, Price in N): Jollof Rice and Stew (N200); White Rice and Stew (N200); Fried Rice (N200); Salad (N100); Plantain (N100). Faith Hostel Cafeteria: Fried Rice (N400); White Rice and Stew (N400); Jollof Rice (N400); Beans (N200); Chicken (N1000). Student Centre Cafeteria: Chicken Fried Rice (N800); Pomo Sauce (N300); Spaghetti Jollof (N500); Amala/Ewedu (N500); Semo with Efo riro Soup (N500). Hints: The abstract base class is called External_Vendors(). The abstract method is called menu(). The subclasses (the different vendors) will inherit the abstract base class. Each subclass will have a normal method called menu().
###Code
from abc import ABC, abstractmethod
class External_Vendors(ABC):
@abstractmethod
def menu(self):
pass
class Cooperative_cafeteria(External_Vendors):
def menu(self):
print(str(input("Which external vendor would you prefer?")))
print("Menu ; Jollof Rice and Stew, White Rice and Stew, Fried Rice, Salad, Platain")
class Faith_Hostel_Cafeteria(External_Vendors):
def menu(self):
print(str(input("Which external vendor would you prefer?")))
print("Menu ; Jollof Rice , White Rice and Stew, Fried Rice, Beans, Chicken")
class Student_centre_cafeteria(External_Vendors):
def menu(self):
print(str(input("Which external vendor would you prefer?")))
print("Menu ; Pomo sauce, Chicken Fried Rice, Spaghetti Jollof, Amala/Ewedu, Semo with Efo riro soup")
a = Cooperative_cafeteria()
a.menu()
b = Faith_Hostel_Cafeteria()
b.menu()
c = Student_centre_cafeteria()
c.menu()
###Output
Which external vendor would you prefer? Cooperative
Cooperative
Menu ; Jollof Rice and Stew, White Rice and Stew, Fried Rice, Salad, Platain
Which external vendor would you prefer? Faith
Faith
Menu ; Jollof Rice , White Rice and Stew, Fried Rice, Beans, Chicken
Which external vendor would you prefer? Students Centre
Students Centre
Menu ; Pomo sauce, Chicken Fried Rice, Spaghetti Jollof, Amala/Ewedu, Semo with Efo riro soup
###Markdown
What is Abstraction in OOP Abstraction is the concept in object-oriented programming of "showing" only essential attributes and "hiding" unnecessary information. The main purpose of abstraction is to hide unnecessary details from the users: it selects data from a larger pool to show only the relevant details of the object, which helps reduce programming complexity and effort. It is one of the most important concepts of OOP. Abstraction in Python Abstraction in Python means hiding the implementation logic from the client while exposing only the particular application. It hides the irrelevant data specified in the project, reducing complexity and improving efficiency. Abstraction is achieved in Python using abstract classes and their methods. What is an Abstract Class? An abstract class is a type of class in OOP that declares one or more abstract methods. These classes can have abstract methods as well as concrete methods; a normal class cannot have abstract methods. An abstract class is a class that contains at least one abstract method. What are Abstract Methods? An abstract method is a method that has just the method definition but no implementation. A method without a body is known as an abstract method. It must be declared in an abstract class, and it is never final, because the abstract methods must eventually be implemented by the subclasses. When to use Abstract Methods & Abstract Classes? Abstract methods are mostly declared where two or more subclasses do the same thing in different ways, through different implementations; the subclasses extend the same abstract class and offer different implementations of the abstract methods. Abstract classes help describe generic types of behaviour and the object-oriented class hierarchy, while letting the subclasses supply the implementation details. Difference between Abstraction and Encapsulation: abstraction solves issues at the design level, encapsulation at the implementation level; abstraction is about hiding unwanted details while showing the most essential information, encapsulation means binding code and data into a single unit; data abstraction focuses on what the information object must contain, encapsulation hides the internal details (the mechanics) of how an object does something, for security reasons. Advantages of Abstraction: the main benefit of using abstraction in programming is that it allows you to group several related classes as siblings, and it helps to reduce the complexity of the design and implementation process of software. How Abstract Base classes work: By default, Python does not provide abstract classes. Python comes with a module that provides the base for defining Abstract Base Classes (ABC), and that module is named abc. ABC works by decorating methods of the base class as abstract and then registering concrete classes as implementations of the abstract base. A method becomes abstract when decorated with the keyword @abstractmethod. Syntax Abstract class syntax is declared as:
###Code
from abc import ABC
# declaration
class classname(ABC):
pass
###Output
_____no_output_____
###Markdown
Abstract method Syntax is declared as
###Code
from abc import ABC
def abstractmethod_name():
pass
###Output
_____no_output_____
###Markdown
Few things to be noted in Python: In Python, an abstract class can hold both abstract methods and normal methods. Second, an abstract class is not instantiated (no objects are created from it directly). Third, the derived classes provide the implementations of the methods declared in the abstract base classes.
###Code
from abc import ABC
# here abc and ABC are case-sensitive; writing 'from ABC import abc' instead would raise ModuleNotFoundError
###Output
_____no_output_____
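###Markdown
Building on the notes above, a small sketch (all class names here are illustrative): a subclass that does not override the abstract method remains abstract and cannot be instantiated either, while a subclass that implements it can.
###Code
from abc import ABC, abstractmethod
class Base(ABC):
    @abstractmethod
    def run(self):
        pass
class Incomplete(Base):
    # does not override run(), so it is still abstract
    pass
class Complete(Base):
    def run(self):
        print("run() implemented in Complete")
try:
    Incomplete()
except TypeError as err:
    print(err)
Complete().run()
###Output
_____no_output_____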
###Markdown
Code I:
###Code
from abc import ABC, abstractmethod
# Abstract Class
class product(ABC):
# Normal Method
def item_list(self, rate):
print("amount submitted : ",rate)
# Abstract Method
@abstractmethod
def product(self,rate):
pass
###Output
_____no_output_____
###Markdown
Code II: A program to generate the volume of geometric shapes
###Code
from abc import ABC
class geometric(ABC):
def volume(self):
#abstract method
pass
class Rect(geometric):
length = 4
width = 6
height = 6
def volume(self):
return self.length * self.width *self.height
class Sphere(geometric):
radius = 8
def volume(self):
return 1.3 * 3.14 * self.radius * self.radius *self.radius
class Cube(geometric):
Edge = 5
def volume(self):
return self.Edge * self.Edge *self.Edge
class Triangle_3D:
length = 5
width = 4
def volume(self):
return 0.5 * self.length * self.width
rr = Rect()
ss = Sphere()
cc = Cube()
tt = Triangle_3D()
print("Volume of a rectangle:", rr.volume())
print("Volume of a circle:", ss.volume())
print("Volume of a square:", cc.volume())
print("Volume of a triangle:", tt.volume())
###Output
Volume of a rectangle: 144
Volume of a circle: 2089.9840000000004
Volume of a square: 125
Volume of a triangle: 10.0
###Markdown
Code III: A program to generate different invoices
###Code
from abc import ABC, abstractmethod
class Bill(ABC):
def final_bill(self, pay):
print('Purchase of the product: ', pay)
@abstractmethod
def Invoice(self, pay):
pass
class Paycheque(Bill):
def Invoice(self, pay):
print('paycheque of: ', pay)
class CardPayment(Bill):
def Invoice(self, pay):
print('pay through card of: ', pay)
aa = Paycheque()
aa.Invoice(6500)
aa.final_bill(6500)
print(isinstance(aa,Paycheque))
aa = CardPayment()
aa.Invoice(2600)
aa.final_bill(2600)
print(isinstance(aa,CardPayment))
###Output
paycheque of: 6500
Purchase of the product: 6500
True
pay through card of: 2600
Purchase of the product: 2600
True
###Markdown
Code IV: Python program showing abstract base class work Concrete Methods in Abstract Base Classes : Concrete (normal) classes contain only concrete (normal) methods whereas abstract classes may contain both concrete methods and abstract methods. The concrete class provides an implementation of abstract methods, the abstract base class can also provide an implementation by invoking the methods via super().
###Code
from abc import ABC, abstractmethod
class Animal(ABC):
@abstractmethod
def move(self):
pass
class Human(Animal):
def move(self):
print("I can walk and run")
class Snake(Animal):
def move(self):
print("I can crawl")
class Dog(Animal):
def move(self):
print("I can bark")
class Lion(Animal):
def move(self):
print("I can roar")
# Object Instantiation
R = Human()
R.move()
K = Snake()
K.move()
R = Dog()
R.move()
K = Lion()
K.move()
###Output
I can walk and run
I can crawl
I can bark
I can roar
###Markdown
Code V:Python program invoking a method using super()
###Code
from abc import ABC, abstractmethod
class R(ABC):
def rk(self):
print("Abstract Base Class")
class K(R):
def rk(self):
super().rk()
print("subclass")
# Object instantiation
r = K()
r.rk()
###Output
Abstract Base Class
subclass
###Markdown
Code VI: Class Project I
###Code
from abc import ABC, abstractmethod
class Bank(ABC):
def branch(self, Naira):
print("Fees submitted : ",Naira)
@abstractmethod
def Bank(Naira):
pass
class private(Bank):
def Bank(Naira):
print("Total Naira Value here: ",Naira)
class public(Bank):
def Bank(Naira):
print("Total Naira Value here:",Naira)
private.Bank(5000)
public.Bank(2000)
a = public()
#a.branch(3500)
###Output
Total Naira Value here: 5000
Total Naira Value here: 2000
###Markdown
Develop a python OOP program that creates an abstract base class called coup_de_ecriva. The base class will have one abstract method called Fan_Page and four subclassses namely; FC_Cirok, Madiba_FC, Blue_Jay_FC and TSG_Walker. The program will receive as input the name of the club the user supports and instantiate an object that will invoke the Fan_Page method in the subclass that prints Welcome to "club name".Hint:The subclasses will use Single Inheritance to inherit the abstract base class.
###Code
from abc import ABC
class coup_de_escriva(ABC):
def fan_page(self):
pass
class fc_cirok(coup_de_escriva):
def fan_page(self):
print("Welcome to fc cirok")
class madiba_fc(coup_de_escriva):
def fan_page(self):
print("Welcome to madiba fc")
class blue_jays_fc(coup_de_escriva):
def fan_page(self):
print("Welcome to blue jays fc")
class tsg_walkers(coup_de_escriva):
def fan_page(self):
print("Welcome to tsg walkers")
x = input("What is the name of the club you support? ")
if x == "fc cirok":
c = fc_cirok()
c.fan_page()
elif x == "madiba fc":
m = madiba_fc()
m.fan_page()
elif x == "blue jays fc":
b = blue_jay_fc()
b.fan_page()
elif x == "tsg walkers":
t = tsg_walker()
t.fan_page()
else:
print("Doesn't exist")
###Output
What is the name of the club you support?f
Doesn't exist
###Markdown
Class Project II The Service Unit of PAU has contacted you to develop a program to manage some of the External Food Vendors. With your knowledge in python OOP develop a program to manage the PAU External Food Vendors. The program receives as input the vendor of interest and displays the menu of that vendor. The external vendors are Faith Hostel, Cooperative Hostel, and Student Centre. Find below the menus: Cooperative Cafeteria (Main Meal, Price in N): Jollof Rice and Stew (N200); White Rice and Stew (N200); Fried Rice (N200); Salad (N100); Plantain (N100). Faith Hostel Cafeteria: Fried Rice (N400); White Rice and Stew (N400); Jollof Rice (N400); Beans (N200); Chicken (N1000). Student Centre Cafeteria: Chicken Fried Rice (N800); Pomo Sauce (N300); Spaghetti Jollof (N500); Amala/Ewedu (N500); Semo with Efo riro Soup (N500). Hints: The abstract base class is called External_Vendors(). The abstract method is called menu(). The subclasses (the different vendors) will inherit the abstract base class. Each subclass will have a normal method called menu().
###Code
from abc import ABC
import pandas as pd
class External_Vendors(ABC):
def menu(self):
pass
class cooperative(External_Vendors):
def menu(self):
x = {'Main meal':['Jollof Rice and Stew', 'White Rice and Stew', 'Fried Rice', 'Salad','Plantain'],
'Price':[200, 200, 200, 100, 100]}
df = pd.DataFrame(x)
print(df)
class Faith(External_Vendors):
def menu(self):
y = {'Main meal':['Fried Rice', 'White Rice and Stew','Jollof Rice','Beans','Chicken'],
'Price':[400, 400, 400, 200, 1000]}
df = pd.DataFrame(y)
print(df)
class student(External_Vendors):
def menu(self):
z = {'Main meal':['Chicken Fried Rice', 'Pomo Sauce','Spaghetti Jollof','Amala/Ewedu','Semo with Eforiro Soup'],
'Price':[800, 300, 500, 500, 500]}
df = pd.DataFrame(z)
print(df)
a = input("Your preferred vendor? ")
if a == "Cooperative":
c = cooperative()
c.menu()
elif a == "Faith":
f = Faith()
f.menu()
elif a == "Student":
s = student()
s.menu()
else:
print("Doesn't exist")
###Output
Your preferred vendor?Faith
Main meal Price
0 Fried Rice 400
1 White Rice and Stew 400
2 Jollof Rice 400
3 Beans 200
4 Chicken 1000
|
pipeline/misc/rds_to_vcf.ipynb | ###Markdown
Summary statistics in VCF format. Modified from the create_vcf function of the mrcieu/gwasvcf package, to transform the mash output matrices from RDS format into VCF files, with the effect size set to the posterior coefficient and the SE set to the posterior SD (or 1 when it is absent), stored in the ES and SE genotype fields. Input: a collection of gene-level RDS files, each a matrix of mash output with colnames = studies and rownames = snps, where the snps are in the form chr:pos_alt_ref, plus a list of the aforementioned MASH outputs. Output: a collection of gene-level VCF files with corresponding indexes, plus a list of the aforementioned VCFs. Required R packages: dplyr, readr, VariantAnnotation. The output format of this workflow follows the specification at https://github.com/MRCIEU/gwas-vcf-specification, with each study stored as a sample column.
###Code
[global]
import glob
# single column file each line is the data filename
parameter: analysis_units = path
# Path to data directory
parameter: data_dir = "/"
# data file suffix
parameter: data_suffix = ""
# Path to work directory where output locates
parameter: wd = path("./output")
# An identifier for your run of analysis
parameter: name = ""
regions = [x.replace("\"", "" ).strip().split() for x in open(analysis_units).readlines() if x.strip() and not x.strip().startswith('#')]
genes = regions
# Containers that contains the necessary packages
parameter: container = 'gaow/twas'
[rds_to_vcf_1]
input: genes, group_by = 1
output: vcf = f'{wd:a}/mash_vcf/{_input:bn}.vcf.bgz'
task: trunk_workers = 1, walltime = '1h', trunk_size = 1, mem = '10G', cores = 1, tags = f'{_output:bn}'
R: expand = '$[ ]', stdout = f"{_output[0]:nn}.stdout", stderr = f"{_output[0]:nn}.stderr"
library("dplyr")
library("stringr")
library("readr")
library("purrr")
## Define a wrapper, modified from the gwasvcf packages, to create the vcf of needed.
create_vcf = function (chrom, pos, nea, ea, snp = NULL, ea_af = NULL, effect = NULL,
se = NULL, pval = NULL, n = NULL, ncase = NULL, name = NULL)
{
stopifnot(length(chrom) == length(pos))
if (is.null(snp)) {
snp <- paste0(chrom, ":", pos)
}
snp <- paste0(chrom, ":", pos)
nsnp <- length(chrom)
gen <- list()
## Setupt data content for each sample column
if (!is.null(ea_af))
gen[["AF"]] <- matrix(ea_af, nsnp)
if (!is.null(effect))
gen[["ES"]] <- matrix(effect, nsnp)
if (!is.null(se))
gen[["SE"]] <- matrix(se, nsnp)
if (!is.null(pval))
gen[["LP"]] <- matrix(-log10(pval), nsnp)
if (!is.null(n))
gen[["SS"]] <- matrix(n, nsnp)
if (!is.null(ncase))
gen[["NC"]] <- matrix(ncase, nsnp)
gen <- S4Vectors::SimpleList(gen)
## Setup snps info for the fix columns
gr <- GenomicRanges::GRanges(chrom, IRanges::IRanges(start = pos,
end = pos + pmax(nchar(nea), nchar(ea)) - 1, names = snp))
## Setup meta informations
coldata <- S4Vectors::DataFrame(Studies = name, row.names = name)
hdr <- VariantAnnotation::VCFHeader(header = IRanges::DataFrameList(fileformat = S4Vectors::DataFrame(Value = "VCFv4.2",
row.names = "fileformat")), sample = name)
VariantAnnotation::geno(hdr) <- S4Vectors::DataFrame(Number = c("A",
"A", "A", "A", "A", "A"), Type = c("Float", "Float",
"Float", "Float", "Float", "Float"), Description = c("Effect size estimate relative to the alternative allele",
"Standard error of effect size estimate", "-log10 p-value for effect estimate",
"Alternate allele frequency in the association study",
"Sample size used to estimate genetic effect", "Number of cases used to estimate genetic effect"),
row.names = c("ES", "SE", "LP", "AF", "SS", "NC"))
## Save only the meta information in the sample columns
VariantAnnotation::geno(hdr) <- subset(VariantAnnotation::geno(hdr),
rownames(VariantAnnotation::geno(hdr)) %in% names(gen))
## Save VCF values
vcf <- VariantAnnotation::VCF(rowRanges = gr, colData = coldata,
exptData = list(header = hdr), geno = gen)
VariantAnnotation::alt(vcf) <- Biostrings::DNAStringSetList(as.list(ea))
VariantAnnotation::ref(vcf) <- Biostrings::DNAStringSet(nea)
## Write fixed values
VariantAnnotation::fixed(vcf)$FILTER <- "PASS"
return(sort(vcf))
}
input = readRDS($[_input:r])
input_effect = input$PosteriorMean
if(is.null(input$PosteriorSD)){
input$PosteriorSD = matrix(1,nrow = nrow(input_effect),ncol = ncol(input_effect) )
}
input_se = input$PosteriorSD
df = tibble(snps = input$snps)
df = df%>%mutate( chr = map_dbl(snps,~str_remove(read.table(text = .x,sep = ":",as.is = T)$V1, "chr")%>%as.numeric),
pos_alt_ref = map_chr(snps,~read.table(text = .x,sep = ":",as.is = TRUE)$V2),
pos = map_dbl(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE)$V1),
alt = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V2),
ref = map_chr(pos_alt_ref,~read.table(text = .x,sep = "_",as.is = TRUE, colClass = "character")$V3))
vcf = create_vcf(
chrom = df$chr,
pos = df$pos,
ea = df$alt,
nea = df$ref,
effect = input_effect ,
se = input_se,
name = colnames(input_effect))
VariantAnnotation::writeVcf(vcf,$[_output:nr],index = TRUE)
[rds_to_vcf_2]
input: group_by = "all"
output: vcf_list = f'{_input[0]:d}/vcf_output_list.txt'
bash: expand = '${ }', stdout = f"{_output[0]:nn}.stdout", stderr = f"{_output[0]:nn}.stderr"
cd ${_input[0]:d}
ls *.vcf.bgz > ${_output}
###Output
_____no_output_____ |
cdl/cbpdndl_parcns_clr.ipynb | ###Markdown
Convolutional Dictionary Learning=================================This example demonstrates the use of [prlcnscdl.ConvBPDNDictLearn_Consensus](http://sporco.rtfd.org/en/latest/modules/sporco.dictlrn.prlcnscdl.htmlsporco.dictlrn.prlcnscdl.ConvBPDNDictLearn_Consensus) for learning a convolutional dictionary from a set of colour training images [[51]](http://sporco.rtfd.org/en/latest/zreferences.htmlid54). The dictionary learning algorithm is based on the ADMM consensus dictionary update [[1]](http://sporco.rtfd.org/en/latest/zreferences.htmlid44) [[26]](http://sporco.rtfd.org/en/latest/zreferences.htmlid25).
###Code
from __future__ import print_function
from builtins import input
import pyfftw # See https://github.com/pyFFTW/pyFFTW/issues/40
import numpy as np
from sporco.dictlrn import prlcnscdl
from sporco import util
from sporco import signal
from sporco import plot
plot.config_notebook_plotting()
###Output
_____no_output_____
###Markdown
Load training images.
###Code
exim = util.ExampleImages(scaled=True, zoom=0.25)
S1 = exim.image('barbara.png', idxexp=np.s_[10:522, 100:612])
S2 = exim.image('kodim23.png', idxexp=np.s_[:, 60:572])
S3 = exim.image('monarch.png', idxexp=np.s_[:, 160:672])
S4 = exim.image('sail.png', idxexp=np.s_[:, 210:722])
S5 = exim.image('tulips.png', idxexp=np.s_[:, 30:542])
S = np.stack((S1, S2, S3, S4, S5), axis=3)
###Output
_____no_output_____
###Markdown
Highpass filter training images.
###Code
npd = 16
fltlmbd = 5
sl, sh = signal.tikhonov_filter(S, fltlmbd, npd)
###Output
_____no_output_____
###Markdown
Construct initial dictionary.
###Code
np.random.seed(12345)
D0 = np.random.randn(8, 8, 3, 64)
###Output
_____no_output_____
###Markdown
Set regularization parameter and options for dictionary learning solver.
###Code
lmbda = 0.2
opt = prlcnscdl.ConvBPDNDictLearn_Consensus.Options({'Verbose': True,
'MaxMainIter': 200,
'CBPDN': {'rho': 50.0*lmbda + 0.5},
'CCMOD': {'rho': 1.0, 'ZeroMean': True}})
###Output
_____no_output_____
###Markdown
Create solver object and solve.
###Code
d = prlcnscdl.ConvBPDNDictLearn_Consensus(D0, sh, lmbda, opt)
D1 = d.solve()
print("ConvBPDNDictLearn_Consensus solve time: %.2fs" %
d.timer.elapsed('solve'))
###Output
Itn Fnc DFid Regℓ1
----------------------------------
###Markdown
Display initial and final dictionaries.
###Code
D1 = D1.squeeze()
fig = plot.figure(figsize=(14, 7))
plot.subplot(1, 2, 1)
plot.imview(util.tiledict(D0), title='D0', fig=fig)
plot.subplot(1, 2, 2)
plot.imview(util.tiledict(D1), title='D1', fig=fig)
fig.show()
###Output
_____no_output_____
###Markdown
Get iterations statistics from solver object and plot functional value
###Code
its = d.getitstat()
plot.plot(its.ObjFun, xlbl='Iterations', ylbl='Functional')
###Output
_____no_output_____ |
examples/Supply chain physics.ipynb | ###Markdown
Supply chain physics *This notebook illustrates methods to investigate the physics of a supply chain* (Alessandro Tufano, 2020) Import packages
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Generate empirical demand and production We define a yearly sample of production quantity $x$, and demand quantity $d$
###Code
number_of_sample = 365 #days
mu_production = 105 #units per day
sigma_production = 1 # units per day
mu_demand = 100 #units per day
sigma_demand = 0.3 # units per day
x = np.random.normal(mu_production,sigma_production,number_of_sample)
#d = np.random.normal(mu_demand,sigma_demand,number_of_sample)
# NOTE: brownian() is defined further down in this notebook; run that cell before this one.
d = brownian(x0=mu_demand, n=365, dt=1, delta=sigma_demand, out=None) #demand stochastic process
# represent demand
plt.hist(d,color='orange')
plt.hist(x,color='skyblue')
plt.title('Production and Demand histogram')
plt.xlabel('Daily rate')
plt.ylabel('Frequency')
plt.legend(['Demand','Production'])
x = np.array(x)
d = np.array(d)
plt.figure()
plt.plot(d)
plt.title("Demand curve $d$")
plt.xlabel('Time in days')
plt.ylabel('Numbar of parts')
plt.figure()
plt.plot(x)
plt.title("Production curve $x$")
plt.xlabel('Time in days')
plt.ylabel('Number of parts')
###Output
_____no_output_____
###Markdown
Define the inventory function $q$ The empirical inventory function $q$ is defined as the difference between production and demand, plus the residual inventory: $q_t = q_{t-1} + x_t - d_t$
###Code
q = [mu_production] #initial inventory with production mean value
for i in range(0,len(d)):
inventory_value = q[i] + x[i] - d[i]
if inventory_value <0 :
inventory_value=0
q.append(inventory_value)
plt.plot(q)
plt.xlabel('days')
plt.ylabel('Inventory quantity $q$')
plt.title('Inventory function $q$')
q = np.array(q)
###Output
_____no_output_____
###Markdown
Define pull and push forces (the momentum $p=\dot{q}$) By using continuous notation we obtain the derivative $\dot{q}=p=x-d$. The derivative of the inventory represents the *momentum* of the supply chain, i.e. the speed at which the inventory value goes up (production) and down (demand). We use the term **productivity** to identify the momentum $p$. The forces changing the value of the productivity are called **movements** $\dot{p}$.
###Code
p1 = [q[i]-q[i-1] for i in range(1,len(q))]
p2 = [x[i]-d[i] for i in range(1,len(d))]
plt.plot(p1)
plt.plot(p2)
plt.xlabel('days')
plt.ylabel('Value')
plt.title('Momentum function $p$')
p = np.array(p1)  # use the empirical momentum p1 (the inventory differences); p2 is the same quantity computed directly from x - d
###Output
_____no_output_____
###Markdown
Define a linear potential $V(q)$ we introduce a linear potential to describe the amount of *energy* related with a given quantity of the inventory $q$.
###Code
F0 = 0.1
#eta = 1.2
#lam = mu_demand
#F0=eta*lam
print(F0)
V_q = -F0*q
V_q = V_q[0:-1]
###Output
_____no_output_____
###Markdown
Define the energy conservation function using the Lagrangian and the Hamiltonian We use the Hamiltonian to describe the energy conservation: with the linear potential above, $H(q,p) = \frac{1}{2}p^2 + V(q) = \frac{1}{2}p^2 - F_0 q$, where the momentum is $p=\dot{q}$.
###Code
H = (p**2)/2 - F0*q[0:-1]
plt.plot(H)
plt.xlabel('days')
plt.ylabel('value')
plt.title('Function $H$')
###Output
_____no_output_____
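###Markdown
As a side note (a sketch, using the quadratic kinetic term $\frac{1}{2}p^2$ assumed above), Hamilton's equations give $\dot{q} = \partial H / \partial p = p$ and $\dot{p} = -\partial H / \partial q = F_0$, so the constant force $F_0$ is the drift term that reappears in the control model $\dot{p} = F_0 - \eta p + F_r(t)$ used further below.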
###Markdown
Obtain the inventory $q$, given $H$
###Code
S_q = [H[i-1] + H[i] for i in range(1,len(H))]
plt.plot(S_q)
plt.xlabel('days')
plt.ylabel('value')
plt.title('Function $S[q]$')
#compare with q
plt.plot(q)
plt.xlabel('days')
plt.ylabel('Inventory quantity $q$')
plt.title('Inventory function $q$')
plt.legend(['Model inventory','Empirical inventory'])
###Output
_____no_output_____
###Markdown
Inventory control Define the Brownian process
###Code
from math import sqrt
from scipy.stats import norm
import numpy as np
def brownian(x0, n, dt, delta, out=None):
"""
Generate an instance of Brownian motion (i.e. the Wiener process):
X(t) = X(0) + N(0, delta**2 * t; 0, t)
where N(a,b; t0, t1) is a normally distributed random variable with mean a and
variance b. The parameters t0 and t1 make explicit the statistical
independence of N on different time intervals; that is, if [t0, t1) and
[t2, t3) are disjoint intervals, then N(a, b; t0, t1) and N(a, b; t2, t3)
are independent.
Written as an iteration scheme,
X(t + dt) = X(t) + N(0, delta**2 * dt; t, t+dt)
If `x0` is an array (or array-like), each value in `x0` is treated as
an initial condition, and the value returned is a numpy array with one
more dimension than `x0`.
Arguments
---------
x0 : float or numpy array (or something that can be converted to a numpy array
using numpy.asarray(x0)).
The initial condition(s) (i.e. position(s)) of the Brownian motion.
n : int
The number of steps to take.
dt : float
The time step.
delta : float
delta determines the "speed" of the Brownian motion. The random variable
of the position at time t, X(t), has a normal distribution whose mean is
the position at time t=0 and whose variance is delta**2*t.
out : numpy array or None
If `out` is not None, it specifies the array in which to put the
result. If `out` is None, a new numpy array is created and returned.
Returns
-------
A numpy array of floats with shape `x0.shape + (n,)`.
Note that the initial value `x0` is not included in the returned array.
"""
x0 = np.asarray(x0)
# For each element of x0, generate a sample of n numbers from a
# normal distribution.
r = norm.rvs(size=x0.shape + (n,), scale=delta*sqrt(dt))
# If `out` was not given, create an output array.
if out is None:
out = np.empty(r.shape)
# This computes the Brownian motion by forming the cumulative sum of
# the random samples.
np.cumsum(r, axis=-1, out=out)
# Add the initial condition.
out += np.expand_dims(x0, axis=-1)
return out
###Output
_____no_output_____
###Markdown
Define the supply chain control model
###Code
# supply chain control model
def supply_chain_control_model(p,beta,eta,F0):
#p is the productivity function defined as the defivative of q
#beta is the diffusion coefficient, i.e. the delta of the Brownian process, the std of the demand can be used
#eta represents the flexibility of the productio. It is the number of days to reach a target inventory
#F0 is the potential
Fr_t = brownian(x0=F0, n=365, dt=1, delta=beta, out=None) #demand stochastic process
p_dot = F0 -eta*p + Fr_t
return p_dot, Fr_t
#identify the sensitivity of the inventory control with different values of eta
for eta in [0.1,1,2,7,30]:
p_dot, Fr_t = supply_chain_control_model(p=p,beta = sigma_demand,eta=eta,F0=F0)
plt.figure()
plt.plot(Fr_t)
plt.plot(p)
plt.plot(p_dot)
plt.title(f"Inventory control with eta={eta}")
plt.legend(['Demand','Productivity','Movements $\dot{p}$'])
p_dot, Fr_t = supply_chain_control_model(p=p,beta = sigma_demand,eta=1,F0=0.9)
p_model = [p_dot[i-1] + p_dot[i] for i in range(1,len(p_dot))]
q_model = [p_model[i-1] + p_model[i] for i in range(1,len(p_model))]
plt.plot(q_model)
plt.plot(p_model)
plt.legend(['$q$: inventory','$p$: productivity'])
p_mean = np.mean(p_model)
p_std = np.std(p_model)
print(f"Movements mean: {p_mean}, std: {p_std}")
q_mean = np.mean(q_model)
q_std = np.std(q_model)
print(f"Inventory mean: {q_mean}, std: {q_std}")
###Output
_____no_output_____ |
variable_exploration/mk/1_Ingestion_Wrangling/0_data_pull.ipynb | ###Markdown
Ingest Data Original data was from Inside Airbnb; to secure the files, they were copied to Google Drive
###Code
import os
from google_drive_downloader import GoogleDriveDownloader as gdd
###Output
_____no_output_____
###Markdown
Pull files from Google Drive listings shared url: https://drive.google.com/file/d/1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO/view?usp=sharing calendar shared url: https://drive.google.com/file/d/1VjlSWEr4vaJHdT9o2OF9N2Ga0X2b22v9/view?usp=sharing reviews shared url: https://drive.google.com/file/d/1_ojDocAs_LtcBLNxDHqH_TSBWjPz-Zme/view?usp=sharing
###Code
# gdd.download_file_from_google_drive(file_id='1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO',
# dest_path='../data/gdrive/listings.csv.gz'
#source_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO'}
source_dest = {'../data/gdrive/listings.csv.gz':'1e8hVygvxFgJo3QgUrzgslsTzWD9-MQUO', '../data/gdrive/calendar.csv.gz':'1VjlSWEr4vaJHdT9o2OF9N2Ga0X2b22v9', '../data/gdrive/reviews.csv.gz':'1_ojDocAs_LtcBLNxDHqH_TSBWjPz-Zme'}
for k, v in source_dest.items():
gdd.download_file_from_google_drive(file_id=v, dest_path=k)
###Output
_____no_output_____ |
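###Markdown
A quick optional sanity check (a sketch, assuming the downloads above completed): confirm that each .csv.gz file exists and is non-empty, reusing the os module imported earlier and the source_dest mapping defined above.
###Code
for dest in source_dest:
    exists = os.path.exists(dest)
    size = os.path.getsize(dest) if exists else 0
    print(dest, 'exists:', exists, 'size (bytes):', size)
###Output
_____no_output_____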
module1_lab_2.ipynb | ###Markdown
**The aim of this lab is to introduce DATA and FEATURES.** Extracting features from data FMML Module 1, Lab 2 Module Coordinator : [email protected]
###Code
! pip install wikipedia
import wikipedia
import nltk
from nltk.util import ngrams
from collections import Counter
import matplotlib.pyplot as plt
import numpy as np
import re
import unicodedata
import plotly.express as px
import pandas as pd
###Output
Collecting wikipedia
Downloading wikipedia-1.4.0.tar.gz (27 kB)
Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (4.6.3)
Requirement already satisfied: requests<3.0.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from wikipedia) (2.23.0)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3.0.0,>=2.0.0->wikipedia) (2021.10.8)
Building wheels for collected packages: wikipedia
Building wheel for wikipedia (setup.py) ... [?25l[?25hdone
Created wheel for wikipedia: filename=wikipedia-1.4.0-py3-none-any.whl size=11695 sha256=1bd6bc02a88d2783cdab8fa979ba87dd77c590685818e4995e42f62ac8980708
Stored in directory: /root/.cache/pip/wheels/15/93/6d/5b2c68b8a64c7a7a04947b4ed6d89fb557dcc6bc27d1d7f3ba
Successfully built wikipedia
Installing collected packages: wikipedia
Successfully installed wikipedia-1.4.0
###Markdown
**What are features?** Features are individual independent variables that act as inputs to your system.
###Code
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from mpl_toolkits.mplot3d.axes3d import get_test_data
# set up a roughly square figure
fig = plt.figure(figsize=plt.figaspect(0.9))
# =============
# 3D surface plot
# =============
# set up the axes with a 3D projection (only one subplot is used here)
ax = fig.add_subplot(1, 2, 1, projection='3d')
# plot a 3D surface like in the example mplot3d/surface3d_demo
X = np.arange(-5, 5, 0.25) # feature 1
Y = np.arange(-5, 5, 0.25) # feature 2
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R) #output
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0.4, antialiased=False)
ax.set_zlim(-1.01, 1.01)
fig.colorbar(surf, shrink=0.5, aspect=10)
###Output
_____no_output_____
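###Markdown
To make this concrete, a feature table usually has one row per sample and one column per feature. The tiny example below is our own illustration; the numbers are made up.
###Code
import pandas as pd

# each row is one sample (a fruit); each column is a feature, plus a label
fruits = pd.DataFrame({
    'diameter_cm': [2.0, 2.2, 7.5, 8.1],      # feature 1
    'weight_g':    [5.0, 6.0, 150.0, 170.0],  # feature 2
    'label':       ['grape', 'grape', 'apple', 'apple']
})
fruits
###Output
_____no_output_____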
###Markdown
**Part 1: Features of text** How do we apply machine learning to text? We can't directly use the text as input to our algorithms. We need to convert it to features. In this notebook, we will explore a simple way of converting text to features. Let us download a few documents off Wikipedia.
###Code
topic1 = 'Giraffe'
topic2 = 'Elephant'
wikipedia.set_lang('en')
eng1 = wikipedia.page(topic1).content
eng2 = wikipedia.page(topic2).content
wikipedia.set_lang('fr')
fr1 = wikipedia.page(topic1).content
fr2 = wikipedia.page(topic2).content
fr2
###Output
_____no_output_____
###Markdown
We need to clean this up a bit. Let us remove all the special characters and keep only the 26 lowercase letters. Note that this will also remove accented characters in French. We are removing all the numbers and spaces as well, so this is not an ideal solution.
###Code
def cleanup(text):
text = text.lower() # make it lowercase
text = re.sub('[^a-z]+', '', text) # keep only the letters a-z (drops spaces, digits and punctuation)
return text
print(eng1)
###Output
The giraffe is a tall African mammal belonging to the genus Giraffa. Specifically, It is an even-toed ungulate. It is the tallest living terrestrial animal and the largest ruminant on Earth. Traditionally, giraffes were thought to be one species, Giraffa camelopardalis, with nine subspecies. Most recently, researchers proposed dividing giraffes into up to eight extant species due to new research into their mitochondrial and nuclear DNA, as well as morphological measurements. Seven other extinct species of Giraffa are known from the fossil record.
The giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its spotted coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits, and flowers of woody plants, primarily acacia species, which they browse at heights most other herbivores cannot reach.
Lions, leopards, spotted hyenas, and African wild dogs may prey upon giraffes. Giraffes live in herds of related females and their offspring, or bachelor herds of unrelated adult males, but are gregarious and may gather in large aggregations. Males establish social hierarchies through "necking,” which are combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear the sole responsibility for raising the young.
The giraffe has intrigued various ancient and modern cultures for its peculiar appearance, and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature (IUCN) as vulnerable to extinction and has been extirpated from many parts of its former range. Giraffes are still found in numerous national parks and game reserves, but estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. More than 1,600 were kept in zoos in 2010.
== Etymology ==
The name "giraffe" has its earliest known origins in the Arabic word zarāfah (زرافة), perhaps borrowed from the animal's Somali name geri. The Arab name is translated as "fast-walker". In early Modern English the spellings jarraf and ziraph were used, probably directly from the Arabic, and in Middle English orafle and gyrfaunt, gerfaunt. The Italian form giraffa arose in the 1590s. The modern English form developed around 1600 from the French girafe."Camelopard" is an archaic English name for the giraffe; it derives from the Ancient Greek καμηλοπάρδαλις (kamēlopárdalis), from κάμηλος (kámēlos), "camel", and πάρδαλις (párdalis), "leopard", referring to its camel-like shape and leopard-like colouration.
== Taxonomy ==
Carl Linnaeus originally classified living giraffes as one species in 1758. He gave it the binomial name Cervus camelopardalis. Morten Thrane Brünnich classified the genus Giraffa in 1762. The species name camelopardalis is from Latin.
=== Evolution ===
The giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. The family was once much more extensive, with over 10 fossil genera described. The elongation of the neck appears to have started early in the giraffe lineage. Comparisons between giraffes and their ancient relatives suggest vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was Canthumeryx which has been dated variously to have lived 25–20 million years ago (mya), 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal was medium-sized, slender and antelope-like. Giraffokeryx appeared 15 mya on the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. Giraffokeryx may have shared a clade with more massively built giraffids like Sivatherium and Bramatherium.Giraffids like Palaeotragus, Shansitherium and Samotherium appeared 14 mya and lived throughout Africa and Eurasia. These animals had bare ossicones and small cranial sinuses and were longer with broader skulls. Paleotragus resembled the okapi and may have been its ancestor. Others find that the okapi lineage diverged earlier, before Giraffokeryx. Samotherium was a particularly important transitional fossil in the giraffe lineage, as its cervical vertebrae were intermediate in length and structure between a modern giraffe and an okapi, and were more vertical than the okapi's. Bohlinia, which first appeared in southeastern Europe and lived 9–7 mya was likely a direct ancestor of the giraffe. Bohlinia closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition.
Bohlinia entered China and northern India in response to climate change. From there, the genus Giraffa evolved and, around 7 mya, entered Africa. Further climate changes caused the extinction of the Asian giraffes, while the African giraffes survived and radiated into several new species. Living giraffes appear to have arisen around 1 mya in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from G. jumae; others find G. gracilis a more likely candidate. G. jumae was larger and more heavily built, while G. gracilis was smaller and more lightly built.The changes from extensive forests to more open habitats, which began 8 mya, are believed to be the main driver for the evolution of giraffes. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations.The giraffe genome is around 2.9 billion base pairs in length compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appear to be responsible for the animal's stature and associated circulatory adaptations.
=== Species and subspecies ===
The International Union for Conservation of Nature (IUCN) currently recognises only one species of giraffe with nine subspecies. During the 1900s, various taxonomies with two or three species were proposed. A 2007 study on the genetics of giraffes using mitochondrial DNA suggested at least six lineages could be recognised as species. A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 to 2 million years.A 2020 study showed that depending on the method chosen, different taxonomic hypotheses recognizing from two to six species can be considered for the genus Giraffa. That study also found that multi-species coalescent methods can lead to taxonomic over-splitting, as those methods delimit geographic structures rather than species. The three-species hypothesis, which recognises G. camelopardalis, G. giraffa, and G. tippelskirchi, is highly supported by phylogenetic analyses and also corroborated by most population genetic and multi-species coalescent analyses. A 2021 whole genome sequencing study suggests the existence of four distinct species and seven subspecies.The cladogram below shows the phylogenetic relationship between the four proposed species and seven subspecies based on the genome analysis. Note the eight lineages correspond to eight of the traditional subspecies in the one species hypothesis. The Rothschild giraffe is subsumed into G. camelopardalis camelopardalis.
The following table compares the different hypotheses for giraffe species. The description column shows the traditional nine subspecies in the one species hypothesis.
The first extinct species to be described was Giraffa sivalensis Falconer and Cautley 1843, a reevaluation of a vertebra that was initially described as a fossil of the living giraffe. While taxonomic opinion may be lacking on some names, the extinct species that have been published include:
Giraffa gracilis
Giraffa jumae
Giraffa priscilla
Giraffa pomeli
Giraffa punjabiensis
Giraffa pygmaea
Giraffa sivalensis
Giraffa stillei
== Appearance and anatomy ==
Fully grown giraffes stand 4.3–5.7 m (14.1–18.7 ft) tall, with males taller than females. The average weight is 1,192 kg (2,628 lb) for an adult male and 828 kg (1,825 lb) for an adult female. Despite its long neck and legs, the giraffe's body is relatively short.: 66 The skin of a giraffe is mostly gray, or tan, and can reach a thickness of 20 mm (0.79 in).: 87 The 80–100 centimetres (31–39 in) long tail ends in a long, dark tuft of hair and is used as a defense against insects.: 94 The coat has dark blotches or patches, which can be orange, chestnut, brown, or nearly black, separated by light hair, usually white or cream coloured. Male giraffes become darker as they age. The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. When standing among trees and bushes, they are hard to see at even a few metres distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves rather than on camouflage, which may be more important for calves. Each individual giraffe has a unique coat pattern. Giraffe calves inherit some coat pattern traits from their mothers, and variation in some spot traits is correlated with neonatal survival. The skin underneath the blotches may serve as windows for thermoregulation, being sites for complex blood vessel systems and large sweat glands.The fur may give the animal chemical defense, as its parasite repellents give it a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because the males have a stronger odour than the females, the odour may also have sexual function.
=== Head ===
Both sexes have prominent horn-like structures called ossicones, formed from ossified cartilage, covered in skin and fused to the skull at the parietal bones. Being vascularised, the ossicones may have a role in thermoregulation, and are used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males end in knobs and tend to be bald on top. Also, a median lump, which is more prominent in males, emerges at the front of the skull. Males develop calcium deposits that form bumps on their skulls as they age. Multiple sinuses lighten a giraffe's skull.: 103 However, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The occipital condyles of the skull allow the animal to tilt its head straight up and grab food on the branches above with the tongue.: 103, 110 Located on both sides of the head, the giraffe's eyes give it good eyesight and a wide field of vision from its great height.: 85, 102 The eye is larger than in other ungulates, with a greater retinal surface area. Giraffes possibly see in colour: 85 and their senses of hearing and smell are sharp. The ears are movable: 95 and the nostrils are slit-shaped, which may be an adaptation against blowing sand.The giraffe's prehensile tongue is about 45 cm (18 in) long. It is black, perhaps to protect against sunburn and is useful for grasping foliage, and delicately removing leaves from branches.: 109–110 The giraffe's upper lip is prehensile and useful when foraging, and is covered in hair to protect against thorns. Papillae cover the tongue and the inside of the mouth. The upper jaw has a hard palate and lacks front teeth. The molars and premolars have a low-crowned, broad surface with an almost square cross-section.: 106
=== Legs, locomotion and posture ===
A giraffe's front and back legs are about the same length. The radius and ulna of the front legs are articulated by the carpus, which, while structurally equivalent to the human wrist, functions as a knee. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The hooves of large male giraffes reach a diameter of 31 cm × 23 cm (12.2 in × 9.1 in).: 98 The rear of each hoof is low, and the fetlock is close to the ground, allowing the foot to provide additional support for the animal's weight. Giraffes lack dewclaws and interdigital glands. The giraffe's pelvis, though relatively short, has an ilium that is outspread at the upper ends.A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The animal relies on the forward and backward motions of its head and neck to maintain balance and the counter momentum while galloping.: 327–29 The giraffe can reach a sprint speed of up to 60 km/h (37 mph), and can sustain 50 km/h (31 mph) for several kilometres. Giraffes would probably not be competent swimmers as their long legs would be highly cumbersome in the water, although they could possibly float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the water's surface.A giraffe rests by lying with its body on top of its folded legs.: 329 To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its front knees and shifts hindquarters onto its back feet. It then moves from kneeling to standing on its front legs and pulls the rest of its body upwards, swinging its head for balance.: 67 If the giraffe wants to bend down to drink, it either spreads its front legs or bends its knees. Studies in captivity found the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short "deep sleep" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep.
=== Neck ===
The giraffe has an extremely elongated neck, which can be up to 2.4 m (7.9 ft) in length. Along the neck is a mane made of short, erect hairs. The neck typically rests at an angle of 50–60 degrees, though juveniles have straighter necks and rest at 70 degrees.: 94 The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over 28 cm (11 in) long.: 71 They comprise 52–54 per cent of the length of the giraffe's vertebral column, compared with the 27–33 percent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. The giraffe's head and neck are held up by large muscles and a strengthened nuchal ligament, which are anchored by long dorsal spines on the anterior thoracic vertebrae, giving the animal a hump.
The giraffe's neck vertebrae have ball and socket joints.: 71 The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike most other ruminants where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies.There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. Charles Darwin originally suggested the "competing browsers hypothesis", which has been challenged only recently. It suggests that competitive pressure from smaller browsers, like kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to 4.5 m (15 ft) high, while even quite large competitors, such as kudu, can feed up to only about 2 m (6 ft 7 in) high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy. However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers,
and a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage.Another theory, the sexual selection hypothesis, proposes the long necks evolved as a secondary sexual characteristic, giving males an advantage in "necking" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, necks are longer and heavier for males than females of the same age, and males do not employ other forms of combat. However, one objection is it fails to explain why female giraffes also have long necks. It has also been proposed that the neck serves to give the animal greater vigilance.
=== Internal systems ===
In mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe, it is over 30 cm (12 in) longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over 2 m (6 ft 7 in) long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly 5 m (16 ft) in the largest giraffes. Despite its long neck and large skull, the brain of the giraffe is typical for an ungulate. Evaporative heat loss in the nasal passages keep the giraffe's brain cool. The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. The giraffe also has an increased level of tidal volume so the ratio of dead space to tidal volume is similar to other mammals. The animal can still supply enough oxygen to its tissues, and it can increase its respiratory rate and oxygen diffusion when running.
The circulatory system of the giraffe has several adaptations for its great height. Its heart, which can weigh more than 11 kg (25 lb) and measures about 60 cm (2 ft) long, must generate approximately double the blood pressure required for a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as 7.5 cm (3.0 in). Giraffes have unusually high heart rates for their size, at 150 beats per minute.: 76 When the animal lowers its head, the blood rushes down fairly unopposed and a rete mirabile in the upper neck, with its large cross-sectional area, prevents excess blood flow to the brain. When it raises again, the blood vessels constrict and direct blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them.Giraffes have oesophageal muscles that are unusually strong to allow regurgitation of food from the stomach up the neck and into the mouth for rumination.: 78 They have four chambered stomachs, as in all ruminants; the first chamber has adapted to their specialized diet. The intestines of an adult giraffe measure more than 70 m (230 ft) in length and have a relatively small ratio of small to large intestine. The liver of the giraffe is small and compact.: 76 A gallbladder is generally present during fetal life, but it may disappear before birth.
== Behaviour and ecology ==
=== Habitat and feeding ===
Giraffes usually inhabit savannahs and open woodlands. They prefer Acacieae, Commiphora, Combretum and open Terminalia woodlands over denser environments like Brachystegia woodlands.: 322 The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring those of the subfamily Acacieae and the genera Commiphora and Terminalia, which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit.: 324 A giraffe eats around 34 kg (75 lb) of foliage daily. When stressed, giraffes may chew the bark off branches.: 325 Giraffes are also recorded to chew old bones.: 102 During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again.: 78–79 The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. The animal's faeces come in the form of small pellets. When it has access to water, a giraffe drinks at intervals no longer than three days.Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving "waistlines" to too tall trees. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down.
=== Social life ===
Giraffes are usually found in groups that vary in size and composition according to ecological, anthropogenic, temporal, and social factors. Traditionally, the composition of these groups had been described as open and ever-changing. For research purposes, a "group" has been defined as "a collection of individuals that are less than a kilometre apart and moving in the same general direction". More recent studies have found that giraffes have long-term social associations and may form groups or pairs based on kinship, sex or other factors, and these groups regularly associate with other groups in larger communities or sub-communities within a fission–fusion society. Proximity to humans can disrupt social arrangements. Masai giraffes of Tanzania live in distinct social subpopulations that overlap spatially, but have different reproductive rates and calf survival rates.
The number of giraffes in a group can range from one up to 66 individuals. Giraffe groups tend to be sex-segregated although mixed-sex groups made of adult females and young males also occur. Female groups may be matrilineally related. Generally females are more selective than males in who they associate with regarding individuals of the same sex. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Young males also form groups and will engage in playfights. However, as they get older, males become more solitary but may also associate in pairs or with female groups. Giraffes are not territorial, but they have home ranges that vary according to rainfall and proximity to human settlements. Male giraffes occasionally wander far from areas that they normally frequent.: 329 Early biologists suggested giraffes were mute and unable to produce air flow of sufficient velocity to vibrate their vocal folds. To the contrary; they have been recorded to communicate using snorts, sneezes, coughs, snores, hisses, bursts, moans, grunts, growls and flute-like sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit snorts, bleats, mooing and mewing sounds. Snorting and hissing in adults is associated with vigilance. During nighttime, giraffes appear to hum to each other above the infrasound range. The purpose is unclear. Dominant males display to other males with an erect posture; holding the chin and head high while walking stiffly and approaching them laterally. The less dominant show submissiveness by lowing the head and ears with the chin moved in and then jump and flee.
=== Reproduction and parental care ===
Reproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Females can reproduce throughout the year and experience oestrus cycling approximately every 15 days. Female giraffes in oestrous are dispersed over space and time, so reproductive adult males adopt a strategy of roaming among female groups to seek mating opportunities, with periodic hormone-induced rutting behaviour approximately every two weeks. Males prefer young adult females over juveniles and older adults.Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, rest his head and neck on her body or nudge her with his ossicones. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides.Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. A newborn giraffe is 1.7–2 m (5.6–6.6 ft) tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first one to three weeks, it spends most of its time hiding; its coat pattern providing camouflage. The ossicones, which have lain flat while it was in the womb, become erect within a few days.
Mothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a "calving pool". Adult males play almost no role in raising the young,: 337 although they appear to have friendly interactions. Calves are at risk of predation, and a mother giraffe will stand over her calf and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. Calves may be weaned at six to eight months old but can remain with their mothers for up to 14 months.: 49 Females become sexually mature when they are four years old, while males become mature at four or five years. Spermatogenesis in male giraffes begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate.
=== Necking ===
Male giraffes use their necks as weapons in combat, a behaviour known as "necking". Necking is used to establish dominance and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low-intensity necking, the combatants rub and lean against each other. The male that can hold itself more erect wins the bout. In high-intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. The contestants will try to dodge each other's blows and then get ready to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are.: 331 Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths.After a duel, it is common for two male giraffes to caress and court each other. Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 percent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30 to 75 percent. Only one percent of same-sex mounting incidents occurred between females.
=== Mortality and health ===
Giraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 38 years. Because of their size, eyesight and powerful kicks, adult giraffes are usually not subject to predation, although lions may regularly prey on individuals up to 550 kg (1,210 lb). Giraffes are the most common food source for the big cats in Kruger National Park, comprising nearly a third of the meat consumed, although only a small portion of the giraffes were probably killed by predators, as a majority of the consumed giraffes appeared to be scavenged. Adult female survival is significantly correlated with gregariousness, the average number of other females she is seen associating with. Calves are much more vulnerable than adults and are also preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates.The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. In turn, it has been suggested that other ungulates may benefit from associating with giraffes, as their height allows them to spot predators from further away. Zebras were found to glean information on predation risk from giraffe body language and spend less time scanning the environment when giraffes are present.Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which have thinner skin than other areas. Tick species that commonly feed on giraffes are those of genera Hyalomma, Amblyomma and Rhipicephalus. Giraffes may rely on red-billed and yellow-billed oxpeckers to clean them of ticks and alert them to danger. Giraffes host numerous species of internal parasites and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. As much as 79% of giraffes show signs of the disease in Ruaha National Park, but it did not cause mortality in Tarangire and is less prevalent in areas with fertile soils.
== Relationship with humans ==
=== Cultural significance ===
With its lanky build and spotted coat, the giraffe has been a source of fascination throughout human history, and its image is widespread in culture. It has been used to symbolise flexibility, far-sightedness, femininity, fragility, passivity, grace, beauty and the continent of Africa itself.: 7, 116
Giraffes were depicted in art throughout the African continent, including that of the Kiffians, Egyptians, and Kushites.: 45–47 The Kiffians were responsible for a life-size rock engraving of two giraffes, dated 8,000 years ago, that has been called the "world's largest rock art petroglyph".: 45 How the giraffe got its height has been the subject of various African folktales. The Tugen people of modern Kenya used the giraffe to depict their god Mda. The Egyptians gave the giraffe its own hieroglyph, named 'sr' in Old Egyptian and 'mmy' in later periods.: 49 Giraffes have a presence in modern Western culture. Salvador Dalí depicted them with burning manes in some of his surrealist paintings. Dali considered the giraffe to be a symbol of masculinity, and a flaming giraffe was meant to be a "masculine cosmic apocalyptic monster".: 123 Several children's books feature the giraffe, including David A. Ufer's The Giraffe Who Was Afraid of Heights, Giles Andreae's Giraffes Can't Dance and Roald Dahl's The Giraffe and the Pelly and Me. Giraffes have appeared in animated films, as minor characters in Disney's The Lion King and Dumbo, and in more prominent roles in The Wild and the Madagascar films. Sophie the Giraffe has been a popular teether since 1961. Another famous fictional giraffe is the Toys "R" Us mascot Geoffrey the Giraffe.: 127 The giraffe has also been used for some scientific experiments and discoveries. Scientists have looked at the properties of giraffe skin when developing suits for astronauts and fighter pilots: 76 because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the seventeenth century, depicts a giraffe.: 119–20 The Tswana people of Botswana traditionally see the constellation Crux as two giraffes—Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female.
=== Captivity ===
The Egyptians kept giraffes as pets and shipped them around the Mediterranean.: 48–49 The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC and exhibited to the public.: 52 With the fall of the Western Roman Empire, the housing of giraffes in Europe declined.: 54 During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance.Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe was shipped from Malindi to Bengal. It was then taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin.: 56 The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift from Muhammad Ali of Egypt to Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or "giraffanalia".: 81 Giraffes have become popular attractions in modern zoos, though keeping them healthy is difficult as they require wide areas and high amounts of browse for food. Captive giraffes in North America and Europe appear to have a higher mortality rate than in the wild; causes of death include poor husbandry, nutrition and management decisions.: 153 Giraffes in zoos display stereotypical behaviours, the most common being the licking of non-food items.: 164 Zookeepers may offer various activities to stimulate giraffes, including training them to accept food from visitors.: 175 Stables for giraffes are built particularly high to accommodate their height.: 183
=== Exploitation ===
Giraffes were probably common targets for hunters throughout Africa. Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs served as flyswatters, bracelets, necklaces, and thread. Shields, sandals, and drums were made using the skin, and the strings of musical instruments were from the tendons. The smoke from burning giraffe skins was used by the medicine men of Buganda to treat nose bleeds. The Humr people of Kordofan consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley hypothesised that Umm Nyolokh might contain DMT. The drink is said to cause hallucinations of giraffes, believed to be the giraffes' ghosts, by the Humr.
=== Conservation status ===
In 2016, giraffes were assessed as Vulnerable from a conservation perspective by the IUCN. In 1985, it was estimated there were 155,000 giraffes in the wild. This declined to over 140,000 in 1999. Estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. The Masai and reticulated subspecies are endangered, and the Rothschild subspecies is near threatened. The Nubian subspecies is critically endangered.
The primary causes for giraffe population declines are habitat loss and direct killing for bushmeat markets. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Eswatini. As of 2010, there were more than 1,600 in captivity at Species360-registered zoos. Habitat destruction has hurt the giraffe. In the Sahel, the need for firewood and grazing room for livestock has led to deforestation. Normally, giraffes can coexist with livestock, since they do not directly compete with them. In 2017, severe droughts in northern Kenya led to increased tensions over land and the killing of wildlife by herders, with giraffe populations being particularly hit.Protected areas like national parks provide important habitat and anti-poaching protection to giraffe populations. Community-based conservation efforts outside national parks are also effective at protecting giraffes and their habitats. Private game reserves have contributed to the preservation of giraffe populations in southern Africa. The giraffe is a protected species in most of its range. It is the national animal of Tanzania, and is protected by law, and unauthorised killing can result in imprisonment. The UN backed Convention of Migratory Species selected giraffes for protection in 2017. In 2019, giraffes were listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), which means international trade including in parts/derivatives is regulated.Translocations are sometimes used to augment or re-establish diminished or extirpated populations, but these activities are risky and difficult to undertake using the best practices of extensive pre- and post-translocation studies and ensuring a viable founding population. Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and can be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends. The Giraffe Conservation Foundation has been criticized for alleged mistreatment of giraffes and giraffe scientists.
== See also ==
Fauna of Africa
Giraffe Centre
Giraffe Manor - hotel in Nairobi with giraffes
== References ==
== External links ==
Giraffe Conservation Foundation
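###Markdown
As a side note, one way around the two limitations mentioned above is a variant (our own sketch, not part of the lab) that first strips accents with `unicodedata` (already imported at the top) and keeps spaces:
###Code
def cleanup_keep_spaces(text):
    # decompose accented characters and drop the non-ASCII marks
    text = unicodedata.normalize('NFKD', text).encode('ascii', 'ignore').decode('ascii')
    text = text.lower()
    return re.sub('[^a-z ]+', '', text)  # keep the 26 letters and spaces
# a made-up French sentence, purely for illustration
print(cleanup_keep_spaces("Les girafes vivent dans les savanes, près des acacias."))
###Output
_____no_output_____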
###Markdown
Instead of directly using characters as the features, we may consider groups of tokens, i.e. n-grams, as features to understand a text better. For this example, let us treat each character as a token and see how n-grams work. **The nltk library provides many tools for text processing, please explore them.** Now let us calculate the frequency of the character n-grams. N-grams are groups of characters of size n: a unigram is a single character, a bigram is a group of two characters, and so on. Let us count the frequency of each character in a text and plot it in a histogram.
###Code
# convert a tuple of characters to a string
def tuple2string(tup):
st = ''
for ii in tup:
st = st + ii
return st
# convert a tuple of tuples to a list of strings
def key2string(keys):
return [tuple2string(i) for i in keys]
# plot the histogram
def plothistogram(ngram):
keys = key2string(ngram.keys())
values = list(ngram.values())
# sort the keys in alphabetic order
combined = zip(keys, values)
zipped_sorted = sorted(combined, key=lambda x: x[0])
keys, values = map(list, zip(*zipped_sorted))
plt.bar(keys, values)
###Output
_____no_output_____
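###Markdown
To see concretely what `ngrams` and `Counter` return, here is a tiny illustration on a made-up string (not part of the Wikipedia data):
###Code
# character bigrams of "banana": ('b','a'), ('a','n'), ('n','a'), ('a','n'), ('n','a')
Counter(ngrams("banana", 2))
###Output
_____no_output_____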
###Markdown
Let us compare the histograms of English pages and French pages. Can you spot a difference?
###Code
## we pass n=1 to ngrams to get unigrams; a unigram is a single token (here, a single character).
unigram_eng1 = Counter(ngrams(eng1,1))
plothistogram(unigram_eng1)
plt.title('English 1')
plt.show()
unigram_eng2 = Counter(ngrams(eng2,1))
plothistogram(unigram_eng2)
plt.title('English 2')
plt.show()
unigram_fr1 = Counter(ngrams(fr1,1))
plothistogram(unigram_fr1)
plt.title('French 1')
plt.show()
unigram_fr2 = Counter(ngrams(fr2,1))
plothistogram(unigram_fr2)
plt.title('French 2')
plt.show()
###Output
_____no_output_____
###Markdown
A good feature is one that makes prediction and classification easy. For example, if you wish to differentiate between grapes and apples, size is a useful feature. We can see that the unigram distributions for French and English are very similar, so this is not a good feature if we want to distinguish between English and French. Let us look at bigrams.
###Code
## Now, instead of unigrams, we will use bigrams as features and see how useful they are.
bigram_eng1 = Counter(ngrams(eng1,2)) # bigrams
plothistogram(bigram_eng1)
plt.title('English 1')
plt.show()
bigram_eng2 = Counter(ngrams(eng2,2))
plothistogram(bigram_eng2)
plt.title('English 2')
plt.show()
bigram_fr1 = Counter(ngrams(fr1,2))
plothistogram(bigram_fr1)
plt.title('French 1')
plt.show()
bigram_fr2 = Counter(ngrams(fr2,2))
plothistogram(bigram_fr2)
plt.title('French 2')
plt.show()
###Output
_____no_output_____
###Markdown
Another way to visualize bigrams is to use a 2-dimensional graph.
###Code
## let's have a look at the bigram counts.
bigram_eng1
def plotbihistogram(ngram):
freq = np.zeros((26,26))
for ii in range(26):
for jj in range(26):
freq[ii,jj] = ngram[(chr(ord('a')+ii), chr(ord('a')+jj))]
plt.imshow(freq, cmap = 'jet')
return freq
bieng1 = plotbihistogram(bigram_eng1)
plt.show()
bieng2 = plotbihistogram(bigram_eng2)
bifr1 = plotbihistogram(bigram_fr1)
plt.show()
bifr2 = plotbihistogram(bigram_fr2)
###Output
_____no_output_____
###Markdown
Let us look at the top 10 bigrams for each text.
###Code
from IPython.core.debugger import set_trace
def ind2tup(ind):
ind = int(ind)
i = int(ind/26)
j = int(ind%26)
return (chr(ord('a')+i), chr(ord('a')+j))
def ShowTopN(bifreq, n=10):
f = bifreq.flatten()
arg = np.argsort(-f)
for ii in range(n):
print(f'{ind2tup(arg[ii])} : {f[arg[ii]]}')
print('\nEnglish 1:')
ShowTopN(bieng1)
print('\nEnglish 2:')
ShowTopN(bieng2)
print('\nFrench 1:')
ShowTopN(bifr1)
print('\nFrench 2:')
ShowTopN(bifr2)
###Output
English 1:
('t', 'h') : 714.0
('h', 'e') : 705.0
('i', 'n') : 577.0
('e', 's') : 546.0
('a', 'n') : 541.0
('e', 'r') : 457.0
('r', 'e') : 445.0
('r', 'a') : 418.0
('a', 'l') : 407.0
('n', 'd') : 379.0
English 2:
('a', 'n') : 1344.0
('t', 'h') : 1271.0
('h', 'e') : 1163.0
('i', 'n') : 946.0
('e', 'r') : 744.0
('l', 'e') : 707.0
('r', 'e') : 704.0
('n', 'd') : 670.0
('n', 't') : 642.0
('h', 'a') : 632.0
French 1:
('e', 's') : 546.0
('l', 'e') : 337.0
('d', 'e') : 328.0
('e', 'n') : 315.0
('o', 'n') : 306.0
('n', 't') : 275.0
('r', 'e') : 268.0
('r', 'a') : 216.0
('a', 'n') : 202.0
('o', 'u') : 193.0
French 2:
('e', 's') : 920.0
('n', 't') : 773.0
('d', 'e') : 623.0
('e', 'n') : 598.0
('a', 'n') : 538.0
('l', 'e') : 511.0
('o', 'n') : 479.0
('r', 'e') : 455.0
('u', 'r') : 295.0
('t', 'i') : 280.0
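###Markdown
As a quick quantitative check (a sketch of our own; `bigram_cosine` is not part of the lab), we can compare the flattened 26x26 bigram-count matrices with cosine similarity; same-language pairs should score higher than cross-language pairs:
###Code
def bigram_cosine(a, b):
    # cosine similarity between two flattened bigram-count matrices
    a = a.flatten()
    b = b.flatten()
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

print('English 1 vs English 2:', bigram_cosine(bieng1, bieng2))
print('French 1  vs French 2 :', bigram_cosine(bifr1, bifr2))
print('English 1 vs French 1 :', bigram_cosine(bieng1, bifr1))
print('English 2 vs French 2 :', bigram_cosine(bieng2, bifr2))
###Output
_____no_output_____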
###Markdown
**At times, we need to reduce the number of features. We will discuss this more in the upcoming sessions, but a small example has been discussed here: instead of using each unique token (a word) as a feature, we reduced the number of features by using 1-grams and 2-grams of characters as features.** We observe that the bigrams are similar across different topics but different across languages. Thus, the bigram frequency is a good feature for distinguishing languages, but not for distinguishing topics. In this way, we were able to convert a many-dimensional input (the text) to 26 dimensions (unigrams) or 26*26 dimensions (bigrams).
A few ways to explore:
- Try with different languages.
- The topics we used are quite similar: Wikipedia articles on 'elephant' and 'giraffe'. What happens if we use very different topics? What if we use text from a source other than Wikipedia?
- How can we use and visualize trigrams and higher n-grams?
Features of images: images in digital format are stored as numeric values, and hence we can use these values as features. For example, a black and white (binary) image is stored as an array of 0 and 255, or of 0 and 1.
**Part 2: Written numbers** We will use a subset of the MNIST dataset. Each input character is represented as a 28*28 array. Let us see if we can extract some simple features from these images which can help us distinguish between the digits. Load the dataset:
###Code
from keras.datasets import mnist
#loading the dataset
(train_X, train_y), (test_X, test_y) = mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
Extract a subset of the data for our experiment:
###Code
no1 = train_X[train_y==1,:,:] ## dataset corresponding to number = 1.
no0 = train_X[train_y==0,:,:] ## dataset corresponding to number = 0.
###Output
_____no_output_____
###Markdown
Let us visualize a few images here
###Code
for ii in range(5):
plt.subplot(1, 5, ii+1)
plt.imshow(no1[ii,:,:])
plt.show()
for ii in range(5):
plt.subplot(1, 5, ii+1)
plt.imshow(no0[ii,:,:])
plt.show()
###Output
_____no_output_____
###Markdown
We could even use the value of each pixel as a feature, but let us see how to derive other features. Let us start with a simple one: the number of non-zero pixels in each image, and see how good this feature is.
###Code
## count of non-zero pixels in each image
sum1 = np.sum(no1>0, (1,2)) # threshold, then count over the two image axes
sum0 = np.sum(no0>0, (1,2))
###Output
_____no_output_____
###Markdown
Let us visualize how good this feature is (the x-axis is the pixel count, the y-axis is the number of images):
###Code
plt.hist(sum1, alpha=0.7);
plt.hist(sum0, alpha=0.7);
###Output
_____no_output_____
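###Markdown
As a rough check of how separable the classes are on this single feature, we can see how many images fall on the expected side of a simple cut-off (the threshold value below is just an assumption read off the histograms above):
###Code
threshold = 120  # assumed cut-off between the two histogram modes
ones_below = np.mean(sum1 < threshold)    # 1s tend to use fewer pixels
zeros_above = np.mean(sum0 >= threshold)  # 0s tend to use more pixels
print(f"fraction of 1s below the threshold: {ones_below:.3f}")
print(f"fraction of 0s above the threshold: {zeros_above:.3f}")
###Output
_____no_output_____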
###Markdown
We can already see that this feature separates the two classes quite well. Let us look at another, more complicated feature: we will count the number of black (background) pixels that are surrounded on all four sides by non-black pixels, the "hole pixels".
###Code
def cumArray(img):
img2 = img.copy()
for ii in range(1, img2.shape[1]):
img2[ii,:] = img2[ii,:] + img2[ii-1,:] # for every row, add up all the rows above it.
#print(img2)
img2 = img2>0
#print(img2)
return img2
def getHolePixels(img):
im1 = cumArray(img)
im2 = np.rot90(cumArray(np.rot90(img)), 3) # rotate, cumulate again for a different direction, then rotate back
im3 = np.rot90(cumArray(np.rot90(img, 2)), 2)
im4 = np.rot90(cumArray(np.rot90(img, 3)), 1)
hull = im1 & im2 & im3 & im4 # this will create a binary image with all the holes filled in.
hole = hull & ~ (img>0) # remove the original digit to leave behind the holes
return hole
###Output
_____no_output_____
###Markdown
Visualize a few:
###Code
imgs = [no1[456,:,:], no0[456,:,:]]
for img in imgs:
plt.subplot(1,2,1)
plt.imshow(getHolePixels(img))
plt.subplot(1,2,2)
plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
Now let us plot the number of hole pixels and see how this feature behaves
###Code
hole1 = np.array([getHolePixels(i).sum() for i in no1])
hole0 = np.array([getHolePixels(i).sum() for i in no0])
plt.hist(hole1, alpha=0.7);
plt.hist(hole0, alpha=0.7);
###Output
_____no_output_____
###Markdown
This feature works even better to distinguish between one and zero. Another candidate is the number of pixels in the 'hull', i.e. the digit with its holes filled in; a sketch of that is given below, followed by one more feature: the number of boundary pixels in each image.
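The hull sketch reuses `cumArray` from above (the hull is exactly the intersection image already computed inside `getHolePixels`); the helper name `getHullPixels` is our own addition.
###Code
def getHullPixels(img):
    # the hull is the intersection of the four cumulated images,
    # i.e. the digit with its holes filled in (same idea as in getHolePixels)
    im1 = cumArray(img)
    im2 = np.rot90(cumArray(np.rot90(img)), 3)
    im3 = np.rot90(cumArray(np.rot90(img, 2)), 2)
    im4 = np.rot90(cumArray(np.rot90(img, 3)), 1)
    return im1 & im2 & im3 & im4

hull1 = np.array([getHullPixels(i).sum() for i in no1])
hull0 = np.array([getHullPixels(i).sum() for i in no0])
plt.hist(hull1, alpha=0.7);
plt.hist(hull0, alpha=0.7);
###Output
_____no_output_____
###Markdown
Now for the boundary pixels: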
###Code
def minus(a, b):
return a & ~ b
def getBoundaryPixels(img):
img = img.copy()>0 # binarize the image
rshift = np.roll(img, 1, 1)
lshift = np.roll(img, -1 ,1)
ushift = np.roll(img, -1, 0)
dshift = np.roll(img, 1, 0)
boundary = minus(img, rshift) | minus(img, lshift) | minus(img, ushift) | minus(img, dshift)
return boundary
imgs = [no1[456,:,:], no0[456,:,:]]
for img in imgs:
plt.subplot(1,2,1)
plt.imshow(getBoundaryPixels(img))
plt.subplot(1,2,2)
plt.imshow(img)
plt.show()
bound1 = np.array([getBoundaryPixels(i).sum() for i in no1])
bound0= np.array([getBoundaryPixels(i).sum() for i in no0])
plt.hist(bound1, alpha=0.7);
plt.hist(bound0, alpha=0.7);
###Output
_____no_output_____
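###Markdown
Below is a small sketch (our own addition) that puts the hole-pixel and boundary-pixel counts on a single interactive scatter plot; plotly express was imported at the top of this notebook, and the DataFrame column names are our own choice.
###Code
# put the hole-pixel and boundary-pixel counts of every image into one dataframe
feat_df = pd.DataFrame({
    'hole_pixels': np.concatenate([hole1, hole0]),
    'boundary_pixels': np.concatenate([bound1, bound0]),
    'digit': ['1'] * len(hole1) + ['0'] * len(hole0)
})
px.scatter(feat_df, x='hole_pixels', y='boundary_pixels', color='digit', opacity=0.4)
###Output
_____no_output_____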
###Markdown
What will happen if we plot two features together? Feel free to explore the above graph with your mouse. We have seen that we extracted four features from a 28*28-dimensional image.
Some questions to explore:
- Which is the best combination of features?
- How would you test or visualize four or more features?
- Can you come up with your own features?
- Will these features work for classes other than 0 and 1?
- What will happen if we take more than two classes at a time?
**Features from CSV file**
###Code
import pandas as pd
df = pd.read_csv('/content/sample_data/california_housing_train.csv')
df.head()
df.columns
df = df.rename(columns={'oldName1': 'newName1', 'oldName2': 'newName2'}) # template: substitute real column names here if any need renaming
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
sns.set(style = "darkgrid")
fig = plt.figure()
ax = fig.add_subplot(111, projection = '3d')
x = df['total_bedrooms'][:50]
y = df['housing_median_age'][:50]
z = df['median_house_value'][:50]
ax.set_xlabel("total_bedrooms")
ax.set_ylabel("housing_median_age")
ax.set_zlabel("median_house_value")
ax.scatter(x, y, z)
plt.show()
###Output
_____no_output_____
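###Markdown
A two-feature combination from the same dataframe can also be plotted in 2D (a sketch; it assumes the `median_income` and `median_house_value` columns of this sample file):
###Code
# scatter a subsample of two other columns against each other
sns.scatterplot(data=df.sample(500, random_state=0), x='median_income', y='median_house_value')
plt.show()
###Output
_____no_output_____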
###Markdown
**Task:** Download a CSV file from the internet and upload it to your Google Drive. Read the CSV file, plot graphs using different combinations of features, and write your analysis. Example: the Iris flower dataset.
###Code
import pandas as pd
from google.colab import drive
drive.mount('/content/drive')
iris = pd.read_csv('/content/drive/MyDrive/iris_csv.csv')
iris.head()
iris.columns
iris_trimmed = iris[['sepallength', 'sepalwidth', 'petallength', 'petalwidth', 'class']]
import seaborn as sns
import matplotlib.pyplot as plt
sns.pairplot(iris_trimmed, hue = 'class');
plt.savefig('iris.png')
###Output
_____no_output_____ |
Exercises/4_Bayesian_Interference/Bayesian_Inference.ipynb | ###Markdown
Our Mission
Spam detection is one of the major applications of Machine Learning on the interwebs today. Pretty much all of the major email service providers have spam detection systems built in and automatically classify such mail as 'Junk Mail'. In this mission we will be using the Naive Bayes algorithm to create a model that can classify SMS messages as spam or not spam, based on the training we give to the model. It is important to have some level of intuition as to what a spammy text message might look like. Often such messages contain words like 'free', 'win', 'winner', 'cash', 'prize' and the like, as these texts are designed to catch your eye and in some sense tempt you to open them. Also, spam messages tend to have words written in all capitals and to use a lot of exclamation marks. To the human recipient it is usually pretty straightforward to identify a spam text, and our objective here is to train a model to do that for us!
Being able to identify spam messages is a binary classification problem, as messages are classified as either 'Spam' or 'Not Spam' and nothing else. Also, this is a supervised learning problem, as we will be feeding a labelled dataset into the model that it can learn from to make future predictions.
Overview
This project has been broken down into the following steps:
- Step 0: Introduction to the Naive Bayes Theorem
- Step 1.1: Understanding our dataset
- Step 1.2: Data Preprocessing
- Step 2.1: Bag of Words (BoW)
- Step 2.2: Implementing BoW from scratch
- Step 2.3: Implementing Bag of Words in scikit-learn
- Step 3.1: Training and testing sets
- Step 3.2: Applying Bag of Words processing to our dataset
- Step 4.1: Bayes Theorem implementation from scratch
- Step 4.2: Naive Bayes implementation from scratch
- Step 5: Naive Bayes implementation using scikit-learn
- Step 6: Evaluating our model
- Step 7: Conclusion
**Note**: If you need help with a step, you can find the solution notebook by clicking on the Jupyter logo in the top left of the notebook.
Step 0: Introduction to the Naive Bayes Theorem
Bayes Theorem is one of the earliest probabilistic inference algorithms. It was developed by Reverend Bayes (who used it to try to infer the existence of God, no less), and it still performs extremely well for certain use cases. It's best to understand this theorem using an example. Let's say you are a member of the Secret Service and you have been deployed to protect the Democratic presidential nominee during one of his/her campaign speeches. Being a public event that is open to all, your job is not easy and you have to be on the constant lookout for threats. So one place to start is to assign a certain threat factor to each person. Based on the features of an individual, like age, whether the person is carrying a bag, looks nervous, etc., you can make a judgment call as to whether that person is a viable threat. If an individual ticks all the boxes up to a level where it crosses a threshold of doubt in your mind, you can take action and remove that person from the vicinity. Bayes Theorem works in the same way, as we are computing the probability of an event (a person being a threat) based on the probabilities of certain related events (age, presence of a bag or not, nervousness of the person, etc.). One thing to consider is the independence of these features amongst each other. For example, if a child looks nervous at the event, the likelihood of that person being a threat is not as high as, say, if it were a grown man who was nervous.
To break this down a bit further, here there are two features we are considering, age AND nervousness. Say we look at these features individually, we could design a model that flags ALL persons that are nervous as potential threats. However, it is likely that we will have a lot of false positives as there is a strong chance that minors present at the event will be nervous. Hence by considering the age of a person along with the 'nervousness' feature we would definitely get a more accurate result as to who are potential threats and who aren't. This is the 'Naive' bit of the theorem where it considers each feature to be independent of each other which may not always be the case and hence that can affect the final judgement.In short, Bayes Theorem calculates the probability of a certain event happening (in our case, a message being spam) based on the joint probabilistic distributions of certain other events (in our case, the appearance of certain words in a message). We will dive into the workings of Bayes Theorem later in the mission, but first, let us understand the data we are going to work with. Step 1.1: Understanding our dataset We will be using a dataset originally compiled and posted on the UCI Machine Learning repository which has a very good collection of datasets for experimental research purposes. If you're interested, you can review the [abstract](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection) and the original [compressed data file](https://archive.ics.uci.edu/ml/machine-learning-databases/00228/) on the UCI site. For this exercise, however, we've gone ahead and downloaded the data for you. **Here's a preview of the data:** The columns in the data set are currently not named and as you can see, there are 2 columns. The first column takes two values, 'ham' which signifies that the message is not spam, and 'spam' which signifies that the message is spam. The second column is the text content of the SMS message that is being classified. >**Instructions:*** Import the dataset into a pandas dataframe using the **read_table** method. The file has already been downloaded, and you can access it using the filepath 'smsspamcollection/SMSSpamCollection'. Because this is a tab separated dataset we will be using '\\t' as the value for the 'sep' argument which specifies this format. * Also, rename the column names by specifying a list ['label', 'sms_message'] to the 'names' argument of read_table().* Print the first five values of the dataframe with the new column names.
###Code
# '!' allows you to run bash commands from jupyter notebook.
print("List all the files in the current directory\n")
!ls
# The required data table can be found under smsspamcollection/SMSSpamCollection
print("\n List all the files inside the smsspamcollection directory\n")
!ls smsspamcollection
!cat smsspamcollection/SMSSpamCollection
import pandas as pd
# Dataset available using filepath 'smsspamcollection/SMSSpamCollection'
df = pd.read_table("smsspamcollection/SMSSpamCollection", names=['label', 'sms_message'] )
# Output printing out first 5 rows
df[:5]
###Output
_____no_output_____
###Markdown
Step 1.2: Data Preprocessing Now that we have a basic understanding of what our dataset looks like, let's convert our labels to binary variables, 0 to represent 'ham'(i.e. not spam) and 1 to represent 'spam' for ease of computation. You might be wondering why do we need to do this step? The answer to this lies in how scikit-learn handles inputs. Scikit-learn only deals with numerical values and hence if we were to leave our label values as strings, scikit-learn would do the conversion internally(more specifically, the string labels will be cast to unknown float values). Our model would still be able to make predictions if we left our labels as strings but we could have issues later when calculating performance metrics, for example when calculating our precision and recall scores. Hence, to avoid unexpected 'gotchas' later, it is good practice to have our categorical values be fed into our model as integers. >**Instructions:*** Convert the values in the 'label' column to numerical values using map method as follows:{'ham':0, 'spam':1} This maps the 'ham' value to 0 and the 'spam' value to 1.* Also, to get an idea of the size of the dataset we are dealing with, print out number of rows and columns using 'shape'.
###Code
'''
Solution
'''
df['label'] = df.label.map({'ham': 0, 'spam': 1})
df.shape
###Output
_____no_output_____
###Markdown
Step 2.1: Bag of Words What we have here in our data set is a large collection of text data (5,572 rows of data). Most ML algorithms rely on numerical data to be fed into them as input, and email/sms messages are usually text heavy. Here we'd like to introduce the Bag of Words (BoW) concept which is a term used to specify the problems that have a 'bag of words' or a collection of text data that needs to be worked with. The basic idea of BoW is to take a piece of text and count the frequency of the words in that text. It is important to note that the BoW concept treats each word individually and the order in which the words occur does not matter. Using a process which we will go through now, we can convert a collection of documents to a matrix, with each document being a row and each word (token) being the column, and the corresponding (row, column) values being the frequency of occurrence of each word or token in that document.For example: Let's say we have 4 documents, which are text messagesin our case, as follows:`['Hello, how are you!','Win money, win from home.','Call me now','Hello, Call you tomorrow?']`Our objective here is to convert this set of texts to a frequency distribution matrix, as follows:Here as we can see, the documents are numbered in the rows, and each word is a column name, with the corresponding value being the frequency of that word in the document.Let's break this down and see how we can do this conversion using a small set of documents.To handle this, we will be using sklearn's [count vectorizer](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.htmlsklearn.feature_extraction.text.CountVectorizer) method which does the following:* It tokenizes the string (separates the string into individual words) and gives an integer ID to each token.* It counts the occurrence of each of those tokens.**Please Note:** * The CountVectorizer method automatically converts all tokenized words to their lower case form so that it does not treat words like 'He' and 'he' differently. It does this using the `lowercase` parameter which is by default set to `True`.* It also ignores all punctuation so that words followed by a punctuation mark (for example: 'hello!') are not treated differently than the same words not prefixed or suffixed by a punctuation mark (for example: 'hello'). It does this using the `token_pattern` parameter which has a default regular expression which selects tokens of 2 or more alphanumeric characters.* The third parameter to take note of is the `stop_words` parameter. Stop words refer to the most commonly used words in a language. They include words like 'am', 'an', 'and', 'the', etc. By setting this parameter value to `english`, CountVectorizer will automatically ignore all words (from our input text) that are found in the built in list of English stop words in scikit-learn. This is extremely helpful as stop words can skew our calculations when we are trying to find certain key words that are indicative of spam.We will dive into the application of each of these into our model in a later step, but for now it is important to be aware of such preprocessing techniques available to us when dealing with textual data. Step 2.2: Implementing Bag of Words from scratch Before we dive into scikit-learn's Bag of Words (BoW) library to do the dirty work for us, let's implement it ourselves first so that we can understand what's happening behind the scenes. 
**Step 1: Convert all strings to their lower case form.**Let's say we have a document set:```documents = ['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?']```>>**Instructions:*** Convert all the strings in the documents set to their lower case. Save them into a list called 'lower_case_documents'. You can convert strings to their lower case in python by using the lower() method.
###Code
'''
Solution:
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
lower_case_documents = [w.lower() for w in documents]
print(lower_case_documents)
###Output
['hello, how are you!', 'win money, win from home.', 'call me now.', 'hello, call hello you tomorrow?']
###Markdown
**Step 2: Removing all punctuation**>>**Instructions:**Remove all punctuation from the strings in the document set. Save the strings into a list called 'sans_punctuation_documents'.
###Code
'''
Solution:
'''
punctuation = ",.?!"
import string
sans_punctuation_documents = [w.translate({ord(c): None for c in ".,_!?"})for w in lower_case_documents]
print(sans_punctuation_documents)
###Output
['hello how are you', 'win money win from home', 'call me now', 'hello call hello you tomorrow']
###Markdown
**Step 3: Tokenization**Tokenizing a sentence in a document set means splitting up the sentence into individual words using a delimiter. The delimiter specifies what character we will use to identify the beginning and end of a word. Most commonly, we use a single space as the delimiter character for identifying words, and this is true in our documents in this case also. >>**Instructions:**Tokenize the strings stored in 'sans_punctuation_documents' using the split() method. Store the final document set in a list called 'preprocessed_documents'.
###Code
'''
Solution:
'''
import itertools
preprocessed_documents = [w.split() for w in sans_punctuation_documents]
preprocessed_documents = list(itertools.chain(*preprocessed_documents))
print(preprocessed_documents)
###Output
['hello', 'how', 'are', 'you', 'win', 'money', 'win', 'from', 'home', 'call', 'me', 'now', 'hello', 'call', 'hello', 'you', 'tomorrow']
###Markdown
**Step 4: Count frequencies**Now that we have our document set in the required format, we can proceed to counting the occurrence of each word in each document of the document set. We will use the `Counter` method from the Python `collections` library for this purpose. `Counter` counts the occurrence of each item in the list and returns a dictionary with the key as the item being counted and the corresponding value being the count of that item in the list. >>**Instructions:**Using the Counter() method and preprocessed_documents as the input, create a dictionary with the keys being each word in each document and the corresponding values being the frequency of occurrence of that word. Save each Counter dictionary as an item in a list called 'frequency_list'.
###Code
'''
Solution
'''
frequency_list = []
import pprint
from collections import Counter
frequency_list = Counter(preprocessed_documents)
pprint.pprint(frequency_list)
###Output
Counter({'hello': 3,
'you': 2,
'win': 2,
'call': 2,
'how': 1,
'are': 1,
'money': 1,
'from': 1,
'home': 1,
'me': 1,
'now': 1,
'tomorrow': 1})
###Markdown
Congratulations! You have implemented the Bag of Words process from scratch! As we can see in our previous output, we have a frequency distribution dictionary which gives a clear view of the text that we are dealing with.We should now have a solid understanding of what is happening behind the scenes in the `sklearn.feature_extraction.text.CountVectorizer` method of scikit-learn. We will now implement `sklearn.feature_extraction.text.CountVectorizer` method in the next step. Step 2.3: Implementing Bag of Words in scikit-learn Now that we have implemented the BoW concept from scratch, let's go ahead and use scikit-learn to do this process in a clean and succinct way. We will use the same document set as we used in the previous step.
###Code
'''
Here we will look to create a frequency matrix on a smaller document set to make sure we understand how the
document-term matrix generation happens. We have created a sample document set 'documents'.
'''
documents = ['Hello, how are you!',
'Win money, win from home.',
'Call me now.',
'Hello, Call hello you tomorrow?']
###Output
_____no_output_____
###Markdown
>>**Instructions:**Import the sklearn.feature_extraction.text.CountVectorizer method and create an instance of it called 'count_vector'.
###Code
'''
Solution
'''
from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
count_vector
###Output
_____no_output_____
###Markdown
**Data preprocessing with CountVectorizer()**In Step 2.2, we implemented a version of the CountVectorizer() method from scratch that entailed cleaning our data first. This cleaning involved converting all of our data to lower case and removing all punctuation marks. CountVectorizer() has certain parameters which take care of these steps for us. They are:* `lowercase = True` The `lowercase` parameter has a default value of `True` which converts all of our text to its lower case form.* `token_pattern = (?u)\\b\\w\\w+\\b` The `token_pattern` parameter has a default regular expression value of `(?u)\\b\\w\\w+\\b` which ignores all punctuation marks and treats them as delimiters, while accepting alphanumeric strings of length greater than or equal to 2, as individual tokens or words.* `stop_words` The `stop_words` parameter, if set to `english` will remove all words from our document set that match a list of English stop words defined in scikit-learn. Considering the small size of our dataset and the fact that we are dealing with SMS messages and not larger text sources like e-mail, we will not use stop words, and we won't be setting this parameter value.You can take a look at all the parameter values of your `count_vector` object by simply printing out the object as follows:
###Code
'''
Practice node:
Print the 'count_vector' object which is an instance of 'CountVectorizer()'
'''
# No need to revise this code
print(count_vector)
###Output
CountVectorizer(analyzer='word', binary=False, decode_error='strict',
dtype=<class 'numpy.int64'>, encoding='utf-8',
input=['Hello, how are you!', 'Win money, win from home.', 'Call me now.', 'Hello, Call hello you tomorrow?'],
lowercase=True, max_df=1.0, max_features=None, min_df=1,
ngram_range=(1, 1), preprocessor=None, stop_words=None,
strip_accents=None, token_pattern='(?u)\\b\\w\\w+\\b',
tokenizer=None, vocabulary=None)
###Markdown
>>**Instructions:**Fit your document dataset to the CountVectorizer object you have created using fit(), and get the list of words which have been categorized as features using the get_feature_names() method.
###Code
'''
Solution:
'''
# No need to revise this code
count_vector.fit(documents)
count_vector.get_feature_names()
###Output
_____no_output_____
###Markdown
The `get_feature_names()` method returns our feature names for this dataset, which is the set of words that make up our vocabulary for 'documents'. >>**Instructions:**Create a matrix with each row representing one of the 4 documents, and each column representing a word (feature name). Each value in the matrix will represent the frequency of the word in that column occurring in the particular document in that row. You can do this using the transform() method of CountVectorizer, passing in the document data set as the argument. The transform() method returns a matrix of NumPy integers, which you can convert to an array usingtoarray(). Call the array 'doc_array'.
###Code
'''
Solution
'''
doc_array = count_vector.transform(documents).toarray()
doc_array
###Output
_____no_output_____
###Markdown
Now we have a clean representation of the documents in terms of the frequency distribution of the words in them. To make it easier to understand our next step is to convert this array into a dataframe and name the columns appropriately. >>**Instructions:**Convert the 'doc_array' we created into a dataframe, with the column names as the words (feature names). Call the dataframe 'frequency_matrix'.
###Code
'''
Solution
'''
import pandas as pd
frequency_matrix = pd.DataFrame(doc_array, columns=count_vector.get_feature_names())
frequency_matrix
###Output
_____no_output_____
###Markdown
Congratulations! You have successfully implemented a Bag of Words problem for a document dataset that we created. One potential issue that can arise from using this method is that if our dataset of text is extremely large (say if we have a large collection of news articles or email data), there will be certain values that are more common than others simply due to the structure of the language itself. For example, words like 'is', 'the', 'an', pronouns, grammatical constructs, etc., could skew our matrix and affect our analyis. There are a couple of ways to mitigate this. One way is to use the `stop_words` parameter and set its value to `english`. This will automatically ignore all the words in our input text that are found in a built-in list of English stop words in scikit-learn.Another way of mitigating this is by using the [tfidf](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.htmlsklearn.feature_extraction.text.TfidfVectorizer) method. This method is out of scope for the context of this lesson. Step 3.1: Training and testing sets Now that we understand how to use the Bag of Words approach, we can return to our original, larger UCI dataset and proceed with our analysis. Our first step is to split our dataset into a training set and a testing set so we can first train, and then test our model. >>**Instructions:**Split the dataset into a training and testing set using the train_test_split method in sklearn, and print out the number of rows we have in each of our training and testing data. Split the datausing the following variables:* `X_train` is our training data for the 'sms_message' column.* `y_train` is our training data for the 'label' column* `X_test` is our testing data for the 'sms_message' column.* `y_test` is our testing data for the 'label' column.
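As a side note, here is a minimal sketch (not part of the exercise, using a tiny made-up corpus) of what the `stop_words='english'` mitigation mentioned above actually does to the vocabulary:
```python
from sklearn.feature_extraction.text import CountVectorizer

toy_docs = ['Hello, how are you!', 'Win money, win from home.']  # made-up examples

keep_stops = CountVectorizer()
drop_stops = CountVectorizer(stop_words='english')

keep_stops.fit(toy_docs)
drop_stops.fit(toy_docs)

print(keep_stops.get_feature_names())  # includes common words such as 'are', 'how', 'from'
print(drop_stops.get_feature_names())  # the built-in English stop words are removed
```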
###Code
'''
Solution
'''
# split into training and testing sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df['sms_message'],
df['label'],
random_state=1)
print('Number of rows in the total set: {}'.format(df.shape[0]))
print('Number of rows in the training set: {}'.format(X_train.shape[0]))
print('Number of rows in the test set: {}'.format(X_test.shape[0]))
###Output
_____no_output_____
###Markdown
Step 3.2: Applying Bag of Words processing to our dataset. Now that we have split the data, our next objective is to follow the steps from "Step 2: Bag of Words," and convert our data into the desired matrix format. To do this we will be using CountVectorizer() as we did before. There are two steps to consider here:* First, we have to fit our training data (`X_train`) into `CountVectorizer()` and return the matrix.* Secondly, we have to transform our testing data (`X_test`) to return the matrix. Note that `X_train` is our training data for the 'sms_message' column in our dataset and we will be using this to train our model. `X_test` is our testing data for the 'sms_message' column and this is the data we will be using (after transformation to a matrix) to make predictions on. We will then compare those predictions with `y_test` in a later step. For now, we have provided the code that does the matrix transformations for you!
###Code
'''
[Practice Node]
The code for this segment is in 2 parts. First, we are learning a vocabulary dictionary for the training data
and then transforming the data into a document-term matrix; secondly, for the testing data we are only
transforming the data into a document-term matrix.
This is similar to the process we followed in Step 2.3.
We will provide the transformed data to students in the variables 'training_data' and 'testing_data'.
'''
'''
Solution
'''
# Instantiate the CountVectorizer method
count_vector = CountVectorizer()
# Fit the training data and then return the matrix
training_data = count_vector.fit_transform(X_train)
# Transform testing data and return the matrix. Note we are not fitting the testing data into the CountVectorizer()
testing_data = count_vector.transform(X_test)
###Output
_____no_output_____
###Markdown
Step 4.1: Bayes Theorem implementation from scratch Now that we have our dataset in the format that we need, we can move onto the next portion of our mission which is the algorithm we will use to make our predictions to classify a message as spam or not spam. Remember that at the start of the mission we briefly discussed the Bayes theorem but now we shall go into a little more detail. In layman's terms, the Bayes theorem calculates the probability of an event occurring, based on certain other probabilities that are related to the event in question. It is composed of "prior probabilities" - or just "priors." These "priors" are the probabilities that we are aware of, or that are given to us. And Bayes theorem is also composed of the "posterior probabilities," or just "posteriors," which are the probabilities we are looking to compute using the "priors". Let us implement the Bayes Theorem from scratch using a simple example. Let's say we are trying to find the odds of an individual having diabetes, given that he or she was tested for it and got a positive result. In the medical field, such probabilities play a very important role as they often deal with life and death situations. We assume the following:`P(D)` is the probability of a person having Diabetes. Its value is `0.01`, or in other words, 1% of the general population has diabetes (disclaimer: these values are assumptions and are not reflective of any actual medical study).`P(Pos)` is the probability of getting a positive test result.`P(Neg)` is the probability of getting a negative test result.`P(Pos|D)` is the probability of getting a positive result on a test done for detecting diabetes, given that you have diabetes. This has a value `0.9`. In other words the test is correct 90% of the time. This is also called the Sensitivity or True Positive Rate.`P(Neg|~D)` is the probability of getting a negative result on a test done for detecting diabetes, given that you do not have diabetes. This also has a value of `0.9` and is therefore correct, 90% of the time. This is also called the Specificity or True Negative Rate.The Bayes formula is as follows:* `P(A)` is the prior probability of A occurring independently. In our example this is `P(D)`. This value is given to us.* `P(B)` is the prior probability of B occurring independently. In our example this is `P(Pos)`.* `P(A|B)` is the posterior probability that A occurs given B. In our example this is `P(D|Pos)`. That is, **the probability of an individual having diabetes, given that this individual got a positive test result. This is the value that we are looking to calculate.*** `P(B|A)` is the prior probability of B occurring, given A. In our example this is `P(Pos|D)`. This value is given to us. Putting our values into the formula for Bayes theorem we get:`P(D|Pos) = P(D) * P(Pos|D) / P(Pos)`The probability of getting a positive test result `P(Pos)` can be calculated using the Sensitivity and Specificity as follows:`P(Pos) = [P(D) * Sensitivity] + [P(~D) * (1-Specificity))]`
###Code
'''
Instructions:
Calculate probability of getting a positive test result, P(Pos)
'''
'''
Solution (skeleton code will be provided)
'''
# P(D)
p_diabetes = 0.01
# P(~D)
p_no_diabetes = 0.99
# Sensitivity or P(Pos|D)
p_pos_diabetes = 0.9
# Specificity or P(Neg|~D)
p_neg_no_diabetes = 0.9
# P(Pos)
p_pos = (p_diabetes * p_pos_diabetes) + (p_no_diabetes * (1 - p_neg_no_diabetes))
print('The probability of getting a positive test result P(Pos) is: {}'.format(p_pos))
###Output
_____no_output_____
###Markdown
**Using all of this information we can calculate our posteriors as follows:** The probability of an individual having diabetes, given that, that individual got a positive test result:`P(D|Pos) = (P(D) * Sensitivity)) / P(Pos)`The probability of an individual not having diabetes, given that, that individual got a positive test result:`P(~D|Pos) = (P(~D) * (1-Specificity)) / P(Pos)`The sum of our posteriors will always equal `1`.
###Code
'''
Instructions:
Compute the probability of an individual having diabetes, given that, that individual got a positive test result.
In other words, compute P(D|Pos).
The formula is: P(D|Pos) = (P(D) * P(Pos|D)) / P(Pos)
'''
'''
Solution
'''
# P(D|Pos)
p_diabetes_pos = (p_diabetes * p_pos_diabetes) / p_pos
print('Probability of an individual having diabetes, given that that individual got a positive test result is: {}'
      .format(p_diabetes_pos))
'''
Instructions:
Compute the probability of an individual not having diabetes, given that, that individual got a positive test result.
In other words, compute P(~D|Pos).
The formula is: P(~D|Pos) = P(~D) * P(Pos|~D) / P(Pos)
Note that P(Pos|~D) can be computed as 1 - P(Neg|~D).
Therefore:
P(Pos|~D) = p_pos_no_diabetes = 1 - 0.9 = 0.1
'''
'''
Solution
'''
# P(Pos|~D)
p_pos_no_diabetes = 0.1
# P(~D|Pos)
p_no_diabetes_pos = (p_no_diabetes * p_pos_no_diabetes) / p_pos
print('Probability of an individual not having diabetes, given that that individual got a positive test result is: {}'
      .format(p_no_diabetes_pos))
###Output
_____no_output_____
###Markdown
Congratulations! You have implemented Bayes Theorem from scratch. Your analysis shows that even if you get a positive test result, there is only an 8.3% chance that you actually have diabetes and a 91.67% chance that you do not have diabetes. This is of course assuming that only 1% of the entire population has diabetes which is only an assumption. **What does the term 'Naive' in 'Naive Bayes' mean ?** The term 'Naive' in Naive Bayes comes from the fact that the algorithm considers the features that it is using to make the predictions to be independent of each other, which may not always be the case. So in our Diabetes example, we are considering only one feature, that is the test result. Say we added another feature, 'exercise'. Let's say this feature has a binary value of `0` and `1`, where the former signifies that the individual exercises less than or equal to 2 days a week and the latter signifies that the individual exercises greater than or equal to 3 days a week. If we had to use both of these features, namely the test result and the value of the 'exercise' feature, to compute our final probabilities, Bayes' theorem would fail. Naive Bayes' is an extension of Bayes' theorem that assumes that all the features are independent of each other. Step 4.2: Naive Bayes implementation from scratch Now that you have understood the ins and outs of Bayes Theorem, we will extend it to consider cases where we have more than one feature. Let's say that we have two political parties' candidates, 'Jill Stein' of the Green Party and 'Gary Johnson' of the Libertarian Party and we have the probabilities of each of these candidates saying the words 'freedom', 'immigration' and 'environment' when they give a speech:* Probability that Jill Stein says 'freedom': 0.1 ---------> `P(F|J)`* Probability that Jill Stein says 'immigration': 0.1 -----> `P(I|J)`* Probability that Jill Stein says 'environment': 0.8 -----> `P(E|J)`* Probability that Gary Johnson says 'freedom': 0.7 -------> `P(F|G)`* Probability that Gary Johnson says 'immigration': 0.2 ---> `P(I|G)`* Probability that Gary Johnson says 'environment': 0.1 ---> `P(E|G)`And let us also assume that the probability of Jill Stein giving a speech, `P(J)` is `0.5` and the same for Gary Johnson, `P(G) = 0.5`. Given this, what if we had to find the probabilities of Jill Stein saying the words 'freedom' and 'immigration'? This is where the Naive Bayes' theorem comes into play as we are considering two features, 'freedom' and 'immigration'.Now we are at a place where we can define the formula for the Naive Bayes' theorem:Here, `y` is the class variable (in our case the name of the candidate) and `x1` through `xn` are the feature vectors (in our case the individual words). The theorem makes the assumption that each of the feature vectors or words (`xi`) are independent of each other. To break this down, we have to compute the following posterior probabilities:* `P(J|F,I)`: Given the words 'freedom' and 'immigration' were said, what's the probability they were said by Jill? Using the formula and our knowledge of Bayes' theorem, we can compute this as follows: `P(J|F,I)` = `(P(J) * P(F|J) * P(I|J)) / P(F,I)`. Here `P(F,I)` is the probability of the words 'freedom' and 'immigration' being said in a speech. * `P(G|F,I)`: Given the words 'freedom' and 'immigration' were said, what's the probability they were said by Gary? Using the formula, we can compute this as follows: `P(G|F,I)` = `(P(G) * P(F|G) * P(I|G)) / P(F,I)`
###Code
'''
Instructions: Compute the probability of the words 'freedom' and 'immigration' being said in a speech, or
P(F,I).
The first step is multiplying the probabilities of Jill Stein giving a speech with her individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_j_text.
The second step is multiplying the probabilities of Gary Johnson giving a speech with his individual
probabilities of saying the words 'freedom' and 'immigration'. Store this in a variable called p_g_text.
The third step is to add both of these probabilities and you will get P(F,I).
'''
'''
Solution: Step 1
'''
# P(J)
p_j = 0.5
# P(F/J)
p_j_f = 0.1
# P(I/J)
p_j_i = 0.1
p_j_text = p_j * p_j_f * p_j_i
print(p_j_text)
'''
Solution: Step 2
'''
# P(G)
p_g = 0.5
# P(F/G)
p_g_f = 0.7
# P(I/G)
p_g_i = 0.2
p_g_text = p_g * p_g_f * p_g_i
print(p_g_text)
'''
Solution: Step 3: Compute P(F,I) and store in p_f_i
'''
p_f_i = p_j_text + p_g_text
print('Probability of words freedom and immigration being said are: ', format(p_f_i))
###Output
_____no_output_____
###Markdown
Now we can compute the probability of `P(J|F,I)`, the probability of Jill Stein saying the words 'freedom' and 'immigration' and `P(G|F,I)`, the probability of Gary Johnson saying the words 'freedom' and 'immigration'.
###Code
'''
Instructions:
Compute P(J|F,I) using the formula P(J|F,I) = (P(J) * P(F|J) * P(I|J)) / P(F,I) and store it in a variable p_j_fi
'''
'''
Solution
'''
p_j_fi = p_j_text / p_f_i
print('The probability of Jill Stein saying the words Freedom and Immigration: ', format(p_j_fi))
'''
Instructions:
Compute P(G|F,I) using the formula P(G|F,I) = (P(G) * P(F|G) * P(I|G)) / P(F,I) and store it in a variable p_g_fi
'''
'''
Solution
'''
p_g_fi = p_g_text / p_f_i
print('The probability of Gary Johnson saying the words Freedom and Immigration: ', format(p_g_fi))
###Output
_____no_output_____
###Markdown
And as we can see, just like in the Bayes' theorem case, the sum of our posteriors is equal to 1. Congratulations! You have implemented the Naive Bayes' theorem from scratch. Our analysis shows that there is only a 6.6% chance that Jill Stein of the Green Party uses the words 'freedom' and 'immigration' in her speech as compared with the 93.3% chance for Gary Johnson of the Libertarian party. For another example of Naive Bayes, let's consider searching for images using the term 'Sacramento Kings' in a search engine. In order for us to get the results pertaining to the Scramento Kings NBA basketball team, the search engine needs to be able to associate the two words together and not treat them individually. If the search engine only searched for the words individually, we would get results of images tagged with 'Sacramento,' like pictures of city landscapes, and images of 'Kings,' which might be pictures of crowns or kings from history. But associating the two terms together would produce images of the basketball team. In the first approach we would treat the words as independent entities, so it would be considered 'naive.' We don't usually want this approach from a search engine, but it can be extremely useful in other cases. Applying this to our problem of classifying messages as spam, the Naive Bayes algorithm *looks at each word individually and not as associated entities* with any kind of link between them. In the case of spam detectors, this usually works, as there are certain red flag words in an email which are highly reliable in classifying it as spam. For example, emails with words like 'viagra' are usually classified as spam. Step 5: Naive Bayes implementation using scikit-learn Now let's return to our spam classification context. Thankfully, sklearn has several Naive Bayes implementations that we can use, so we do not have to do the math from scratch. We will be using sklearn's `sklearn.naive_bayes` method to make predictions on our SMS messages dataset. Specifically, we will be using the multinomial Naive Bayes algorithm. This particular classifier is suitable for classification with discrete features (such as in our case, word counts for text classification). It takes in integer word counts as its input. On the other hand, Gaussian Naive Bayes is better suited for continuous data as it assumes that the input data has a Gaussian (normal) distribution.
###Code
'''
Instructions:
We have loaded the training data into the variable 'training_data' and the testing data into the
variable 'testing_data'.
Import the MultinomialNB classifier and fit the training data into the classifier using fit(). Name your classifier
'naive_bayes'. You will be training the classifier using 'training_data' and 'y_train' from our split earlier.
'''
'''
Solution
'''
from sklearn.naive_bayes import MultinomialNB
naive_bayes = MultinomialNB()
naive_bayes.fit(training_data, y_train)
'''
Instructions:
Now that our algorithm has been trained using the training data set we can now make some predictions on the test data
stored in 'testing_data' using predict(). Save your predictions into the 'predictions' variable.
'''
'''
Solution
'''
predictions = naive_bayes.predict(testing_data)
###Output
_____no_output_____
###Markdown
Now that predictions have been made on our test set, we need to check the accuracy of our predictions. Step 6: Evaluating our model Now that we have made predictions on our test set, our next goal is to evaluate how well our model is doing. There are various mechanisms for doing so, so first let's review them.**Accuracy** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).**Precision** tells us what proportion of messages we classified as spam, actually were spam.It is a ratio of true positives (words classified as spam, and which actually are spam) to all positives (all words classified as spam, regardless of whether that was the correct classification). In other words, precision is the ratio of`[True Positives/(True Positives + False Positives)]`**Recall (sensitivity)** tells us what proportion of messages that actually were spam were classified by us as spam.It is a ratio of true positives (words classified as spam, and which actually are spam) to all the words that were actually spam. In other words, recall is the ratio of`[True Positives/(True Positives + False Negatives)]`For classification problems that are skewed in their classification distributions like in our case - for example if we had 100 text messages and only 2 were spam and the other 98 weren't - accuracy by itself is not a very good metric. We could classify 90 messages as not spam (including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam (all 10 false positives) and still get a reasonably good accuracy score. For such cases, precision and recall come in very handy. These two metrics can be combined to get the **F1 score**, which is the weighted average of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score. We will be using all 4 of these metrics to make sure our model does well. For all 4 metrics whose values can range from 0 to 1, having a score as close to 1 as possible is a good indicator of how well our model is doing.
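To make the formulas above concrete, here is a tiny worked example with made-up counts (purely hypothetical, not from our SMS data):
```python
# Suppose a classifier produced 8 true positives, 2 false positives and 3 false negatives.
tp, fp, fn = 8, 2, 3

precision = tp / (tp + fp)                          # 8 / 10 = 0.8
recall = tp / (tp + fn)                             # 8 / 11 ≈ 0.727
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.762

print(precision, recall, f1)
```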
###Code
'''
Instructions:
Compute the accuracy, precision, recall and F1 scores of your model using your test data 'y_test' and the predictions
you made earlier stored in the 'predictions' variable.
'''
'''
Solution
'''
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Accuracy score: ', format(accuracy_score(y_test, predictions)))
print('Precision score: ', format(precision_score(y_test, predictions)))
print('Recall score: ', format(recall_score(y_test, predictions)))
print('F1 score: ', format(f1_score(y_test, predictions)))
###Output
_____no_output_____ |
learnelixir.ipynb | ###Markdown
Memo: taking a first bite of Elixir. My current mental image: Elixir runs on Erlang, Erlang is a system for concurrent processing, and when someone set out to build an ideal language on top of it, the result looked like Ruby + Clojure. Dave Thomas and Yukihiro Matsumoto both recommend it, so it is probably a good language. * https://elixirschool.com/ja/lessons/basics/control-structures/ * https://magazine.rubyist.net/articles/0054/0054-ElixirBook. * https://dev.to/gumi/elixir-01--2585 * https://elixir-lang.org/getting-started/introduction.html --- I bought the book: Programming Elixir |> 1.6 by Dave Thomas (Japanese translation by 笹田耕一 and 鳥居雪, Ohmsha) and will read it.
###Code
%%capture
!wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb && sudo dpkg -i erlang-solutions_2.0_all.deb
!sudo apt update
!sudo apt install elixir
!elixir -v
!date
###Output
Erlang/OTP 24 [erts-12.2.1] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1] [jit]
Elixir 1.13.0 (compiled with Erlang/OTP 24)
Wed Mar 16 16:43:56 UTC 2022
###Markdown
--- Memo: running `!elixir -h` (help) showed that shell one-liners are possible with `elixir -e`. `iex` is the interactive environment, but it is awkward to use on Colab, so `elixir -e` is used instead.
###Code
!elixir -e 'IO.puts 3 + 3'
!elixir -e 'IO.puts "hello world!"'
# a file can be created like this
%%writefile temp.exs
IO.puts "this is a pen."
# cat it to check
!cat temp.exs
# run the file with elixir
!elixir temp.exs
###Output
this is a pen.
###Markdown
--- How do you run the code in the next cell, which I found introduced online? I don't think I need to understand it yet, but I'm transcribing it for now. Explanation: this program defines a function pmap in a module called Parallel. pmap maps over the given collection (think of it as similar to Ruby's Enumerable#map), but it spawns one process per element and performs the work for each element concurrently in its own process. At a glance it doesn't make much sense yet, but supposedly it will after reading the book.
###Code
%%writefile temp.exs
defmodule Parallel do
def pmap(collection, func) do
collection
|> Enum.map(&(Task.async(fn -> func.(&1) end)))
|> Enum.map(&Task.await/1)
end
end
result = Parallel.pmap 1..1000, &(&1 * &1)
IO.inspect result
!elixir temp.exs
###Output
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324,
361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089,
1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116,
2209, 2304, 2401, 2500, ...]
###Markdown
The example above seems to confirm that asynchronous processing works fine in the Colab environment. --- The next one is another example found online: a concurrent hello world.
###Code
%%writefile temp.exs
parent = self()
spawn_link(fn ->
send parent, {:msg, "hello world"}
end)
receive do
{:msg, contents} -> IO.puts contents
end
!elixir temp.exs
###Output
hello world
###Markdown
The flow of the example above is as follows. 1. The function passed to spawn_link runs in a newly created process. 2. The new process sends the message "hello world" to the main process (parent). 3. The main process waits for an incoming message (receive) and prints it to the console when it arrives.
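A small variation on the flow above (my own sketch, not from the referenced article): the same send/receive pattern, with an `after` clause so the receive does not block forever if no message arrives.
```elixir
parent = self()

spawn_link(fn ->
  send(parent, {:msg, "hello again"})
end)

receive do
  {:msg, contents} -> IO.puts(contents)
after
  1_000 -> IO.puts("no message within 1 second")
end
```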
###Code
# Experiment: not trying to understand this yet; just checking how it behaves in the Colab environment.
%%writefile chain.exs
defmodule Chain do
def counter(next_pid) do
receive do
n -> send next_pid, n + 1
end
end
def create_processes(n) do
last = Enum.reduce 1..n, self(),
fn (_, send_to) -> spawn(Chain, :counter, [send_to]) end
send last, 0
receive do
final_answer when is_integer(final_answer) ->
"Result is #{inspect(final_answer)}"
end
end
def run(n) do
IO.puts inspect :timer.tc(Chain, :create_processes, [n])
end
end
!elixir --erl "+P 1000000" -r chain.exs -e "Chain.run(1_000_000)"
###Output
{4638957, "Result is 1000000"}
###Markdown
The article at https://ubiteku.oinker.me/2015/12/22/elixir試飲-2-カルチャーショックに戸惑う-並行指向プ/ reports 7 seconds on its machine (MacBook Pro, 3 GHz Intel Core i7, 16 GB RAM); on Colab this finishes in about 5 seconds!!!! On my local Windows machine (Intel Core i5-9400, 8 GB RAM) the result was {3492935, "Result is 1000000"} — that's fast!!!! --- Comments start with `#`.
###Code
%%writefile temp.exs
# comment experiment
str = "helloworld!!!!"
IO.puts str
!elixir temp.exs
###Output
helloworld!!!!
###Markdown
--- Number bases and integers (integer)
###Code
!elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!elixir -e 'IO.puts 1000_000_00_0'
###Output
15
4095
65535
1000000000
###Markdown
Integers have no fixed upper or lower limit, so factorial(10000) can be computed (not doing that here). --- Question: how do you convert a base-10 number to base $n$? Python has `int()`, `bin()`, `oct()`, `hex()` for this.
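A quick check that integers really are arbitrary precision (a minimal sketch; 30! instead of factorial(10000) to keep the output short):
```elixir
IO.puts Enum.reduce(1..30, 1, &(&1 * &2))   # => 265252859812191058636308480000000
```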
###Code
# python
print(0b1111)
print(0o7777)
print(0xffff)
print(int('7777',8))
print(bin(15))
print(oct(4095))
print(hex(65535))
!elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!echo
# use the Integer.to_string/2 function
# <> is binary (string) concatenation
!elixir -e 'IO.puts "0b" <> Integer.to_string(15,2)'
!elixir -e 'IO.puts "0o" <> Integer.to_string(4095,8)'
!elixir -e 'IO.puts "0x" <> Integer.to_string(65535,16)'
###Output
15
4095
65535
0b1111
0o7777
0xFFFF
###Markdown
Floating-point numbers
###Code
!elixir -e 'IO.puts 1.532e-4'
# .0 or 1. (without digits on both sides of the dot) is an error
!elixir -e 'IO.puts 98099098.0809898888'
!elixir -e 'IO.puts 0.00000000000000000000000001' #=> 1.0e-26
!elixir -e 'IO.puts 90000000000000000000000000000000000000000000000000000000'
###Output
1.532e-4
98099098.08098988
1.0e-26
999999999999999999999999999999999999999
90000000000000000000000000000000000000000000000000000000
###Markdown
Strings (string). There does not seem to be a dedicated string type. --- Question: is there a function to inspect a value's type, like Python's type()? (A partial answer is sketched below.)
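A partial answer (my own sketch): there is no single type/1 function, but there are is_* test functions, and the i helper in iex prints a value's data type.
```elixir
IO.puts is_binary("abc")   # true  — double-quoted strings are UTF-8 binaries
IO.puts is_list('abc')     # true  — single quotes build a charlist
IO.puts is_atom(:ok)       # true
IO.puts is_integer(3.0)    # false — it is a float
```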
###Code
!elixir -e 'IO.puts "日本語が書けますか"'
!elixir -e 'IO.puts "日本語が書けます"'
# parentheses can be used around function arguments
# \ can be used to escape characters
!elixir -e 'IO.puts (0b1111)'
!elixir -e 'IO.puts ("にほんご\n日本語")'
!elixir -e "IO.puts ('にほんご\n\"日本語\"')"
# string concatenation is not `+` !!!!
!elixir -e 'IO.puts("ABCD"<>"EFGH")'
###Output
ABCDEFGH
###Markdown
The `<>` operator is apparently binary concatenation. --- Value interpolation: writing `#{variable}` embeds the variable's value in a string.
###Code
!elixir -e 'val = 1000; IO.puts "val = #{val}"'
###Output
val = 1000
###Markdown
--- Booleans. Elixir's boolean literals are true and false (lowercase); false and nil are falsy, and everything else is truthy.
###Code
!elixir -e 'if true do IO.puts "true" end'
!elixir -e 'if True do IO.puts "true" end'
!elixir -e 'if False do IO.puts "true" end' # because False is capitalized it is just an atom, which is truthy
!elixir -e 'if false do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if nil do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if 0 do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if (-1) do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if [] do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if "" do IO.puts "true" else IO.puts "false" end'
###Output
true
true
true
false
false
true
true
true
true
###Markdown
There is no `null`. --- **The match operator `=`**: `=` is a match operator. Through the match operator you can bind values and then match against them. When the match succeeds, the value of the expression is returned; when it fails, an error is raised.
###Code
!elixir -e 'IO.puts a = 1'
!elixir -e 'a =1; IO.puts 1 = a'
!elixir -e 'a =1; IO.puts 2 = a'
!elixir -e 'IO.inspect a = [1,2,3]' # lists cannot be printed nicely with puts, so use inspect
!elixir -e '[a,b,c] = [1,2,3]; IO.puts c; IO.puts b'
###Output
[1, 2, 3]
3
2
###Markdown
As the example above shows, when Elixir sees the match operator `=`, it does its best to make the left and right sides match. That is why `[a,b,c] = [1,2,3]` binds values to a, b, and c.
###Code
!elixir -e 'IO.inspect [1,2,[3,4,5]]'
!elixir -e '[a,b,c] = [1,2,[3,4,5]]; IO.inspect c; IO.inspect b'
# experiment => error (the shapes do not match)
!elixir -e 'IO.inspect [a,b] = [1,2,3]'
# experiment
!elixir -e 'IO.inspect a = [[1,2,3]]'
!elixir -e 'IO.inspect [a] = [[1,2,3]]'
!elixir -e '[a] = [[1,2,3]]; IO.inspect a'
# experiment => error (a and b are not bound on the right-hand side)
!elixir -e 'IO.inspect [a,b] = [a,b]'
# experiment: atoms are covered later
!elixir -e 'IO.puts a = :a'
!elixir -e 'a = :a; IO.inspect a = a'
!elixir -e 'a = :a; IO.puts a = a'
!elixir -e 'IO.puts :b'
###Output
a
:a
a
b
###Markdown
The underscore `_` ignores a value. It is a wildcard that accepts anything.
###Code
!elixir -e 'IO.inspect [1,_,_]=[1,2,3]'
!elixir -e 'IO.inspect [1,_,_]=[1,"cat","dog"]'
###Output
[1, "cat", "dog"]
###Markdown
Variables supposedly cannot be changed once bound... or so I thought, but rebinding is actually allowed.
###Code
!elixir -e 'a = 1; IO.puts a = 2'
###Output
2
###Markdown
There is a pin operator (`^`, caret) that refers to the existing value of a variable.
###Code
!elixir -e 'a = 1; IO.puts ^a = 2'
###Output
** (MatchError) no match of right hand side value: 2
(stdlib 3.15) erl_eval.erl:450: :erl_eval.expr/5
(stdlib 3.15) erl_eval.erl:893: :erl_eval.expr_list/6
(stdlib 3.15) erl_eval.erl:408: :erl_eval.expr/5
(elixir 1.12.0) lib/code.ex:656: Code.eval_string_with_error_handling/3
###Markdown
Memo $\quad$ I cannot help thinking it might have been simpler to forbid rebinding altogether, as ordinary functional languages do. Is there something like a const declaration to make a variable immutable? At least lists themselves are immutable, which is reassuring.
###Code
# capitalize: upper-case the first letter
!elixir -e 'IO.puts name = String.capitalize "elixir"'
# upcase: convert the whole string to upper case
!elixir -e 'IO.puts String.upcase "elixir"'
###Output
ELIXIR
###Markdown
Atoms. An atom is a constant whose name is its value. **Prefixing a name with a colon `:` makes it an atom.** Atom names may contain UTF-8 characters (including symbols), digits, underscores `_`, and `@`; `!` and `?` are allowed only as the final character. :fred $\quad$ :is_binary? $\quad$ :var@2 $\quad$ :<> $\quad$ :=== $\quad$ :"func/3" $\quad$ :"long john silver" $\quad$ :эликсир $\quad$ :mötley_crüe. Memo:
###Code
# experiment: atoms can be used without any prior declaration
!elixir -e 'IO.puts :fred'
# experiment
!elixir -e 'IO.puts true === :true'
!elixir -e 'IO.puts :true'
!elixir -e 'IO.puts false === :false'
# experiment
!elixir -e 'IO.puts :fred'
!elixir -e 'IO.puts :is_binary?'
!elixir -e 'IO.puts :var@2'
!elixir -e 'IO.puts :<>'
!elixir -e 'IO.puts :==='
# atoms that need double quotes (like :"func/3") work in iex but not in these shell one-liners
# they fail with an unexpected token: "" error
# this is not specific to the Colab environment; a normal shell behaves the same
# they work fine in a program saved to a file, so this is not a problem
# !elixir -e 'IO.puts :"func/3"'
# !elixir -e 'IO.puts :"long john silver"'
!elixir -e 'IO.puts :эликсир'
!elixir -e 'IO.puts :mötley_crüe'
!elixir -e 'IO.puts :日本語はどうか'
###Output
fred
is_binary?
var@2
<>
===
эликсир
mötley_crüe
日本語はどうか
###Markdown
Operators
###Code
!elixir -e 'IO.puts 1 + 2'
!elixir -e 'x = 10; IO.puts x + 1'
!elixir -e 'IO.puts 1 - 2'
!elixir -e 'x = 10; IO.puts x - 1'
!elixir -e 'IO.puts 5 * 2'
!elixir -e 'x = 10; IO.puts x * 4'
!echo
!elixir -e 'IO.puts 5 / 2'
!elixir -e 'x = 10; IO.puts x / 3'
# use the div function when you want an integer result instead of a float
!elixir -e 'IO.puts div(10,5)'
!elixir -e 'IO.puts div(10,4)'
# use the rem function to get the remainder (modulo) of a division
!elixir -e 'IO.puts rem(10,4)'
!elixir -e 'IO.puts rem(10,3)'
!elixir -e 'IO.puts rem(10,2)'
# comparison operators
!elixir -e 'IO.puts 1 == 1'
!elixir -e 'IO.puts 1 != 1'
!elixir -e 'IO.puts ! (1 != 1)'
!echo
!elixir -e 'IO.puts 20.0 == 20'
!elixir -e 'IO.puts 20.0 === 20'
!elixir -e 'IO.puts 20.0 !== 20'
# logical operators
# logical OR
!elixir -e 'IO.puts "ABC" == "ABC" || 20 == 30'
!elixir -e 'IO.puts "ABC" == "abc" || 20 == 30'
!echo
# logical AND
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 20'
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 30'
!elixir -e 'IO.puts "ABC" == "def" && 10 > 100'
!echo
# negation
!elixir -e 'IO.puts !("ABC" == "ABC")'
!elixir -e 'IO.puts !("ABC" == "DEF")'
###Output
true
false
true
false
false
false
true
###Markdown
Ranges. Memo $\quad$ a range is not a separate type but a struct. A struct? It is written `start..end`, but does simply writing 1..10 give you a range? (A quick check is sketched below.)
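A quick check of the question above (a sketch; the exact fields shown depend on the Elixir version): writing 1..10 does give you a %Range{} struct.
```elixir
IO.inspect 1..10                   # 1..10
IO.inspect 1..10, structs: false   # %{__struct__: Range, first: 1, last: 10, step: 1}
IO.puts is_struct(1..10)           # true
```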
###Code
!elixir -e 'IO.inspect Enum.to_list(1..3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//-3)'
!elixir -e 'IO.inspect Enum.to_list(10..0//-3)'
!elixir -e 'IO.inspect Enum.to_list(1..1)'
!elixir -e 'IO.inspect Enum.to_list(1..-1)'
!elixir -e 'IO.inspect Enum.to_list(1..1//2)'
!elixir -e 'IO.inspect Enum.to_list(1..-1//2)'
!elixir -e 'IO.inspect Enum.to_list(1..-1//-2)'
!elixir -e 'IO.inspect 1..9//2'
###Output
1..9//2
###Markdown
Regular expressions. Regexes are also structs, not a separate type.
###Code
!elixir -e 'IO.inspect Regex.run ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.scan ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.split ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.replace ~r{[aiueo]},"catapillar", "*"'
###Output
"c*t*p*ll*r"
###Markdown
Collection types: tuples. Tuples are defined with braces. Like all Elixir collections, tuples place no restriction on the types of their elements. They usually hold 2 to 4 elements; for more, consider a map or a struct. Tuples are handy as function return values and are used together with pattern matching. --- cf. other uses of braces besides tuples: * value interpolation `#{variable}` * regular expressions `~r{}` * maps `%{}`
###Code
!elixir -e 'IO.inspect {3.14, :pie, "Apple"}'
!elixir -e '{status, count, action} = {3.14, :pie, "next"}; IO.puts action'
# experiment
# an example of how tuples are used
!echo hello > temp.txt
!elixir -e '{status, file} = File.open("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp02.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.write("temp.txt", "goodbye"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp.txt"); IO.inspect {status, file}'
# experiment: does ++ work on tuples? => no, and neither does <>
# !elixir -e 'IO.inspect {3.14, :pie, "Apple"} ++ {3}'
# experiment: does hd work on tuples? => no
# !elixir -e 'IO.inspect hd {3.14, :pie, "Apple"}'
# experiment: does pattern matching work on tuples? => yes
!elixir -e '{a,b,c} = {3.14, :pie, "Apple"}; IO.inspect [c,a,b]'
# experiment
# swapping values
!elixir -e 'a=1; b=3; {b,a}={a,b}; IO.inspect {a,b}'
!elixir -e 'a=1; b=3; c=5; d= 7; {d,c,b,a}={a,b,c,d}; IO.inspect {a,b,c,d}'
# experiment
# can a tuple contain another tuple?
!elixir -e 'IO.inspect {3.14, :pie, "Apple", {3}}'
###Output
{3.14, :pie, "Apple", {3}}
###Markdown
Lists. Note that Elixir lists are not the arrays of other languages; they are closer to Lisp lists. A non-empty list has a head (hd) and a tail (tl): hd is the first element and tl is everything after it.
###Code
# lists
!elixir -e 'IO.inspect [3.14, :pie, "Apple"]'
!elixir -e 'IO.inspect hd [3.14]'
!elixir -e 'IO.inspect tl [3.14]'
# prepending to a list (fast)
!elixir -e 'IO.inspect ["π" | [3.14, :pie, "Apple"]]'
# appending to a list (slow)
!elixir -e 'IO.inspect [3.14, :pie, "Apple"] ++ ["Cherry"]'
###Output
["π", 3.14, :pie, "Apple"]
[3.14, :pie, "Apple", "Cherry"]
###Markdown
The code cells above and below concatenate lists using the ++/2 operator. In the notation `++/2`, `++` is the operator itself and `/2` is its arity (the number of arguments). --- Question $\quad$ what exactly is arity? --- Question $\quad$ why is list concatenation `++` while string concatenation is `<>`? Is there operator overloading? Isn't a string a list? Are the length functions separate as well? (A partial answer is sketched below.)
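A partial answer to the questions above (my own sketch): a double-quoted string is a UTF-8 binary, not a list, which is why it gets `<>` and `String.length/1`, while lists (including single-quoted charlists) get `++` and `length/1`.
```elixir
IO.puts String.length("hello")   # 5     — strings are binaries
IO.puts length([1, 2, 3])        # 3     — length/1 works on lists
IO.puts is_list("hello")         # false
IO.puts is_binary("hello")       # true
IO.inspect 'hello' ++ ' world'   # 'hello world' — charlists are lists, so ++ works
```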
###Code
# list concatenation
!elixir -e 'IO.inspect [1, 2] ++ [3, 4, 1]'
# list subtraction
# the --/2 operator is fine even when you subtract values that are not present
!elixir -e 'IO.inspect ["foo", :bar, 42] -- [42, "bar"]'
# with duplicates, for each element on the right-hand side, the first matching occurrence on the left-hand side is removed in turn
!elixir -e 'IO.inspect [1,2,2,3,2,3] -- [1,2,3,2]'
# list subtraction uses strict comparison when matching values
!elixir -e 'IO.inspect [2] -- [2.0]'
!elixir -e 'IO.inspect [2.0] -- [2.0]'
# head /tail
!elixir -e 'IO.inspect hd [3.14, :pie, "Apple"]'
!elixir -e 'IO.inspect tl [3.14, :pie, "Apple"]'
###Output
3.14
[:pie, "Apple"]
###Markdown
--- A list can also be split into head and tail using * pattern matching * the cons operator (`|`).
###Code
!elixir -e '[head | tail] = [3.14, :pie, "Apple"]; IO.inspect head; IO.inspect tail'
###Output
3.14
[:pie, "Apple"]
###Markdown
Keyword lists. Keyword lists and maps are Elixir's associative collections. A keyword list is a special list of two-element tuples whose first element is an atom, and it has the same performance characteristics as a list.
###Code
# keyword lists
!elixir -e 'IO.inspect [foo: "bar", hello: "world"]'
# the same thing written as a list of tuples
!elixir -e 'IO.inspect [{:foo, "bar"}, {:hello, "world"}]'
!elixir -e 'IO.inspect [foo: "bar", hello: "world"] == [{:foo, "bar"}, {:hello, "world"}]'
###Output
[foo: "bar", hello: "world"]
[foo: "bar", hello: "world"]
true
###Markdown
Three characteristics of keyword lists: * keys are atoms, * keys are ordered, * key uniqueness is not guaranteed. For these reasons, keyword lists are commonly used to pass options to functions (see the sketch below).
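A small sketch of the "options" use mentioned above: many functions, e.g. `String.split/3` and `IO.inspect/2`, take a trailing keyword list of options, and the brackets can be dropped at the call site.
```elixir
IO.inspect String.split("a,b,,c", ",")               # ["a", "b", "", "c"]
IO.inspect String.split("a,b,,c", ",", trim: true)   # ["a", "b", "c"]
IO.inspect Enum.to_list(1..10), limit: 3             # [1, 2, 3, ...]
```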
###Code
# experiment: the square brackets of the keyword list can be omitted
!elixir -e 'IO.inspect foo: "bar", hello: "world"'
# experiment
!elixir -e 'IO.inspect [1, fred: 1, dave: 2]'
!elixir -e 'IO.inspect {1, fred: 1, dave: 2}'
!elixir -e 'IO.inspect {1, [{:fred,1},{:dave, 2}]}'
###Output
[1, {:fred, 1}, {:dave, 2}]
{1, [fred: 1, dave: 2]}
{1, [fred: 1, dave: 2]}
###Markdown
Maps: * unlike keyword lists, keys can be of any type, * they are unordered, * key uniqueness is guaranteed — adding a duplicate key replaces the previous value, * variables can be used as map keys, * maps are defined with the `%{}` syntax.
###Code
!elixir -e 'IO.inspect %{:foo => "bar", "hello" => :world}'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map[:foo]'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map["hello"]'
!echo
!elixir -e 'key = "hello"; IO.inspect %{key => "world"}'
!echo
!elixir -e 'IO.inspect %{:foo => "bar", :foo => "hello world"}'
###Output
%{:foo => "bar", "hello" => :world}
"bar"
:world
%{"hello" => "world"}
warning: key :foo will be overridden in map
nofile:1
%{foo: "hello world"}
###Markdown
Maps whose keys are all atoms have a special shorthand syntax.
###Code
!elixir -e 'IO.inspect %{foo: "bar", hello: "world"} == %{:foo => "bar", :hello => "world"}'
# in addition, there is special syntax for accessing atom keys
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect map.hello'
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect map[:hello]'
!elixir -e 'map = %{:foo => "bar", :hello => "world"}; IO.inspect map[:hello]'
###Output
"world"
"world"
"world"
###Markdown
--- Question about the special map syntax: 1. using a colon `:` instead of `=>`, and 2. using a dot `.` instead of `[]` to read an element — isn't this unnecessary? Or is the point simply that it is unnecessary but looks nicer? Which form is normally used? It feels like it just complicates the syntax. Probably the influence of Python dicts using `:`, and of Ruby using `=>` but offering `:` as the now-dominant sugar, means appearance matters and this is how it ended up. If atom keys are the assumed default it may well improve productivity: the leading colon marking the key disappears, the colon is shorter than the fat arrow, and the map definition reads more compactly; the same goes for reading elements with a dot. So presumably this shorthand is what you use by default. (One practical difference between the two access forms is sketched below.)
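One practical difference between the two access forms (a small sketch): bracket access returns nil for a missing key, while dot access raises a KeyError, so the dot form also asserts that the atom key is present.
```elixir
map = %{foo: "bar"}
IO.inspect map[:missing]   # nil
IO.inspect map.foo         # "bar"
# map.missing would raise KeyError
```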
###Code
# there is special syntax for updating a map (a new map is created)
# this syntax only works when the key already exists in the map
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect %{map | foo: "baz"}'
# to add a new key, use `Map.put/3`
!elixir -e 'map = %{hello: "world"}; IO.inspect Map.put(map, :foo, "baz")'
###Output
%{foo: "baz", hello: "world"}
###Markdown
--- Question: I do not really understand binaries yet, so they will be revisited separately. Binaries (binary).
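Two things to revisit when coming back to binaries (my own sketch): binaries, including strings, can be destructured with pattern matching, down to the bit level.
```elixir
<<first, rest::binary>> = "hello"
IO.puts first                         # 104 — the first byte, ?h
IO.puts rest                          # "ello"

<<a::size(4), b::size(4)>> = <<0xAB>>
IO.inspect {a, b}                     # {10, 11}
```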
###Code
# binaries
!elixir -e 'IO.inspect <<1,2>>'
!elixir -e 'IO.inspect <<1,10>>'
!elixir -e 'bin = <<1,10>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect bin'
!elixir -e 'IO.puts Integer.to_string(213,2)'
!elixir -e 'IO.puts 0b11'
!elixir -e 'IO.puts 0b0101'
!echo
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect :io.format("~-8.2b~n",:binary.bin_to_list(bin))'
!elixir -e 'IO.inspect <<1,2>> <> <<3>>'
###Output
<<1, 2, 3>>
###Markdown
----** Date and Time **
###Code
# Date and Time
!elixir -e 'IO.inspect Date.new(2021,6,2)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.day_of_week(d1)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.add(d1,7)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1, structs: false'
###Output
~D[2021-06-02]
3
~D[2021-06-09]
%{__struct__: Date, calendar: Calendar.ISO, day: 2, month: 6, year: 2021}
###Markdown
`~D[...]` and `~T[...]` are Elixir sigils; they are explained in the strings-and-binaries section. About help. Memo $\quad$ how to look up functions and how to use the helpers (help, type, info, information, and so on). As in the code cell below, list the function names of the module you are interested in and then read the help for a specific function; that tells you quite a lot. The lines are commented out because the output is large, so it is suppressed for now. Concretely, put the module name in place of Enum to list its functions, copy the output with Ctrl+A Ctrl+C and paste it into VS Code or similar to read it; then put the function you want to look up in place of Enum.all?/1, copy that output, and read it the same way.
###Code
# !elixir -e 'Enum.__info__(:functions) |> Enum.each(fn({function, arity}) -> IO.puts "#{function}/#{arity}" end)'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h Enum.all?/1'
# to see the documentation for h itself
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h'
# there is also i
# !elixir -e 'x = [3,2]; require IEx.Helpers;IEx.Helpers.i x'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h IO'
###Output
_____no_output_____
###Markdown
The Enum module. Enum is a set of algorithms for enumerating over collections such as lists: * all?, any? * chunk_every, chunk_by, map_every * each * map, filter, reduce * min, max * sort, uniq, uniq_by * the capture operator `&`
###Code
# all? takes a function and returns true when it is true for every element of the list
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 3 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) >1 end)'
# any? returns true if at least one element evaluates to true
!elixir -e 'IO.puts Enum.any?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 5 end)'
# chunk_every splits a list into smaller groups
!elixir -e 'IO.inspect Enum.chunk([1, 2, 3, 4, 5, 6], 2)'
!elixir -e 'IO.inspect Enum.chunk([1, 2, 3, 4, 5, 6], 3)'
!elixir -e 'IO.inspect Enum.chunk([1, 2, 3, 4, 5, 6], 4)'
# chunk_by splits the list whenever the return value of the function changes
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five"], fn(x) -> String.length(x) end)'
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five", "six"], fn(x) -> String.length(x) end)'
# map_every applies the function to every nth element
!elixir -e 'IO.inspect Enum.map_every(1..10, 3, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 1, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 0, fn x -> x + 1000 end)'
# each iterates without producing a new value; the return value is the atom :ok
!elixir -e 'IO.inspect Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'
!elixir -e 'IO.puts Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'
# map applies the function to each element and produces a new list
!elixir -e 'IO.inspect Enum.map([0, 1, 2, 3], fn(x) -> x - 1 end)'
# min finds the smallest value; an empty list raises an error
# you can pass a function that produces a fallback minimum in case the list is empty
!elixir -e 'IO.inspect Enum.min([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.min([], fn -> :foo end)'
# max (max/1) returns the largest value
!elixir -e 'IO.inspect Enum.max([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.max([], fn -> :bar end)'
# filter keeps only the elements for which the given function returns true
!elixir -e 'IO.inspect Enum.filter([1, 2, 3, 4], fn(x) -> rem(x, 2) == 0 end)'
!elixir -e 'IO.inspect Enum.filter([], fn(x) -> rem(x, 2) == 0 end)'
# reduce folds the list into a single value according to the function; an accumulator can be supplied
# if no accumulator is given, the first element is used
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], 10, fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce(["a","b","c"], "1", fn(x,acc)-> x <> acc end)'
# sort `sort/1` はソートの順序に Erlangの Term 優先順位 を使う
!elixir -e 'IO.inspect Enum.sort([5, 6, 1, 3, -1, 4])'
!elixir -e 'IO.inspect Enum.sort([:foo, "bar", Enum, -1, 4])'
# `sort/2` は、順序を決める為の関数を渡すことができる
!elixir -e 'IO.inspect Enum.sort([%{:val => 4}, %{:val => 1}], fn(x, y) -> x[:val] > y[:val] end)'
# なしの場合
!elixir -e 'IO.inspect Enum.sort([%{:count => 4}, %{:count => 1}])'
# sort/2 に :asc または :desc をソート関数として渡すことができる
!elixir -e 'IO.inspect Enum.sort([2, 3, 1], :desc)'
# uniq 重複した要素を取り除く
!elixir -e 'IO.inspect Enum.uniq([1, 2, 3, 2, 1, 1, 1, 1, 1])'
# uniq_by 重複した要素を削除するが、ユニークかどうか比較を行う関数を渡せる
!elixir -e 'IO.inspect Enum.uniq_by([%{x: 1, y: 1}, %{x: 2, y: 1}, %{x: 3, y: 3}], fn coord -> coord.y end)'
###Output
[1, 2, 3]
[%{x: 1, y: 1}, %{x: 3, y: 3}]
###Markdown
キャプチャ演算子 `&` を使用した Enum と無名関数 elixir の Enum モジュール内の多くの関数は、引数として無名関数を取る。これらの無名関数は、多くの場合、キャプチャ演算子 `&` を使用して省略形で記述される。
###Code
# 無名関数でのキャプチャ演算子の使用
!elixir -e 'IO.inspect Enum.map([1,2,3], fn number -> number + 3 end)'
!elixir -e 'IO.inspect Enum.map([1,2,3], &(&1 + 3))'
!elixir -e 'plus_three = &(&1 + 3);IO.inspect Enum.map([1,2,3], plus_three)'
# Enum.all? でもキャプチャ演算子が使えるか
# all? 関数を引数で受け取り、リストの全体が true の時、true を返す
# !elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 3 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], &(String.length(&1)==3))'
# !elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) >1 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], &(String.length(&1)>1))'
###Output
false
true
###Markdown
--- パターンマッチングパターンマッチングでは、値、データ構造、関数をマッチすることができる。* マッチ演算子* ピン演算子
###Code
# マッチ演算子 `=` はマッチ演算子である。 マッチ演算子を通して値を代入し、
# その後、マッチさせることができる。マッチすると、方程式の結果が返され、
# 失敗すると、エラーになる
!elixir -e 'IO.puts x = 1'
!elixir -e 'x = 1;IO.puts 1 = x'
# !elixir -e 'x = 1;IO.puts 2 = x'
#=> (MatchError) no match of right hand side value: 1
# リストでのマッチ演算子
!elixir -e 'IO.inspect list = [1, 2, 3]'
!elixir -e 'list = [1, 2, 3]; IO.inspect [1, 2, 3] = list'
# !elixir -e 'list = [1, 2, 3]; IO.inspect [] = list'
#=> (MatchError) no match of right hand side value: [1, 2, 3]
!elixir -e 'list = [1, 2, 3]; IO.inspect [1 | tail] = list'
!elixir -e 'list = [1, 2, 3]; [1 | tail] = list; IO.inspect tail'
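# 実験 先頭の複数要素をまとめて取り出すこともできる
!elixir -e 'list = [1, 2, 3, 4]; [a, b | rest] = list; IO.inspect {a, b, rest}'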
# タプルとマッチ演算子
!elixir -e 'IO.inspect {:ok, value} = {:ok, "Successful!"}'
!elixir -e '{:ok, value} = {:ok, "Successful!"}; IO.inspect value'
###Output
{:ok, "Successful!"}
"Successful!"
###Markdown
---**ピン演算子**マッチ演算子は左辺に変数が含まれている時に代入操作を行う。 この変数を再び束縛するという挙動は望ましくない場合がある。 そうした状況のために、ピン演算子 `^` がある。ピン演算子で変数を固定すると、新しく再束縛するのではなく既存の値とマッチする。
###Code
# ピン演算子
!elixir -e 'IO.inspect x = 1'
# !elixir -e 'x = 1; IO.inspect ^x = 2'
#=> ** (MatchError) no match of right hand side value: 2
!elixir -e 'x = 1; IO.inspect {x, ^x} = {2, 1}'
!elixir -e 'x = 1;{x, ^x} = {2, 1}; IO.inspect x'
!echo
!elixir -e 'IO.inspect key = "hello"'
!elixir -e 'key = "hello"; IO.inspect %{^key => value} = %{"hello" => "world"}'
!elixir -e 'key = "hello"; %{^key => value} = %{"hello" => "world"}; IO.inspect value'
!elixir -e 'key = "hello"; %{^key => value} = %{"hello" => "world"}; IO.inspect value'
# 関数の clause でのピン演算子
!elixir -e 'IO.inspect greeting = "Hello"'
!elixir -e 'greeting = "Hello"; IO.inspect greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Hello","Sean")'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Mornin","Sean")'
###Output
"Hello"
#Function<43.65746770/2 in :erl_eval.expr/5>
"Hi Sean"
"Mornin,Sean"
###Markdown
制御構造 control structure* if と unless* case* cond* with if と unless elixir の if と unless は ruby と同じ。elixir は if と unless はマクロとして定義されている。この実装は kernel module で知ることができる。elixir では偽とみなされる値は nil と真理値の false だけだということに留意。
###Code
%%writefile temp.exs
IO.puts (
if String.valid?("Hello") do
"Valid string!"
else
"Invalid string."
end)
!elixir temp.exs
%%writefile temp.exs
if "a string value" do
IO.puts "Truthy"
end
!elixir temp.exs
# unless/2 は if/2 の逆で、条件が否定される時だけ作用する
%%writefile temp.exs
unless is_integer("hello") do
IO.puts "Not an Int"
end
!elixir temp.exs
# 実験 シェルワンライナー版 do や end の前後にセミコロンは要らない
!elixir -e 'unless is_integer("hello") do IO.puts "Not an Int" end'
# 複数のパターンにマッチする場合、case/2 を使う
%%writefile temp.exs
IO.puts(
case {:error, "Hello World"} do
{:ok, result} -> result
{:error, _} -> "Uh oh!"
_ -> "Catch all"
end
)
!elixir temp.exs
# アンダースコア _ 変数は case/2 命令文の中に含まれる重要な要素
# これが無いと、マッチするものが見あたらない場合にエラーが発生する
# エラーの例
!elixir -e 'case :even do :odd -> IO.puts "Odd" end'
# アンダースコア _ を"他の全て"にマッチする else と考えること
!elixir -e 'case :even do :odd -> IO.puts "Odd"; _ -> IO.puts "Not odd" end'
# case/2 はパターンマッチングに依存しているため、パターンマッチングと同じルールや制限が全て適用される
# 既存の変数に対してマッチさせようという場合にはピン ^ 演算子を使う
!elixir -e 'pie=3.14; IO.puts(case "cherry pie" do ^pie -> "Not so tasty"; pie -> "I bet #{pie} is tasty" end)'
!elixir -e 'pie=3.14; IO.puts(case "cherry pie" do pie -> "Not so tasty"; pie -> "I bet #{pie} is tasty" end)'
# case/2 はガード節に対応している
# 公式ドキュメントの Expressions allowed in guard clauses を参照
!elixir -e 'IO.puts(case {1, 2, 3} do {1, x, 3} when x > 0 -> "Will match"; _ -> "Wont match" end)'
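# 実験 ガード節には is_binary/1 などの型チェック関数も使える
!elixir -e 'IO.puts(case "hello" do x when is_binary(x) -> "binary: #{x}"; _ -> "other" end)'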
###Output
Will match
###Markdown
---ガード節とは何か?公式ドキュメントの Expressions allowed in guard clauses を参照
###Code
# cond
!elixir -e 'IO.puts (cond do 2+2==5 -> "This will not be true"; 2*2==3 -> "Nor this"; 1+1 == 2 -> "But this will" end)'
# cond も case と同様マッチしない場合にエラーになるので、true になる条件を定義する
!elixir -e 'IO.puts (cond do 7+1==0 -> "Incorrect"; true -> "Catch all" end)'
# with
# 特殊形式の with/1 はネストされた case/2 文やきれいにパイプできない状況に便利
# with/1 式はキーワード, ジェネレータ, そして式から成り立っている
# ジェネレータについてはリスト内包表記のところで詳しく述べる
# `<-` の右側と左側を比べるのにパターンマッチングが使われる
!elixir -e 'user=%{first: "Sean", last: "Callan"}; IO.inspect user'
!elixir -e 'user=%{first: "Sean", last: "Callan"}; with {:ok, first} <- Map.fetch(user, :first), {:ok, last} <- Map.fetch(user, :last), do: IO.puts last <> ", " <> first'
# シェルワンライナーが長いのでファイルにする
%%writefile temp.exs
user=%{first: "Sean", last: "Callan"}
with {:ok, first} <- Map.fetch(user, :first),
{:ok, last} <- Map.fetch(user, :last),
do: IO.puts last <> ", " <> first
!elixir temp.exs
# 式がマッチに失敗した場合
# Map.fetch(user, :last) が :error を返すので {:ok, last} にマッチせず、do ブロックは実行されない。 with 式はマッチしなかった値 :error を返す
%%writefile temp.exs
user = %{first: "doomspork"}
with {:ok, first} <- Map.fetch(user, :first),
{:ok, last} <- Map.fetch(user, :last),
do: IO.puts last <> ", " <> first
!elixir temp.exs
# with/1 で else が使える
%%writefile temp.exs
import Integer
m = %{a: 1, c: 3}
a =
with {:ok, number} <- Map.fetch(m, :a),
true <- is_even(number) do
IO.puts "#{number} divided by 2 is #{div(number, 2)}"
:even
else
:error ->
IO.puts("We don't have this item in map")
:error
_ ->
IO.puts("It is odd")
:odd
end
IO.inspect a
!elixir temp.exs
###Output
We don't have this item in map
:error
###Markdown
関数 Function
###Code
# 関数型言語では、関数は第一級オブジェクト first class object である
# ここでは無名関数、名前付き関数、アリティ、パターンマッチング、プライベート関数、ガード、デフォルト引数について学ぶ
# 無名関数 anonymous function
# fn end のキーワードを用い、 引数 `->` 関数定義 の形で定義する
%%writefile temp.exs
sum = fn (a, b) -> a + b end
IO.puts sum.(2, 3)
!elixir temp.exs
# シェルワンライナーで書いてみる
!elixir -e 'sum=fn(a,b)->a+b end;IO.puts sum.(2,3)'
# elixir では通常関数定義に省略記号 & を使う (キャプチャ演算子)
!elixir -e 'sum = &(&1 + &2); IO.puts sum.(2, 3)'
###Output
5
###Markdown
---質問 無名関数に引数を渡して結果を得るのはどうやるのか&(&1 + &2).(2, 3) として出来なかった。 => 出来た。!elixir -e 'IO.puts ((&(&1 + &2)).(2,3))'
###Code
!elixir -e 'IO.puts ((fn (a,b) -> a + b end).(2,3))'
!elixir -e 'IO.puts ((&(&1 + &2)).(2,3))'
# 関数定義にパターンマッチングが使える
%%writefile temp.exs
handle_result = fn
{:ok, _result} -> IO.puts "Handling result..."
{:ok, _} -> IO.puts "This would be never run as previous will be matched beforehand."
{:error} -> IO.puts "An error has occurred!"
end
some_result = 1
handle_result.({:ok, some_result}) #=> Handling result...
handle_result.({:error}) #=> An error has occured!
!elixir temp.exs
# 名前付き関数
# 名前付き関数はモジュール内部で def キーワードを用いて定義する
%%writefile temp.exs
defmodule Greeter do
def hello(name) do
"Hello, " <> name
end
end
IO.puts Greeter.hello("Sean")
!elixir temp.exs
# 次のような書き方もできる do: を使う
%%writefile temp.exs
defmodule Greeter do
def hello(name), do: "Hello, " <> name
end
IO.puts Greeter.hello("Sean")
!elixir temp.exs
# 実験 シェルワンライナーで出来るか
!elixir -e 'defmodule Greeter do def hello(name) do "Hello, " <> name end end;IO.puts Greeter.hello("Sean")'
# 実験 シェルワンライナーで `, do:` 構文が使えるか
!elixir -e 'defmodule Greeter do def hello(name),do: "Hello, " <> name end;IO.puts Greeter.hello("Sean")'
# 再帰
%%writefile temp.exs
defmodule Length do
def of([]), do: 0
def of([_ | tail]), do: 1 + of(tail)
end
IO.puts Length.of []
IO.puts Length.of [1, 2, 3]
!elixir temp.exs
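# 実験 同じ形の再帰でリストの合計を求める例
!elixir -e 'defmodule MySum do def of([]), do: 0; def of([head | tail]), do: head + of(tail) end; IO.puts MySum.of([1, 2, 3, 4])'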
# アリティとは関数の引数の数
# 引数の数が違えば別の関数
%%writefile temp.exs
defmodule Greeter2 do
def hello(), do: "Hello, anonymous person!" # hello/0
def hello(name), do: "Hello, " <> name # hello/1
def hello(name1, name2), do: "Hello, #{name1} and #{name2}" # hello/2
end
IO.puts Greeter2.hello()
IO.puts Greeter2.hello("Fred")
IO.puts Greeter2.hello("Fred", "Jane")
!elixir temp.exs
# 関数とパターンマッチング
%%writefile temp.exs
defmodule Greeter1 do
def hello(%{name: person_name}) do
IO.puts "Hello, " <> person_name
end
end
fred = %{
name: "Fred",
age: "95",
favorite_color: "Taupe"
}
IO.puts Greeter1.hello(fred) #=> Hello, fred になる
#IO.puts Greeter1.hello(%{age: "95", favorite_color: "Taupe"}) #=> (FunctionClauseError) no function clause matching in Greeter1.hello/1
!elixir temp.exs
# Fredの名前を person_name にアサインしたいが、人物マップ全体の値も保持したいという場合
# マップを引数にすれば、別々の変数に格納することができる
%%writefile temp.exs
defmodule Greeter2 do
def hello(%{name: person_name} = person) do
IO.puts "Hello, " <> person_name
IO.inspect person
end
end
fred = %{
name: "Fred",
age: "95",
favorite_color: "Taupe"
}
Greeter2.hello(fred)
IO.puts("")
Greeter2.hello(%{name: "Fred"})
IO.puts("")
# Greeter2.hello(%{age: "95", favorite_color: "Taupe"}) #=> (FunctionClauseError) no function clause matching in Greeter2.hello/1
!elixir temp.exs
###Output
Hello, Fred
%{age: "95", favorite_color: "Taupe", name: "Fred"}
Hello, Fred
%{name: "Fred"}
###Markdown
###Code
# %{name: person_name} と person の順序を入れ替えても、それぞれがfredとマッチングするので同じ結果となる
# 変数とマップを入れ替えてみる
# それぞれがパターンマッチしているので結果は同じになる
%%writefile temp.exs
defmodule Greeter3 do
def hello(person = %{name: person_name}) do
IO.puts "Hello, " <> person_name
IO.inspect person
end
end
fred = %{
name: "Fred",
age: "95",
favorite_color: "Taupe"
}
Greeter3.hello(fred)
IO.puts("")
Greeter3.hello(%{name: "Fred"})
!elixir temp.exs
# プライベート関数
# プライベート関数は defp を用いて定義する
# そのモジュール自身の内部からのみ呼び出すことが出来る
%%writefile temp.exs
defmodule Greeter do
def hello(name), do: phrase() <> name
defp phrase, do: "Hello, "
end
IO.puts Greeter.hello("Sean") #=> "Hello, Sean"
# IO.puts Greeter.phrase #=> (UndefinedFunctionError) function Greeter.phrase/0 is undefined or private
!elixir temp.exs
# ガード
%%writefile temp.exs
defmodule Greeter do
def hello(names) when is_list(names) do
names
|> Enum.join(", ")
|> hello
end
def hello(name) when is_binary(name) do
phrase() <> name
end
defp phrase, do: "Hello, "
end
IO.puts Greeter.hello ["Sean", "Steve"]
IO.puts Greeter.hello "Bill"
!elixir temp.exs
###Output
Hello, Sean, Steve
Hello, Bill
###Markdown
---質問 Elixir のガードは Haskell のガードと同じか?
###Code
# デフォルト引数
# デフォルト値が欲しい場合、引数 \\ デフォルト値の記法を用いる
%%writefile temp.exs
defmodule Greeter do
def hello(name, language_code \\ "en") do
phrase(language_code) <> name
end
defp phrase("en"), do: "Hello, "
defp phrase("es"), do: "Hola, "
end
IO.puts Greeter.hello("Sean", "en")
IO.puts Greeter.hello("Sean")
IO.puts Greeter.hello("Sean", "es")
!elixir temp.exs
# ガードとデフォルト引数を組み合わせる場合
# 混乱を避けるためデフォルト引数を処理する定義を先に置く
%%writefile temp.exs
defmodule Greeter do
def hello(names, language_code \\ "en")
def hello(names, language_code) when is_list(names) do
names
|> Enum.join(", ")
|> hello(language_code)
end
def hello(name, language_code) when is_binary(name) do
phrase(language_code) <> name
end
defp phrase("en"), do: "Hello, "
defp phrase("es"), do: "Hola, "
end
IO.puts Greeter.hello ["Sean", "Steve"] #=> "Hello, Sean, Steve"
IO.puts Greeter.hello ["Sean", "Steve"], "es" #=> "Hola, Sean, Steve"
IO.puts Greeter.hello "Bob", "es"
!elixir temp.exs
# パイプライン演算子
# パイプライン演算子 `|>` はある式の結果を別の式に渡す
# 関数のネストを理解しやすくするためのもの
# 文字列をトークン化する、単語に分ける
!elixir -e 'IO.inspect "Elixir rocks" |> String.split()'
!elixir -e 'IO.inspect "Elixir rocks" |> String.upcase() |> String.split()'
# パイプラインを使う場合に関数の括弧は省略せずには入れた方がわかりやすい
!elixir -e 'IO.inspect "elixir" |> String.ends_with?("ixir")'
###Output
true
###Markdown
モジュール ---質問 いままで IO.puts とか一々モジュール名を付けていたが、elixir ではこれが普通なのか?関数を作る際に一々モジュールを作成していたがあれで既存のモジュールに付け加えられているのか?
###Code
# モジュールの基本的な例
%%writefile temp.exs
defmodule Example do
def greeting(name) do
"Hello #{name}."
end
end
IO.puts Example.greeting "Sean"
!elixir temp.exs
# モジュールはネストする事ができる
%%writefile temp.exs
defmodule Example.Greetings do
def morning(name) do
"Good morning #{name}."
end
def evening(name) do
"Good night #{name}."
end
end
IO.puts Example.Greetings.morning "Sean"
!elixir temp.exs
# モジュールの属性
# モジュール属性は Elixir では一般に定数として用いられる
# Elixirには予約されている属性がある
# moduledoc — 現在のモジュールにドキュメントを付ける
# doc — 関数やマクロについてのドキュメント管理
# behaviour — OTPまたはユーザが定義した振る舞い(ビヘイビア)に用いる
%%writefile temp.exs
defmodule Example do
@greeting "Hello"
def greeting(name) do
~s(#{@greeting} #{name}.)
end
end
IO.puts Example.greeting "tak"
!elixir temp.exs
# 構造体 struct
# 構造体は定義済みのキーの一群とデフォルト値を持つマップである
# 定義するには defstruct を用いる
%%writefile temp.exs
defmodule Example.User do
defstruct name: "Sean", roles: []
end
defmodule Main do
IO.inspect %Example.User{}
IO.inspect %Example.User{name: "Steve"}
IO.inspect %Example.User{name: "Steve", roles: [:manager]}
end
!elixir temp.exs
# 構造体の更新
%%writefile temp.exs
defmodule Example.User do
defstruct name: "Sean", roles: []
end
defmodule Main do
steve = %Example.User{name: "Steve"}
IO.inspect %{steve | name: "Sean"}
IO.inspect steve
end
!elixir temp.exs
# 構造体の更新とマッチング
%%writefile temp.exs
defmodule Example.User do
defstruct name: "Sean", roles: []
end
defmodule Main do
steve = %Example.User{name: "Steve"}
sean = %{steve | name: "Sean"}
IO.inspect %{name: "Sean"} = sean
end
!elixir temp.exs
# inspect の出力を変える
%%writefile temp.exs
defmodule Example.User do
# @derive {Inspect, only: [:name]}
@derive {Inspect, except: [:roles]}
defstruct name: "Sean", roles: []
end
defmodule Main do
steve = %Example.User{name: "Steve"}
sean = %{steve | name: "Sean"}
IO.inspect %{name: "Sean"} = sean
end
!elixir temp.exs
# コンポジション(Composition)
# コンポジションを用いてモジュールや構造体に既存の機能を追加する
# alias モジュール名をエイリアスする
%%writefile temp.exs
defmodule Sayings.Greetings do
def basic(name), do: "Hi, #{name}"
end
defmodule Example do
alias Sayings.Greetings
def greeting(name), do: Greetings.basic(name)
end
IO.puts Example.greeting "Bob!!"
# aliasを使わない場合
# defmodule Example do
# def greeting(name), do: Sayings.Greetings.basic(name)
# end
!elixir temp.exs
# 別名で alias したい時は `:as` を使う
%%writefile temp.exs
defmodule Sayings.Greetings do
def basic(name), do: "Hi, #{name}"
end
defmodule Example do
alias Sayings.Greetings, as: Hi
def print_message(name), do: Hi.basic(name)
end
IO.puts Example.print_message "Chris!!"
!elixir temp.exs
# 複数のモジュールを一度に alias する
# defmodule Example do
# alias Sayings.{Greetings, Farewells}
# end
# import
# 関数を取り込みたいという場合には、 import を使う
!elixir -e 'import List; IO.inspect last([1,2,3])'
# フィルタリング
# import のデフォルトでは全ての関数とマクロが取り込まれるが、 :only や :except でフィルタすることができる
# アリティを付ける必要がある
%%writefile temp.exs
import List, only: [last: 1]
IO.inspect last([1,2,3])
# IO.inspect first([1,2,3]) #=> (CompileError) temp.exs:3: undefined function first/1 (there is no such import)
!elixir temp.exs
# import には :functions と :macros という2つの特別なアトムもあり、これらはそれぞれ関数とマクロのみを取り込む
# import List, only: :functions
# import List, only: :macros
# require と import の違いがわからない
# まだロードされていないマクロを呼びだそうとすると、Elixirはエラーを発生させる
# とのこと
# defmodule Example do
# require SuperMacros
#
# SuperMacros.do_stuff
# end
# use
# use マクロを用いることで他のモジュールを利用して現在のモジュールの定義を変更することができる
# コード上で use を呼び出すと、実際には提供されたモジュールに定義されている
# __using__/1 コールバックを呼び出している
%%writefile temp.exs
defmodule Hello do
defmacro __using__ _ do
quote do
def hello(name), do: "Hi, #{name}"
end
end
end
defmodule Example do
use Hello
end
IO.puts Example.hello("Sean")
!elixir temp.exs
# greeting オプションを追加する
%%writefile temp.exs
defmodule Hello do
defmacro __using__(opts) do
greeting = Keyword.get(opts, :greeting, "Hi")
quote do
def hello(name), do: unquote(greeting) <> ", " <> name
end
end
end
defmodule Example do
use Hello, greeting: "Hola"
end
IO.puts Example.hello("Sean")
!elixir temp.exs
###Output
Hola, Sean
###Markdown
Mix
###Code
# mixとは Ruby の Bundler, RubyGems, Rake が組み合わさったようなもの
# colab の環境でやってみる
!mix new example
#=>
# * creating README.md
# * creating .formatter.exs
# * creating .gitignore
# * creating mix.exs
# * creating lib
# * creating lib/example.ex
# * creating test
# * creating test/test_helper.exs
# * creating test/example_test.exs
#
# Your Mix project was created successfully.
# You can use "mix" to compile it, test it, and more:
#
# cd example
# mix test
#
# Run "mix help" for more commands.
# colab 環境ではシステムコマンドを 1 行の中で書かないとディレクトリ内の処理ができない
!cd example; mix test
!cd example; ls -la
!cd example; cat mix.exs
#=> 次のフォーマットのプログラムが出来る
# defmodule Example.MixProject do
# use Mix.Project
# def project do # 名前(app)と依存関係(deps)が書かれている
# def application do
# defp deps do
# end
!cd example; iex -S mix
# iex で対話的に使うことが出来るが colab 環境では出来ない
# cd example
# iex -S mix
# compile
# mix はコードの変更を自動的にコンパイルする
# 明示的にコンパイルすることも出来る
# !cd example; mix compile
# rootディレクトリ以外から実行する場合は、グローバルmix taskのみが実行可能
!cd example; mix compile
!cd example; ls -la
!cd example; ls -laR _build
# 依存関係を管理する
# 新しい依存関係を追加するには、 mix.exs の deps 内に追加する
# パッケージ名のアトムと、バージョンを表す文字列)と1つの任意的な値(オプション)を持つタプル
# 実例として、phoenix_slimのようなプロジェクトの依存関係を見る
# def deps do
# [
# {:phoenix, "~> 1.1 or ~> 1.2"},
# {:phoenix_html, "~> 2.3"},
# {:cowboy, "~> 1.0", only: [:dev, :test]},
# {:slime, "~> 0.14"}
# ]
# end
# cowboy の依存は開発時とテスト時にのみ必要
# 依存しているパッケージの取り込みは bundle install に似たもの
# mix deps.get
!cd example/_build/test/lib/example/ebin; ./example.app #=> Permission denied
# colab 環境ではアプリは起動できないと言う事か
# 環境
# Bundler に似て、様々な環境に対応している
# mixは最初から 3 つの環境で動作するように構成されている
# :dev - 初期状態での環境。
# :test - mix testで用いられる環境。次のレッスンでさらに見ていく
# :prod - アプリケーションを製品に出荷するときに用いられる環境。
# 現在の環境は Mix.env で取得することができる
# この環境は MIX_ENV 環境変数によって変更することが出来る
# MIX_ENV=prod mix compile
###Output
_____no_output_____
###Markdown
シギル sigil
###Code
# シギル sigil とは elixir で文字列リテラルを取り扱うための特別の構文
# チルダ ~ で始まる
# シギルのリスト
# ~C エスケープや埋め込みを含まない文字のリストを生成する
# ~c エスケープや埋め込みを含む文字のリストを生成する
# ~R エスケープや埋め込みを含まない正規表現を生成する
# ~r エスケープや埋め込みを含む正規表現を生成する
# ~S エスケープや埋め込みを含まない文字列を生成する
# ~s エスケープや埋め込みを含む文字列を生成する
# ~W エスケープや埋め込みを含まない単語のリストを生成する
# ~w エスケープや埋め込みを含む単語のリストを生成する
# ~N NaiveDateTime 構造体を生成する
# デリミタのリスト
# <...> カギ括弧のペア angle bracket
# {...} 中括弧のペア brace
# [...] 大括弧のペア bracket
# (...) 小括弧のペア parenthesis
# |...| パイプ記号のペア pipe
# /.../ スラッシュのペア slash
# "..." ダブルクォートのペア double quote
# '...' シングルクォートのペア single quote
# 文字のリスト #=> tutorial と結果が違う!!!!
!elixir -e 'IO.puts ~c/2 + 7 = #{ 2 + 7 }/'
!elixir -e 'IO.puts ~C/2 + 7 = #{ 2 + 7 }/'
# 正規表現
!elixir -e 'IO.puts 3 == 3'
!elixir -e 'IO.puts "Elixir" =~ ~r/elixir/'
!elixir -e 'IO.puts "elixir" =~ ~r/elixir/'
!echo
!elixir -e 'IO.puts "Elixir" =~ ~r/elixir/i'
!elixir -e 'IO.puts "elixir" =~ ~r/elixir/i'
# Erlang の正規表現ライブラリを元に作られた Regex.split/2 を使う
!elixir -e 'string="100_000_000"; IO.inspect Regex.split(~r/_/, string)'
# 文字列
!elixir -e 'IO.puts ~s/welcome to elixir #{String.downcase "SCHOOL"}/'
!elixir -e 'IO.puts ~S/welcome to elixir #{String.downcase "SCHOOL"}/'
# 単語のリスト
!elixir -e 'IO.inspect ~w/i love elixir school/'
!elixir -e 'IO.inspect ~w/i love\telixir school/'
!elixir -e 'IO.inspect ~W/i love\telixir school/'
!elixir -e 'name="Bob"; IO.inspect ~w/i love #{name}lixir school/'
!elixir -e 'name="Bob"; IO.inspect ~W/i love #{name}lixir school/'
# NaiveDateTime
# NaiveDateTime は タイムゾーンがない DateTime を表現する構造体を手早く作るときに有用
# NaiveDateTime 構造体を直接作ることは避けるべき
# パターンマッチングには有用
!elixir -e 'IO.inspect NaiveDateTime.from_iso8601("2015-01-23 23:50:07") == {:ok, ~N[2015-01-23 23:50:07]}'
# シギルを作る
%%writefile temp.exs
defmodule MySigils do
def sigil_u(string, []), do: String.upcase(string)
end
defmodule Main do
import MySigils
IO.puts (~u/elixir school/)
end
!elixir temp.exs
###Output
ELIXIR SCHOOL
###Markdown
**ドキュメント** **インラインドキュメント用の属性*** @moduledoc - モジュールレベルのドキュメント用* @doc - 関数レベルのドキュメント用省略 **テスト**ExUnit省略 内包表記
###Code
# 内包表記 list comprehension
# 内包表記は列挙体 enumerable をループするための糖衣構文である
!elixir -e 'list=[1,2,3,4,5];IO.inspect for x <- list, do: x*x'
# for とジェネレータの使い方に留意する
# ジェネレータとは `x <- list` の部分
# Haskell だと [x * x | x <- list] と書き、数学の集合での表記に近いが Elixir ではこのように書く
# 内包表記はリストに限定されない
# キーワードリスト
!elixir -e 'IO.inspect for {_key, val} <- [one: 1, two: 2, three: 3], do: val'
# マップ
!elixir -e 'IO.inspect for {k, v} <- %{"a" => "A", "b" => "B"}, do: {k, v}'
# バイナリ
!elixir -e 'IO.inspect for <<c <- "hello">>, do: <<c>>'
# ジェネレータは入力値セットと左辺の変数を比較するのにパターンマッチングを利用している
# マッチするものが見つからない場合には、値は無視される
!elixir -e 'IO.inspect for {:ok, val} <- [ok: "Hello", error: "Unknown", ok: "World"], do: val'
# 入れ子
%%writefile temp.exs
list = [1, 2, 3, 4]
IO.inspect (
for n <- list, times <- 1..n do
String.duplicate("*", times)
end
)
!elixir temp.exs
# ループの見える化
!elixir -e 'list = [1, 2, 3, 4]; for n <- list, times <- 1..n, do: IO.puts "#{n} - #{times}"'
# フィルタ
!elixir -e 'import Integer; IO.inspect for x <- 1..10, is_even(x), do: x'
# 偶数かつ 3 で割り切れる値のみをフィルタ
%%writefile temp.exs
import Integer
IO.inspect (
for x <- 1..100,
is_even(x),
rem(x, 3) == 0, do: x)
!elixir temp.exs
# :into の使用
# 他のものを生成したい場合
# :into は Collectable プロトコルを実装している構造体を指定する
# :into を用いて、キーワードリストからマップを作成する
!elixir -e 'IO.inspect for {k, v} <- [one: 1, two: 2, three: 3], into: %{}, do: {k, v}'
!elixir -e 'IO.inspect %{:one => 1, :three => 2, :two => 2}'
!elixir -e 'IO.inspect %{"one" => 1, "three" => 2, "two" => 2}'
# なるほど、と言うかわからなくて当然ですね。多分、Erlang の仕様を引き継いでこのようになっているのだろう
# map では高速なプログラムができなくて、キーワードリストを作って、キーワードリストはリストでありマップなのだろう
# ビット文字列 bitstring は列挙可能 enumerable なので、:into を用いて文字列を作成することが出来る
!elixir -e "IO.inspect for c <- [72, 101, 108, 108, 111], into: \"\", do: <<c>>"
###Output
"Hello"
###Markdown
文字列
###Code
# 文字列 string
# elixir の文字列はバイトのシーケンスである
!elixir -e 'string = <<104,101,108,108,111>>;IO.puts string'
!elixir -e 'string = <<104,101,108,108,111>>;IO.inspect string'
!elixir -e 'IO.inspect <<104,101,108,108,111>>'
!echo
# 文字列に 0 バイトを追加するとバイナリとして表示される
!elixir -e 'IO.inspect <<104,101,108,108,111,0>>'
# 質問 文字列をバイナリ表示するにはどうするか
!elixir -e 'IO.inspect "hello"<> <<0>>'
# 実験 日本語
!elixir -e 'IO.inspect "あ"<> <<0>>' #=> <<227, 129, 130, 0>>
!elixir -e 'IO.inspect <<227, 129, 130>>' #=> "あ"
# 文字リスト
# elixir は文字列と別に文字リストという型を別に持っている
# 文字列はダブルクオートで生成され、文字リストはシングルクオートで生成される
# 文字リストは utf-8 で、文字列はバイナリである
!elixir -e "IO.inspect 'hello'"
!elixir -e "IO.inspect 'hello' ++ [0]"
!elixir -e 'IO.inspect "hello"<> <<0>>'
!echo
!elixir -e "IO.inspect 'hełło' ++ [0]"
!elixir -e 'IO.inspect "hełło"<> <<0>>'
!echo
!elixir -e "IO.inspect 'あ' ++ [0]"
!elixir -e 'IO.inspect "あ"<> <<0>>'
# クエスチョンマークによるコードポイントの取得
# コードポイントは unicode なので 1 バイト以上のバイトである
!elixir -e 'IO.inspect ?Z'
!elixir -e 'IO.inspect ?あ'
!elixir -e 'IO.inspect "áñèane" <> <<0>>'
!elixir -e "IO.inspect 'áñèane' ++ [0]"
!elixir -e "IO.inspect 'あいう' ++ [0]"
# シンボルには ? 表記が使える
# elixir でプログラムする時は通常文字リストは使わず文字列を使う
# 文字リストが必要なのは erlang のため
# String モジュールにコードポイントを取得する関数 graphemes/1 と codepoints/1 がある
!elixir -e 'string = "\u0061\u0301"; IO.puts string' #=> á
!elixir -e 'string = "\u0061\u0301"; IO.inspect String.codepoints string'
!elixir -e 'string = "\u0061\u0301"; IO.inspect String.graphemes string'
# 下記の実験から á と あ は違う
# á は graphemes では 1 文字だが codepoints では 2 文字
# あ はどちらでも 1 文字
!elixir -e 'string = "あいう"; IO.puts string'
!elixir -e 'string = "あいう"; IO.inspect String.codepoints string'
!elixir -e 'string = "あいう"; IO.inspect String.graphemes string'
# 文字列関数
# length/1
!elixir -e 'IO.puts String.length "hello"'
!elixir -e 'IO.puts String.length "あいう"'
# replace/3
!elixir -e 'IO.puts String.replace("Hello", "e", "a")'
# duplicate/2
!elixir -e 'IO.puts String.duplicate("Oh my ", 3)'
# split/2
!elixir -e 'IO.inspect String.split("Oh my ", " ")'
# split/1 # こちらが words 相当か
!elixir -e 'IO.inspect String.split("Oh my ")'
# 問題 アナグラムチェック
# A = super
# B = perus
# 文字列 A を並び替えれば B に出来るので A は B のアナグラム
%%writefile temp.exs
defmodule Anagram do
def anagrams?(a, b) when is_binary(a) and is_binary(b) do
sort_string(a) == sort_string(b)
end
def sort_string(string) do
string
|> String.downcase()
|> String.graphemes()
|> Enum.sort()
end
end
defmodule Main do
IO.puts Anagram.anagrams?("Hello", "ohell")
IO.puts Anagram.anagrams?("María", "íMara")
IO.puts Anagram.anagrams?(3, 5) #=> エラー
end
!elixir temp.exs
###Output
_____no_output_____
###Markdown
日付と時間
###Code
# 日付と時間
# 現在時刻の取得
!elixir -e 'IO.puts Time.utc_now'
# シギルで Time 構造体を作る
!elixir -e 'IO.puts ~T[21:00:27.472988]'
# hour, minute, second
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.hour'
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.minute'
!elixir -e 't = ~T[21:00:27.472988];IO.puts t.second'
# Date
!elixir -e 'IO.puts Date.utc_today'
# シギルで Date 構造体を作る
!elixir -e 'IO.puts ~D[2022-03-22]'
#
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts date'
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts Date.day_of_week date'
!elixir -e '{:ok, date} = Date.new(2020, 12,12); IO.puts Date.leap_year? date'
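# 日付の差は Date.diff/2 で求められる
!elixir -e 'IO.puts Date.diff(~D[2021-06-09], ~D[2021-06-02])'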
!echo
# NaiveDateTime Date と Time の両方を扱えるがタイムゾーンのサポートがない
!elixir -e 'IO.puts NaiveDateTime.utc_now'
!elixir -e 'IO.puts ~N[2022-03-22 21:14:23.371420]'
!elixir -e 'IO.puts NaiveDateTime.add(~N[2022-03-22 21:14:23.371420],30)'
!elixir -e 'IO.puts NaiveDateTime.add(~N[2022-03-22 21:14:23],30)'
# DateTime
# DateTime は Date と Time の両方を扱えタイムゾーンのサポートがある
# しかし!!!! Elixir がデフォルトではタイムゾーンデータベースがない
# デフォルトでは Calendar.get_time_zone_database/0 によって返されるタイムゾーンデータベースを使う
# デフォルトでは Calendar.UTCOnlyTimeZoneDatabase で、Etc/UTC のみを処理し
# 他のタイムゾーンでは {:error, :utc_only_time_zone_database} を返す
# タイムゾーンを提供することにより NaiveDateTime から DateTimeのインスタンスを作ることができる
!elixir -e 'IO.inspect DateTime.from_naive(~N[2016-05-24 13:26:08.003], "Etc/UTC")'
# タイムゾーンの利用
# elixir でタイムゾーンを利用するには tzdata パッケージをインストールし
# Tzdata タイムゾーンデータベースとして使用する
# パリのタイムゾーンで時間を作成してそれをニューヨーク時間に変換してみる
# パリとニューヨークの時差は 6 時間である
# %%writefile temp.exs
# config :elixir, :time_zone_database, Tzdata.TimeZoneDatabase
# paris_datetime = DateTime.from_naive!(~N[2019-01-01 12:00:00], "Europe/Paris")
# {:ok, ny_datetime} = DateTime.shift_zone(paris_datetime, "America/New_York")
# IO.inspect paris_datetime
# IO.inspect ny_datetime
###Output
Overwriting temp.exs
###Markdown
カスタムMixタスク 省略 いまここ IEx Helpers 省略
###Code
###Output
_____no_output_____
###Markdown
メモelixir を齧る。かじる。今のイメージ $\quad$ erlang 上で、erlang は 並行処理のためのシステムで、その erlang 上で理想的な言語を作ろうとしたら、ruby + clojure みたいな言語になった。Dave Thomas と まつもとゆきひろ が勧めているのだからいい言語なのだろう。 * https://elixirschool.com/ja/lessons/basics/control-structures/* https://magazine.rubyist.net/articles/0054/0054-ElixirBook.* https://dev.to/gumi/elixir-01--2585---古本が手に入った。プログラミング Elixir、Dave Thomas 著, 笹田耕一・鳥居雪 訳, Ohmsha (原著: Programming Elixir 1.2, The Pragmatic Programmers) を読む。
###Code
%%capture
!wget https://packages.erlang-solutions.com/erlang-solutions_2.0_all.deb && sudo dpkg -i erlang-solutions_2.0_all.deb
!sudo apt update
!sudo apt install elixir
!elixir -v
!date
###Output
/bin/bash: elixir: command not found
Wed Jun 2 00:43:58 UTC 2021
###Markdown
---メモ`!elixir -h` としたらシェルワンライナー `elixir -e` が使えるらしいことがわかった。`iex` というのがインタラクティブ環境なのだが、colab では使いにくいのでとりあえず使わない。
###Code
!elixir -e 'IO.puts 3 + 3'
!elixir -e 'IO.puts "hello world!"'
%%writefile temp.exs
IO.puts "this is a pen."
!elixir temp.exs
###Output
this is a pen.
###Markdown
---ネットで紹介されていた次のコードセルのコードはどうやって実行するのだろう。 今はわからなくていいと思うがとりあえず転記しておく。説明:このプログラムでは、Parallel というモジュールに pmap という関数を定義しています。 pmap は、与えられたコレクションに対して map(Ruby での Enumerablemap と同じようなものと考えて下さい)を行なうのですが、 各要素の処理を、要素数の分だけプロセスを生成し、各プロセスで並行に実行する、というものです。 ちょっと見ても、よくわからないような気がしますが、大丈夫、本書を読めば、わかるようになります。とのこと。
###Code
%%writefile temp.exs
defmodule Parallel do
def pmap(collection, func) do
collection
|> Enum.map(&(Task.async(fn -> func.(&1) end)))
|> Enum.map(&Task.await/1)
end
end
result = Parallel.pmap 1..1000, &(&1 * &1)
IO.inspect result
!elixir temp.exs
###Output
[1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169, 196, 225, 256, 289, 324,
361, 400, 441, 484, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1024, 1089,
1156, 1225, 1296, 1369, 1444, 1521, 1600, 1681, 1764, 1849, 1936, 2025, 2116,
2209, 2304, 2401, 2500, ...]
###Markdown
上の例で colab 環境で非同期処理が問題なく動く。 ---次のもネットで紹介されていた例で、ハローワールド並行処理版
###Code
%%writefile temp.exs
parent = self()
spawn_link(fn ->
send parent, {:msg, "hello world"}
end)
receive do
{:msg, contents} -> IO.puts contents
end
!elixir temp.exs
###Output
hello world
###Markdown
上の例でやっていることはつぎのような流れである。1. spawn_linkという関数に渡された関数が、関数の内容を実行する。2. 新しく作られたプロセス側では、メインプロセス側(parent)に “hello world” というメッセージを送る。3. メインプロセス側は、どこからかメッセージが来ないかを待ち受けて(receive)、メッセージが来たらそれをコンソールに表示する。
###Code
# 実験 とりあえず理解しないでよい。 colab 環境でどうかだけ調べる。
%%writefile chain.exs
defmodule Chain do
def counter(next_pid) do
receive do
n -> send next_pid, n + 1
end
end
def create_processes(n) do
last = Enum.reduce 1..n, self(),
fn (_, send_to) -> spawn(Chain, :counter, [send_to]) end
send last, 0
receive do
final_answer when is_integer(final_answer) ->
"Result is #{inspect(final_answer)}"
end
end
def run(n) do
IO.puts inspect :timer.tc(Chain, :create_processes, [n])
end
end
!elixir --erl "+P 1000000" -r chain.exs -e "Chain.run(1_000_000)"
###Output
{4896723, "Result is 1000000"}
###Markdown
記事 https://ubiteku.oinker.me/2015/12/22/elixir試飲-2-カルチャーショックに戸惑う-並行指向プ/ のマシン Macbook Pro – 3 GHz Intel Core i7, 16GB RAM では 7 秒のところ、colab では 5 秒で終わってるね!!!! ---コメントは ``
###Code
%%writefile temp.exs
# コメント実験
str = "helloworld!!!!"
IO.puts str
!elixir temp.exs
###Output
helloworld!!!!
###Markdown
---n 進数、整数 integer
###Code
!elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!elixir -e 'IO.puts 1000_000_00_0'
###Output
15
4095
65535
1000000000
###Markdown
整数型に上限下限 fixed limit はない。 factorial(10000) が計算できる。 ---問題10進数を $n$ 進数にベースを変えるのはどうするか。 python では `int()`, `bin()`, `oct()`, `hex()` があった。
###Code
# python
print(0b1111)
print(0o7777)
print(0xffff)
print(int('7777',8))
print(bin(15))
print(oct(4095))
print(hex(65535))
!elixir -e 'IO.puts 0b1111'
!elixir -e 'IO.puts 0o7777'
!elixir -e 'IO.puts 0xffff'
!echo
!elixir -e 'IO.puts "0b" <> Integer.to_string(15,2)'
!elixir -e 'IO.puts "0o" <> Integer.to_string(4095,8)'
!elixir -e 'IO.puts "0x" <> Integer.to_string(65535,16)'
###Output
15
4095
65535
0b1111
0o7777
0xFFFF
###Markdown
浮動小数点数 floating-point number
###Code
!elixir -e 'IO.puts 1.532e-4'
# .0 とか 1. とかはエラーになる
!elixir -e 'IO.puts 98099098.0809898888'
###Output
1.532e-4
98099098.08098988
###Markdown
文字列 stringstring という型はない、みたい。
###Code
!elixir -e 'IO.puts "日本語が書けますか"'
!elixir -e 'IO.puts "日本語が書けます"'
# 関数に括弧をつけることができる
!elixir -e 'IO.puts (0b1111)'
!elixir -e 'IO.puts ("にほんご\n日本語")'
!elixir -e "IO.puts ('にほんご\n\"日本語\"')"
# 文字連結 `+` ではない!!!!
!elixir -e 'IO.puts("ABCD"<>"EFGH")'
###Output
ABCDEFGH
###Markdown
---値の埋め込み`#{変数名}` を記述することで、変数の値を埋め込むことができる。
###Code
!elixir -e 'val = 1000; IO.puts "val = #{val}"'
###Output
val = 1000
###Markdown
---真偽値elixir の 真偽値は true と false (小文字) で false と nil が false でそれ以外は true
###Code
!elixir -e 'if true do IO.puts "true" end'
!elixir -e 'if True do IO.puts "true" end'
!elixir -e 'if False do IO.puts "true" end'
!elixir -e 'if false do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if nil do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if 0 do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if (-1) do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if [] do IO.puts "true" else IO.puts "false" end'
!elixir -e 'if "" do IO.puts "true" else IO.puts "false" end'
###Output
true
true
true
false
false
true
true
true
true
###Markdown
`null` はない。 マッチ演算子 `=`elixir の `=` は単なる代入演算子ではなくマッチ演算子である。 マッチ演算子を通して値を代入し、その後、マッチさせることができる。マッチすると、方程式の結果が返され、失敗すると、エラーになる。
###Code
!elixir -e 'IO.puts a = 1'
!elixir -e 'a =1; IO.puts 1 = a'
!elixir -e 'a =1; IO.puts 2 = a'
!elixir -e 'IO.inspect a = [1,2,3]'
!elixir -e '[a,b,c] = [1,2,3]; IO.puts c; IO.puts b'
###Output
[1, 2, 3]
3
2
###Markdown
上の例のように、elixir は マッチ演算子 `=` があると左右がマッチするように最善を尽くす。 そのため、`[a,b,c] = [1,2,3]` で a,b,c に値が代入される。
###Code
!elixir -e 'IO.inspect [1,2,[3,4,5]]'
!elixir -e '[a,b,c] = [1,2,[3,4,5]]; IO.inspect c; IO.inspect b'
# 実験 => エラー
!elixir -e 'IO.insepct [a,b] = [1,2,3]'
# 実験
!elixir -e 'IO.inspect a = [[1,2,3]]'
!elixir -e 'IO.inspect [a] = [[1,2,3]]'
!elixir -e '[a] = [[1,2,3]]; IO.inspect a'
# 実験 => エラー
!elixir -e 'IO.insepct [a,b] = [a,b]'
# 実験
!elixir -e 'IO.puts a = :a'
!elixir -e 'a = :a; IO.inspect a = a'
!elixir -e 'a = :a; IO.puts a = a'
!elixir -e 'IO.puts :b'
###Output
a
:a
a
b
###Markdown
アンダースコア `_` で値を無視する。 ワイルドカード。なんでも受け付ける。
###Code
!elixir -e 'IO.inspect [1,_,_]=[1,2,3]'
!elixir -e 'IO.inspect [1,_,_]=[1,"cat","dog"]'
###Output
[1, "cat", "dog"]
###Markdown
変数は、バインド (束縛、紐付け) されると変更できない。かと思ったらできてしまう。
###Code
!elixir -e 'a = 1; IO.puts a = 2'
###Output
2
###Markdown
元の変数を指し示すピン演算子 (`^` カレット) がある。
###Code
!elixir -e 'a = 1; IO.puts ^a = 2'
###Output
** (MatchError) no match of right hand side value: 2
(stdlib 3.15) erl_eval.erl:450: :erl_eval.expr/5
(stdlib 3.15) erl_eval.erl:893: :erl_eval.expr_list/6
(stdlib 3.15) erl_eval.erl:408: :erl_eval.expr/5
(elixir 1.12.0) lib/code.ex:656: Code.eval_string_with_error_handling/3
###Markdown
メモ $\quad$ 普通の関数型言語のように変数は変更できないルールにしてしまった方が簡単ではなかったか、と思わないでもない。 変数を不変にする、const 宣言みたいなのはないのか。リストは不変 immutable なので安心。
###Code
# 大文字にする capitalize
!elixir -e 'IO.puts name = String.capitalize "elixir"'
# 大文字にする upcase
!elixir -e 'IO.puts String.upcase "elixir"'
###Output
ELIXIR
###Markdown
アトムアトムは名前がそのまま値となる定数である。**名前の前にコロン `:` をつけることでアトムになる。**アトムの名前は utf-8 文字列 (記号を含む)、数字、アンダースコア `_` 、`@` で、終端文字としてのみ「!」や「?」が使える。:fred $\quad$ :is_binary? $\quad$ :var@2 $\quad$ :<> $\quad$ :=== $\quad$ :"func/3" $\quad$ :"long john silver" $\quad$ :эликсир $\quad$ :mötley_crüe メモ
###Code
# 実験
!elixir -e 'IO.puts true === :true'
!elixir -e 'IO.puts :true'
!elixir -e 'IO.puts false === :false'
# 実験 多分 colab の環境のせいで引用符が処理できない。2 バイト文字はオッケー。
!elixir -e 'IO.puts :fred'
!elixir -e 'IO.puts :is_binary?'
!elixir -e 'IO.puts :var@2'
!elixir -e 'IO.puts :<>'
!elixir -e 'IO.puts :==='
# !elixir -e 'IO.puts :"func/3"'
# !elixir -e 'IO.puts :"long john silver"'
!elixir -e 'IO.puts :эликсир'
!elixir -e 'IO.puts :mötley_crüe'
!elixir -e 'IO.puts :日本語はどうか'
###Output
fred
is_binary?
var@2
<>
===
эликсир
mötley_crüe
日本語はどうか
###Markdown
演算子
###Code
!elixir -e 'IO.puts 1 + 2'
!elixir -e 'x = 10; IO.puts x + 1'
!elixir -e 'IO.puts 1 - 2'
!elixir -e 'x = 10; IO.puts x - 1'
!elixir -e 'IO.puts 5 * 2'
!elixir -e 'x = 10; IO.puts x * 4'
!echo
!elixir -e 'IO.puts 5 / 2'
!elixir -e 'x = 10; IO.puts x / 3'
# 浮動小数点数ではなく整数としての結果がほしい場合は div 関数を使用
!elixir -e 'IO.puts div(10,5)'
!elixir -e 'IO.puts div(10,4)'
# 割り算の余り、剰余を求める場合は rem関数を使用
!elixir -e 'IO.puts rem(10,4)'
!elixir -e 'IO.puts rem(10,3)'
!elixir -e 'IO.puts rem(10,2)'
# 比較演算子
!elixir -e 'IO.puts 1 == 1'
!elixir -e 'IO.puts 1 != 1'
!elixir -e 'IO.puts ! (1 != 1)'
!elixir -e 'IO.puts 20.0 == 20'
!elixir -e 'IO.puts 20.0 === 20'
!elixir -e 'IO.puts 20.0 !== 20'
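# 実験 異なる型同士も比較できる(Erlang の Term 優先順位: 数 < アトム < 参照 < 関数 < ポート < pid < タプル < マップ < リスト < ビット列)
!elixir -e 'IO.puts 1 < :an_atom'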
# 論理演算子
# 論理和
!elixir -e 'IO.puts "ABC" == "ABC" || 20 == 30'
!elixir -e 'IO.puts "ABC" == "abc" || 20 == 30'
!echo
# 論理積
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 20'
!elixir -e 'IO.puts "ABC" == "ABC" && 20 == 30'
!elixir -e 'IO.puts "ABC" == "def" && 10 > 100'
!echo
# 否定
!elixir -e 'IO.puts !("ABC" == "ABC")'
!elixir -e 'IO.puts !("ABC" == "DEF")'
###Output
true
false
true
false
false
false
true
###Markdown
rangeメモ $\quad$ range は型ではなく、struct である。 構造体?`start..end` で表現される、とあるが、1..10 と書けばそれで range なのか?
###Code
!elixir -e 'IO.inspect Enum.to_list(1..3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//3)'
!elixir -e 'IO.inspect Enum.to_list(0..10//-3)'
!elixir -e 'IO.inspect Enum.to_list(10..0//-3)'
!elixir -e 'IO.inspect Enum.to_list(1..1)'
!elixir -e 'IO.inspect Enum.to_list(1..1//2)'
!elixir -e 'IO.inspect Enum.to_list(1..-1//2)'
!elixir -e 'IO.inspect 1..9//2'
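# 実験 range は in 演算子や Enum の関数でそのまま使える
!elixir -e 'IO.puts 5 in 1..10'
!elixir -e 'IO.puts Enum.sum(1..10)'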
###Output
1..9//2
###Markdown
正規表現 regular expression正規表現も型ではなく、struct である。
###Code
!elixir -e 'IO.inspect Regex.run ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.scan ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.split ~r{[aiueo]},"catapillar"'
!elixir -e 'IO.inspect Regex.replace ~r{[aiueo]},"catapillar", "*"'
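# マッチするかどうかだけを調べるなら Regex.match?/2
!elixir -e 'IO.puts Regex.match?(~r{[aiueo]}, "catapillar")'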
###Output
"c*t*p*ll*r"
###Markdown
コレクション型 タプルタプルは波括弧 brace を用いて定義する。タプルに限らず elixir のコレクションはすべて要素のタイプを限定しない。通常 2 から 4 の要素であり、それ以上の要素数の場合、map や struct の利用を考える。タプルは関数の返り値に便利に利用される。パターンマッチングと組み合わせて使われる。
###Code
!elixir -e 'IO.inspect {3.14, :pie, "Apple"}'
!elixir -e '{status, count, action} = {3.14, :pie, "next"}; IO.puts action'
!echo hello > temp.txt
!elixir -e '{status, file} = File.open("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp.txt"); IO.inspect {status, file}'
!elixir -e '{status, file} = File.read("temp02.txt"); IO.inspect {status, file}'
# 実験 タプルに ++ は使えるか。 => 使えない
# !elixir -e 'IO.inspect {3.14, :pie, "Apple"} ++ {3}'
# 実験 タプルに head は使えるか。 => 使えない
# !elixir -e 'IO.inspect hd {3.14, :pie, "Apple"}'
# 実験 タプルにパターンマッチングは使えるか。 => 使える
!elixir -e '{a,b,c} = {3.14, :pie, "Apple"}; IO.inspect [c,a,b]'
# 実験
!elixir -e 'a=1; b=3; {b,a}={a,b}; IO.inspect {a,b}'
###Output
{3, 1}
###Markdown
リスト他の言語の配列 array と elixir のリストは違うので注意。 lisp のリストと似たような概念である。カラのリストでなければ、head と tail がある。
###Code
# リスト
!elixir -e 'IO.inspect [3.14, :pie, "Apple"]'
# リスト先頭への追加(高速)
!elixir -e 'IO.inspect ["π" | [3.14, :pie, "Apple"]]'
# リスト末尾への追加(低速)
!elixir -e 'IO.inspect [3.14, :pie, "Apple"] ++ ["Cherry"]'
###Output
["π", 3.14, :pie, "Apple"]
[3.14, :pie, "Apple", "Cherry"]
###Markdown
上と下のコードセルでリストの連結を行っているが、++/2 演算子を用いている。 この `++/2` という表記は `++` が演算子自体で `/2` がアリティ (引数の数) を表す。 ---質問 $\quad$ アリティとはなにか。質問 $\quad$ リストの連結が `++` で文字列の連結が `<>` なのはなぜか。 オーバーライディングはあるのか。 文字列 string はリストではないのか。 長さを測る関数も別々なのか。
###Code
# リストの連結
!elixir -e 'IO.inspect [1, 2] ++ [3, 4, 1]'
# リストの減算
# --/2 演算子は存在しない値を引いてしまってもオッケー
!elixir -e 'IO.inspect ["foo", :bar, 42] -- [42, "bar"]'
# 重複した値の場合、右辺の要素のそれぞれに対し、左辺の要素のうち初めて登場した同じ値が順次削除される
!elixir -e 'IO.inspect [1,2,2,3,2,3] -- [1,2,3,2]'
# リストの減算の値のマッチには strict comparison が使われている
!elixir -e 'IO.inspect [2] -- [2.0]'
!elixir -e 'IO.inspect [2.0] -- [2.0]'
# head /tail
!elixir -e 'IO.inspect hd [3.14, :pie, "Apple"]'
!elixir -e 'IO.inspect tl [3.14, :pie, "Apple"]'
###Output
3.14
[:pie, "Apple"]
###Markdown
---リストを頭部と尾部に分けるのに* パターンマッチング* cons 演算子( `|` )を使うこともできる。
###Code
!elixir -e '[head | tail] = [3.14, :pie, "Apple"]; IO.inspect head; IO.inspect tail'
###Output
3.14
[:pie, "Apple"]
###Markdown
キーワードリストキーワードリストとマップは elixir の連想配列である。キーワードリストは最初の要素がアトムのタプルからなる特別なリストで、リストと同様の性能になる。
###Code
# キーワードリスト
!elixir -e 'IO.inspect [foo: "bar", hello: "world"]'
# タプルのリストとしても同じ
!elixir -e 'IO.inspect [{:foo, "bar"}, {:hello, "world"}]'
!elixir -e 'IO.inspect [foo: "bar", hello: "world"] == [{:foo, "bar"}, {:hello, "world"}]'
###Output
[foo: "bar", hello: "world"]
[foo: "bar", hello: "world"]
true
###Markdown
キーワードリストの 3 つの特徴* キーはアトムである。* キーは順序付けされている。* キーの一意性は保証されない。こうした理由から、キーワードリストは関数にオプションを渡すためによく用いられる。
###Code
# 実験 リストの角括弧は省略できる
!elixir -e 'IO.inspect foo: "bar", hello: "world"'
# 実験
!elixir -e 'IO.inspect [1, fred: 1, dave: 2]'
!elixir -e 'IO.inspect {1, fred: 1, dave: 2}'
!elixir -e 'IO.inspect {1, [{:fred,1},{:dave, 2}]}'
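# 実験 値の取得は Keyword.get/2 か opts[:key] の形
!elixir -e 'opts = [foo: "bar", hello: "world"]; IO.puts Keyword.get(opts, :hello)'
!elixir -e 'opts = [foo: "bar", hello: "world"]; IO.puts opts[:foo]'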
###Output
[1, {:fred, 1}, {:dave, 2}]
{1, [fred: 1, dave: 2]}
{1, [fred: 1, dave: 2]}
###Markdown
マップ* キーワードリストとは違ってどんな型のキーも使える。* 順序付けされない。* キーの一意性が保証されている。重複したキーが追加された場合は、前の値が置き換えられる。* 変数をマップのキーにできる。* `%{}` 構文で定義する。
###Code
!elixir -e 'IO.inspect %{:foo => "bar", "hello" => :world}'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map[:foo]'
!elixir -e 'map = %{:foo => "bar", "hello" => :world}; IO.inspect map["hello"]'
!echo
!elixir -e 'key = "hello"; IO.inspect %{key => "world"}'
!elixir -e 'IO.inspect %{:foo => "bar", :foo => "hello world"}'
###Output
%{:foo => "bar", "hello" => :world}
"bar"
:world
%{"hello" => "world"}
%{foo: "hello world"}
###Markdown
上下の例にあるように、アトムのキーだけを含んだマップには特別な構文がある。
###Code
!elixir -e 'IO.inspect %{foo: "bar", hello: "world"} == %{:foo => "bar", :hello => "world"}'
# 加えて、アトムのキーにアクセスするための特別な構文がある。
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect map.hello'
# マップの更新のための構文がある (新しい map が作成される)
# この構文は、マップに既に存在するキーを更新する場合にのみ機能する
!elixir -e 'map = %{foo: "bar", hello: "world"}; IO.inspect %{map | foo: "baz"}'
# 新しいキーを作成するには、`Map.put/3` を使用
!elixir -e 'map = %{hello: "world"}; IO.inspect Map.put(map, :foo, "baz")'
###Output
%{foo: "baz", hello: "world"}
###Markdown
いまここ バイナリ binary
###Code
# binaries
!elixir -e 'IO.inspect <<1,2>>'
!elixir -e 'IO.inspect <<1,10>>'
!elixir -e 'bin = <<1,10>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect bin'
!elixir -e 'IO.puts Integer.to_string(213,2)'
!elixir -e 'IO.puts 0b11'
!elixir -e 'IO.puts 0b0101'
!echo
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect byte_size bin'
!elixir -e 'bin = <<3::size(2),5::size(4),1::size(2)>>; IO.inspect :io.format("~-8.2b~n",:binary.bin_to_list(bin))'
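# 実験 バイナリ(文字列)にもパターンマッチングが使える
!elixir -e '<<first, rest::binary>> = "hello"; IO.inspect {first, rest}'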
###Output
<<1, 2>>
<<1, 10>>
2
<<213>>
11010101
3
5
1
11010101
:ok
###Markdown
Date and Time 日付
###Code
# Date and Time
!elixir -e 'IO.inspect Date.new(2021,6,2)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.day_of_week(d1)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect Date.add(d1,7)'
!elixir -e '{:ok, d1}=Date.new(2021,6,2); IO.inspect d1, structs: false'
###Output
~D[2021-06-02]
3
~D[2021-06-09]
%{__struct__: Date, calendar: Calendar.ISO, day: 2, month: 6, year: 2021}
###Markdown
`~D[...]` や `~T[...]` は elixir の シギル sigil である。 文字列とバイナリーのところで説明する。
###Code
# date は range に使える
!elixir -e 'IO.inspect Enum.to_list(Date.range(~D[2021-06-02], ~D[2021-06-05]))'
###Output
_____no_output_____
###Markdown
help についてメモ $\quad$ 関数の調べ方Helper の使い方。 help, type, info, information とか。下のコードセルにあるように、対象のモジュールの関数名を調べ、そのヘルプを見ればけっこうくわしくわかる。コメントアウトしてあるのは出力が大きいので、とりあえずコメントアウトして出力を抑制してある。具体的には、Enum にあたるところにモジュール名を入れて関数のリストを出す。 Ctrl+A Ctrl+C でコピーして vscode などでペーストして読む。 調べたい関数名をヘルプの、Enum.all?/1 のところに入れて出力をコピーして、vscode などでペーストして読む
###Code
# !elixir -e 'Enum.__info__(:functions) |> Enum.each(fn({function, arity}) -> IO.puts "#{function}/#{arity}" end)'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h Enum.all?/1'
# h 単独のドキュメントを見たい
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h'
# i というのもある
# !elixir -e 'x = [3,2]; require IEx.Helpers;IEx.Helpers.i x'
# !elixir -e 'require IEx.Helpers;IEx.Helpers.h IO'
###Output
_____no_output_____
###Markdown
enum モジュールenum はリストなどコレクションを列挙するための一連のアルゴリズム。* all?、any?* chunk_every、chunk_by、map_every* each* map、filter、reduce* min、max* sort、uniq、uniq_by* キャプチャ演算子 `(&)`
###Code
# all? 関数を引数で受け取り、リストの全体が true の時、true を返す
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 3 end)'
!elixir -e 'IO.puts Enum.all?(["foo", "bar", "hello"], fn(s) -> String.length(s) >1 end)'
# any? 少なくとも1つの要素が true と評価された場合に true を返す
!elixir -e 'IO.puts Enum.any?(["foo", "bar", "hello"], fn(s) -> String.length(s) == 5 end)'
# chunk_every リストを小さなグループに分割する
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 2)'
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 3)'
!elixir -e 'IO.inspect Enum.chunk_every([1, 2, 3, 4, 5, 6], 4)'
# chunk_by 関数の戻り値が変化することによって分割する
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five"], fn(x) -> String.length(x) end)'
!elixir -e 'IO.inspect Enum.chunk_by(["one", "two", "three", "four", "five", "six"], fn(x) -> String.length(x) end)'
# map_every nth ごとに map 処理する
!elixir -e 'IO.inspect Enum.map_every(1..10, 3, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 1, fn x -> x + 1000 end)'
!elixir -e 'IO.inspect Enum.map_every(1..10, 0, fn x -> x + 1000 end)'
# each 新しい値を生成することなく反復する。返り値は:ok というアトム。
!elixir -e 'IO.inspect Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'
!elixir -e 'IO.puts Enum.each(["one", "two", "three"], fn(s) -> IO.puts(s) end)'
# map 関数を各要素に適用して新しいリストを生み出す
!elixir -e 'IO.inspect Enum.map([0, 1, 2, 3], fn(x) -> x - 1 end)'
# min 最小の値を探す。 リストが空の場合エラーになる
# リストが空だったときのために予め最小値を生成する関数を渡すことができる
!elixir -e 'IO.inspect Enum.min([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.min([], fn -> :foo end)'
# max 最大の(max/1)値を返す
!elixir -e 'IO.inspect Enum.max([5, 3, 0, -1])'
!elixir -e 'IO.inspect Enum.max([], fn -> :bar end)'
# filter 与えられた関数によって true と評価された要素だけを得る
!elixir -e 'IO.inspect Enum.filter([1, 2, 3, 4], fn(x) -> rem(x, 2) == 0 end)'
!elixir -e 'IO.inspect Enum.filter([], fn(x) -> rem(x, 2) == 0 end)'
# reduce リストを関数に従って単一の値へ抽出する。 accumulator を指定できる。
# accumulator が与えられない場合は最初の要素が用いられる。
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], 10, fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce([1, 2, 3], fn(x, acc) -> x + acc end)'
!elixir -e 'IO.inspect Enum.reduce(["a","b","c"], "1", fn(x,acc)-> x <> acc end)'
# sort `sort/1` はソートの順序に Erlangの Term 優先順位 を使う
!elixir -e 'IO.inspect Enum.sort([5, 6, 1, 3, -1, 4])'
!elixir -e 'IO.inspect Enum.sort([:foo, "bar", Enum, -1, 4])'
# `sort/2` は、順序を決める為の関数を渡すことができる
!elixir -e 'IO.inspect Enum.sort([%{:val => 4}, %{:val => 1}], fn(x, y) -> x[:val] > y[:val] end)'
# 関数を渡さない場合
!elixir -e 'IO.inspect Enum.sort([%{:count => 4}, %{:count => 1}])'
# sort/2 に :asc または :desc をソート関数として渡すことができる
!elixir -e 'IO.inspect Enum.sort([2, 3, 1], :desc)'
# uniq 重複した要素を取り除く
!elixir -e 'IO.inspect Enum.uniq([1, 2, 3, 2, 1, 1, 1, 1, 1])'
# uniq_by 重複した要素を削除するが、ユニークかどうか比較を行う関数を渡せる
!elixir -e 'IO.inspect Enum.uniq_by([%{x: 1, y: 1}, %{x: 2, y: 1}, %{x: 3, y: 3}], fn coord -> coord.y end)'
###Output
[1, 2, 3]
[%{x: 1, y: 1}, %{x: 3, y: 3}]
###Markdown
キャプチャ演算子 `&` を使用した enum と無名関数 elixir の enum モジュール内の多くの関数は、引数として無名関数を取る。これらの無名関数は、多くの場合、キャプチャ演算子 `&` を使用して省略形で記述される。
###Code
# 無名関数でのキャプチャ演算子の使用
!elixir -e 'IO.inspect Enum.map([1,2,3], fn number -> number + 3 end)'
!elixir -e 'IO.inspect Enum.map([1,2,3], &(&1 + 3))'
!elixir -e 'plus_three = &(&1 + 3);IO.inspect Enum.map([1,2,3], plus_three)'
###Output
[4, 5, 6]
[4, 5, 6]
[4, 5, 6]
###Markdown
--- パターンマッチングパターンマッチングでは、値、データ構造、関数をマッチすることができる。* マッチ演算子* ピン演算子
###Code
# マッチ演算子 `=` はマッチ演算子である。 マッチ演算子を通して値を代入し、
# その後、マッチさせることができる。マッチすると、方程式の結果が返され、
# 失敗すると、エラーになる
!elixir -e 'IO.puts x = 1'
!elixir -e 'x = 1;IO.puts 1 = x'
# !elixir -e 'x = 1;IO.puts 2 = x'
#=> (MatchError) no match of right hand side value: 1
# リストでのマッチ演算子
!elixir -e 'IO.inspect list = [1, 2, 3]'
!elixir -e 'list = [1, 2, 3]; IO.inspect [1, 2, 3] = list'
# !elixir -e 'list = [1, 2, 3]; IO.inspect [] = list'
#=> (MatchError) no match of right hand side value: [1, 2, 3]
!elixir -e 'list = [1, 2, 3]; IO.inspect [1 | tail] = list'
!elixir -e 'list = [1, 2, 3]; [1 | tail] = list; IO.inspect tail'
# タプルとマッチ演算子
!elixir -e 'IO.inspect {:ok, value} = {:ok, "Successful!"}'
!elixir -e '{:ok, value} = {:ok, "Successful!"}; IO.inspect value'
###Output
{:ok, "Successful!"}
"Successful!"
:ok
###Markdown
ピン演算子マッチ演算子は左辺に変数が含まれている時に代入操作を行う。 この変数を再び束縛するという挙動は望ましくない場合がある。 そうした状況のために、ピン演算子 `^` がある。ピン演算子で変数を固定すると、新しく再束縛するのではなく既存の値とマッチする。
###Code
# ピン演算子
!elixir -e 'IO.inspect x = 1'
# !elixir -e 'x = 1; IO.inspect ^x = 2'
#=> ** (MatchError) no match of right hand side value: 2
!elixir -e 'x = 1; IO.inspect {x, ^x} = {2, 1}'
!elixir -e 'x = 1;{x, ^x} = {2, 1}; IO.inspect x'
!echo
!elixir -e 'IO.inspect key = "hello"'
!elixir -e 'key = "hello"; IO.inspect %{^key => value} = %{"hello" => "world"}'
!elixir -e 'key = "hello"; IO.inspect %{^key => value} = %{"hello" => "world"}'
!elixir -e 'key = "hello"; %{^key => value} = %{"hello" => "world"}; IO.inspect value'
# 関数の clause でのピン演算子
!elixir -e 'IO.inspect greeting = "Hello"'
!elixir -e 'greeting = "Hello"; IO.inspect greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "${greeting},${name}" end'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Hello","Sean")'
!elixir -e 'greeting = "Hello"; greet = fn (^greeting, name) -> "Hi #{name}"; (greeting, name) -> "#{greeting},#{name}" end; IO.inspect greet.("Mornin","Sean")'
###Output
"Hello"
#Function<12.99386804/2 in :erl_eval.expr/5>
"Hi Sean"
"Mornin,Sean"
###Markdown
いまここ 制御構造 control structure* if と unless* case* cond* with if と unless elixir の if と unless は ruby と同じ。elixir は if と unless はマクロとして定義されている。この実装は kernel module で知ることができる。elixir では偽とみなされる値は nil と真理値の false だけだということに留意。
###Code
iex> if String.valid?("Hello") do
...> "Valid string!"
...> else
...> "Invalid string."
...> end
"Valid string!"
iex> if "a string value" do
...> "Truthy"
...> end
"Truthy"
unless/2 は if/2 のように使いますが、条件が否定される時だけ作用します:
iex> unless is_integer("hello") do
...> "Not an Int"
...> end
"Not an Int"
case
複数のパターンに対してマッチする必要があるなら、 case/2 を使うことができます:
iex> case {:ok, "Hello World"} do
...> {:ok, result} -> result
...> {:error} -> "Uh oh!"
...> _ -> "Catch all"
...> end
"Hello World"
_ 変数は case/2 命令文の中に含まれる重要な要素です。これが無いと、マッチするものが見あたらない場合にエラーが発生します:
iex> case :even do
...> :odd -> "Odd"
...> end
** (CaseClauseError) no case clause matching: :even
iex> case :even do
...> :odd -> "Odd"
...> _ -> "Not Odd"
...> end
"Not Odd"
_ を”他の全て”にマッチする else と考えましょう。
case/2 はパターンマッチングに依存しているため、パターンマッチングと同じルールや制限が全て適用されます。既存の変数に対してマッチさせようという場合にはピン ^ 演算子を使わなくてはいけません:
iex> pie = 3.14
3.14
iex> case "cherry pie" do
...> ^pie -> "Not so tasty"
...> pie -> "I bet #{pie} is tasty"
...> end
"I bet cherry pie is tasty"
case/2 のもう1つの素晴らしい特徴として、ガード節に対応していることがあげられます:
この例は公式のElixirのGetting Startedガイドから直接持ってきています。
iex> case {1, 2, 3} do
...> {1, x, 3} when x > 0 ->
...> "Will match"
...> _ ->
...> "Won't match"
...> end
"Will match"
公式ドキュメントからExpressions allowed in guard clausesを読んでみてください。
cond
値ではなく、条件をマッチさせる必要がある時には、 cond/1 を使うことができます。これは他の言語でいうところの else if や elsif のようなものです:
この例は公式のElixirのGetting Startedガイドから直接持ってきています。
iex> cond do
...> 2 + 2 == 5 ->
...> "This will not be true"
...> 2 * 2 == 3 ->
...> "Nor this"
...> 1 + 1 == 2 ->
...> "But this will"
...> end
"But this will"
case のように、 cond はマッチしない場合にエラーを発生させます。これに対処するには、 true になる条件を定義すればよいです:
iex> cond do
...> 7 + 1 == 0 -> "Incorrect"
...> true -> "Catch all"
...> end
"Catch all"
with
特殊形式の with/1 はネストされた case/2 文を使うような時やきれいにパイプできない状況に便利です。 with/1 式はキーワード, ジェネレータ, そして式から成り立っています。
ジェネレータについてはリスト内包表記のレッスンでより詳しく述べますが、今は <- の右側と左側を比べるのにパターンマッチングが使われることを知っておくだけでよいです。
with/1 の簡単な例から始め、その後さらなる例を見てみましょう:
iex> user = %{first: "Sean", last: "Callan"}
%{first: "Sean", last: "Callan"}
iex> with {:ok, first} <- Map.fetch(user, :first),
...> {:ok, last} <- Map.fetch(user, :last),
...> do: last <> ", " <> first
"Callan, Sean"
式がマッチに失敗した場合はマッチしない値が返されます:
iex> user = %{first: "doomspork"}
%{first: "doomspork"}
iex> with {:ok, first} <- Map.fetch(user, :first),
...> {:ok, last} <- Map.fetch(user, :last),
...> do: last <> ", " <> first
:error
それでは、 with/1 を使わない長めの例と、それをどのようにリファクタリングできるかを見てみましょう:
case Repo.insert(changeset) do
{:ok, user} ->
case Guardian.encode_and_sign(user, :token, claims) do
{:ok, jwt, full_claims} ->
important_stuff(jwt, full_claims)
error ->
error
end
error ->
error
end
with/1 を導入するとコードが短く、わかりやすくなります:
with {:ok, user} <- Repo.insert(changeset),
{:ok, jwt, full_claims} <- Guardian.encode_and_sign(user, :token, claims),
do: important_stuff(jwt, full_claims)
Elixir 1.3からは with/1 で else を使えます:
import Integer
m = %{a: 1, c: 3}
a =
with {:ok, number} <- Map.fetch(m, :a),
true <- is_even(number) do
IO.puts "#{number} divided by 2 is #{div(number, 2)}"
:even
else
:error ->
IO.puts("We don't have this item in map")
:error
_ ->
IO.puts("It is odd")
:odd
end
これは case のようなパターンマッチングを提供することで、エラーを扱いやすくします。渡されるのはマッチングに失敗した最初の表現式の値です。
###Output
_____no_output_____ |
notebook/debugging/data_similarity.ipynb | ###Markdown
Data SimilarityPrevious experiments have had some strange results, with models occasionally performing abnormally well (or badly) on the out of sample set. To make sure that there are no duplicate samples or abnormally similar studies, I made this notebook
###Code
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import yaml
from plotnine import *
from sklearn.metrics.pairwise import euclidean_distances
from saged import utils, datasets, models
###Output
_____no_output_____
###Markdown
Load the data
###Code
dataset_config_file = '../../dataset_configs/refinebio_labeled_dataset.yml'
dataset_config_str = """name: "RefineBioMixedDataset"
compendium_path: "../../data/subset_compendium.pkl"
metadata_path: "../../data/aggregated_metadata.json"
label_path: "../../data/sample_classifications.pkl"
"""
dataset_config = yaml.safe_load(dataset_config_str)
dataset_name = dataset_config.pop('name')
MixedDatasetClass = datasets.RefineBioMixedDataset
all_data = MixedDatasetClass.from_config(**dataset_config)
###Output
_____no_output_____
###Markdown
Look for samples that are very similar to each other despite having different IDs
###Code
sample_names = all_data.get_samples()
assert len(sample_names) == len(set(sample_names))
sample_names[:5]
expression = all_data.get_all_data()
print(len(sample_names))
print(expression.shape)
sample_distance_matrix = euclidean_distances(expression, expression)
# This is unrelated to debugging the data, I'm just curious
gene_distance_matrix = euclidean_distances(expression.T, expression.T)
sample_distance_matrix.shape
sample_distance_matrix
# See if there are any zero distances outside the diagonal
num_zeros = 10234 * 10234 - np.count_nonzero(sample_distance_matrix)
num_zeros
###Output
_____no_output_____
###Markdown
Since there are as many zeros as elements in the diagonal, there are no duplicate samples with different IDs (unless noise was added somewhere) Get all distancesBecause we know there aren't any zeros outside of the diagonal, we can zero out the lower diagonal and use the non-zero entries of the upper diagonal to visualize the distance distribution
###Code
triangle = np.triu(sample_distance_matrix, k=0)
triangle
distances = triangle.flatten()
nonzero_distances = distances[distances != 0]
nonzero_distances.shape
plt.hist(nonzero_distances, bins=20)
###Output
_____no_output_____
###Markdown
Distribution looks bimodal, probably due to different platforms having different distances from each other?
###Code
plt.hist(nonzero_distances[nonzero_distances < 200])
plt.hist(nonzero_distances[nonzero_distances < 100])
###Output
_____no_output_____
###Markdown
Looks like there may be some samples that are abnormally close to each other. I wonder whether they're in the same study Correspondence between distance and study
###Code
# There is almost certainly a vectorized way of doing this but oh well
distances = []
first_samples = []
second_samples = []
for row_index in range(sample_distance_matrix.shape[0]):
for col_index in range(sample_distance_matrix.shape[0]):
distance = sample_distance_matrix[row_index, col_index]
if distance == 0:
continue
distances.append(distance)
first_samples.append(sample_names[row_index])
second_samples.append(sample_names[col_index])
distance_df = pd.DataFrame({'distance': distances, 'sample_1': first_samples,
'sample_2': second_samples})
# Free up memory to prevent swapping (probably hopeless if the user has < 32GB)
del(triangle)
del(sample_distance_matrix)
del(distances)
del(first_samples)
del(second_samples)
del(nonzero_distances)
distance_df
sample_to_study = all_data.sample_to_study
del(all_data)
distance_df['study_1'] = distance_df['sample_1'].map(sample_to_study)
distance_df['study_2'] = distance_df['sample_2'].map(sample_to_study)
distance_df['same_study'] = distance_df['study_1'] == distance_df['study_2']
distance_df.head()
print(len(distance_df))
###Output
104723274
###Markdown
For some reason my computer didn't want me to make a figure with 50 million points. We'll work with means instead
###Code
means_df = distance_df.groupby(['study_1', 'same_study']).mean()
means_df
means_df = means_df.unstack(level='same_study')
means_df = means_df.reset_index()
means_df.head()
# Get rid of the multilevel confusion
means_df.columns = means_df.columns.droplevel()
means_df.columns = ['study_name', 'distance_to_other', 'distance_to_same']
means_df['difference'] = means_df['distance_to_other'] - means_df['distance_to_same']
means_df.head()
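# The groupby/unstack/droplevel steps above could also be written with pivot_table (a sketch):
# means_df = distance_df.pivot_table(index='study_1', columns='same_study',
#                                    values='distance', aggfunc='mean').reset_index()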
plot = ggplot(means_df, aes(x='study_name', y='difference'))
plot += geom_point()
plot += ylab('out of study - in-study mean')
plot
means_df.sort_values(by='difference')
###Output
_____no_output_____
###Markdown
These results indicate that most of the data is behaving as expected (the distance between pairs of samples within the same study is less than the distance between pairs of samples from different studies). The outliers are mostly bead-chip, which makes sense (though they shouldn't be in the dataset and I'll need to look more closely at that later). The one exception is SRP049820, which is run on an Illumina Genome Analyzer II. Maybe it's due to the old tech? With BE Correction
###Code
%reset -f
# Calling reset because the notebook runs out of memory otherwise
import json
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import yaml
from plotnine import *
from sklearn.metrics.pairwise import euclidean_distances
from saged import utils, datasets, models
dataset_config_file = '../../dataset_configs/refinebio_labeled_dataset.yml'
dataset_config_str = """name: "RefineBioMixedDataset"
compendium_path: "../../data/subset_compendium.pkl"
metadata_path: "../../data/aggregated_metadata.json"
label_path: "../../data/sample_classifications.pkl"
"""
dataset_config = yaml.safe_load(dataset_config_str)
dataset_name = dataset_config.pop('name')
MixedDatasetClass = datasets.RefineBioMixedDataset
all_data = MixedDatasetClass.from_config(**dataset_config)
# Correct for batch effects
all_data = datasets.correct_batch_effects(all_data, 'limma')
###Output
_____no_output_____
###Markdown
Look for samples that are very similar to each other despite having different IDs
###Code
sample_names = all_data.get_samples()
assert len(sample_names) == len(set(sample_names))
sample_names[:5]
expression = all_data.get_all_data()
print(len(sample_names))
print(expression.shape)
sample_distance_matrix = euclidean_distances(expression, expression)
# This is unrelated to debugging the data, I'm just curious
gene_distance_matrix = euclidean_distances(expression.T, expression.T)
sample_distance_matrix.shape
sample_distance_matrix
# See if there are any zero distances outside the diagonal
num_zeros = 10234 * 10234 - np.count_nonzero(sample_distance_matrix)
num_zeros
###Output
_____no_output_____
###Markdown
Since there are as many zeros as elements in the diagonal, there are no duplicate samples with different IDs (unless noise was added somewhere) Get all distances Because we know there aren't any zeros outside of the diagonal, we can zero out the lower triangle and use the non-zero entries of the upper triangle to visualize the distance distribution
###Code
triangle = np.triu(sample_distance_matrix, k=0)
triangle
distances = triangle.flatten()
nonzero_distances = distances[distances != 0]
nonzero_distances.shape
plt.hist(nonzero_distances, bins=20)
###Output
_____no_output_____
###Markdown
Distribution looks bimodal, probably due to different platforms having different distances from each other?
###Code
plt.hist(nonzero_distances[nonzero_distances < 200])
plt.hist(nonzero_distances[nonzero_distances < 100])
###Output
_____no_output_____
###Markdown
Looks like there may be some samples that are abnormally close to each other. I wonder whether they're in the same study Correspondence between distance and study
###Code
# There is almost certainly a vectorized way of doing this but oh well
distances = []
first_samples = []
second_samples = []
for row_index in range(sample_distance_matrix.shape[0]):
for col_index in range(sample_distance_matrix.shape[0]):
distance = sample_distance_matrix[row_index, col_index]
if distance == 0:
continue
distances.append(distance)
first_samples.append(sample_names[row_index])
second_samples.append(sample_names[col_index])
distance_df = pd.DataFrame({'distance': distances, 'sample_1': first_samples,
'sample_2': second_samples})
# Free up memory to prevent swapping (probably hopeless if the user has < 32GB)
del(triangle)
del(sample_distance_matrix)
del(distances)
del(first_samples)
del(second_samples)
del(nonzero_distances)
distance_df
sample_to_study = all_data.sample_to_study
del(all_data)
distance_df['study_1'] = distance_df['sample_1'].map(sample_to_study)
distance_df['study_2'] = distance_df['sample_2'].map(sample_to_study)
distance_df['same_study'] = distance_df['study_1'] == distance_df['study_2']
distance_df.head()
print(len(distance_df))
###Output
104724522
###Markdown
For some reason my computer didn't want me to make a figure with 50 million points. We'll work with means instead
###Code
means_df = distance_df.groupby(['study_1', 'same_study']).mean()
means_df
means_df = means_df.unstack(level='same_study')
means_df = means_df.reset_index()
means_df.head()
# Get rid of the multilevel confusion
means_df.columns = means_df.columns.droplevel()
means_df.columns = ['study_name', 'distance_to_other', 'distance_to_same']
means_df['difference'] = means_df['distance_to_other'] - means_df['distance_to_same']
means_df.head()
plot = ggplot(means_df, aes(x='study_name', y='difference'))
plot += geom_point()
plot += ylab('out of study - in-study mean')
plot
means_df.sort_values(by='difference')
###Output
_____no_output_____ |
source/examples/basics/gog/geom_density2d.ipynb | ###Markdown
geom_density2d()
###Code
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
ggplot(df, aes('cty', 'hwy')) + geom_density2d(aes(color='..group..'))
###Output
_____no_output_____
###Markdown
geom_density2d()
###Code
import pandas as pd
from lets_plot import *
LetsPlot.setup_html()
df = pd.read_csv('https://raw.githubusercontent.com/JetBrains/lets-plot-docs/master/data/mpg.csv')
ggplot(df, aes('cty', 'hwy')) + geom_density2d(aes(color='..group..'))
###Output
_____no_output_____ |
Moringa_Data_Science_Prep_W4_Independent_Project_2021_07_Cindy_Gachuhi_Python_IP.ipynb | ###Markdown
###Code
#let us import the pandas library
import pandas as pd
###Output
_____no_output_____
###Markdown
From the following data sources, we will acquire our datasets for analysis: http://bit.ly/autolib_dataset and https://drive.google.com/a/moringaschool.com/file/d/13DXF2CFWQLeYxxHFekng8HJnH_jtbfpN/view?usp=sharing
###Code
# let us create a dataframe from the following url:
# http://bit.ly/autolib_dataset
df_url = "http://bit.ly/autolib_dataset"
Autolib_dataset = pd.read_csv(df_url)
Autolib_dataset
# let us identify the columns with null values and drop them
#
Autolib_dataset.isnull()
Autolib_dataset.dropna(axis=1,how='all',inplace=True)
Autolib_dataset
# Dropping unnecessary columns
D_autolib= Autolib_dataset.drop(Autolib_dataset.columns[[8,9,10,15,17,18,19]], axis = 1)
D_autolib
# let us access the hour column from our dataframe
D_autolib['hour']
# Now, we want to identify the most popular hour in which the Blue cars are picked up
# To do this, we are going to use the mode() function
#
D_autolib['hour'].mode()
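# A complementary view (optional): how often each pickup hour appears overall
D_autolib['hour'].value_counts().head()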
###Output
_____no_output_____
###Markdown
###Code
#let us import the pandas library
import pandas as pd
###Output
_____no_output_____
###Markdown
From the following data sources, we will acquire our datasets for analysis: http://bit.ly/autolib_dataset and https://drive.google.com/a/moringaschool.com/file/d/13DXF2CFWQLeYxxHFekng8HJnH_jtbfpN/view?usp=sharing
###Code
# let us create a dataframe from the following url:
# http://bit.ly/autolib_dataset
df_url = "http://bit.ly/autolib_dataset"
df = pd.read_csv(df_url)
df
# let us access the hour column from our dataframe
df['hour']
# Now, we want to identify the most popular hour in which the Blue cars are picked up
# To do this, we are going to use the mode() function
#
df['hour'].mode()
###Output
_____no_output_____ |
notebooks/Module1_unit5/4_AreaStatsTimeSeries.ipynb | ###Markdown
Description This notebook is used to request computation of an average time series of a WaPOR data layer for an area using the WaPOR API. You will need a WaPOR API token to use this notebook. Step 1: Read APIToken Get your API token from https://wapor.apps.fao.org/profile. Enter your API token when running the cell below.
###Code
import requests
import pandas as pd
path_query=r'https://io.apps.fao.org/gismgr/api/v1/query/'
path_sign_in=r'https://io.apps.fao.org/gismgr/api/v1/iam/sign-in/'
APIToken=input('Your API token: ')
###Output
Your API token: Enter your API token
###Markdown
Step 2: Get Authorization AccessToken Use the input API token to get an AccessToken for authorization.
###Code
resp_signin=requests.post(path_sign_in,headers={'X-GISMGR-API-KEY':APIToken})
resp_signin = resp_signin.json()
AccessToken=resp_signin['response']['accessToken']
AccessToken
###Output
_____no_output_____
###Markdown
Step 3: Write Query Payload For more examples of the AreaStatsTimeSeries query payload, visit https://io.apps.fao.org/gismgr/api/v1/swagger-ui/examples/AreaStatsTimeSeries.txt
###Code
crs="EPSG:4326" #coordinate reference system
cube_code="L1_PCP_E"
workspace='WAPOR_2'
start_date="2009-01-01"
end_date="2019-01-01"
#get datacube measure
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/measures'
resp=requests.get(cube_url).json()
measure=resp['response']['items'][0]['code']
print('MEASURE: ',measure)
#get datacube time dimension
cube_url=f'https://io.apps.fao.org/gismgr/api/v1/catalog/workspaces/{workspace}/cubes/{cube_code}/dimensions'
resp=requests.get(cube_url).json()
items=pd.DataFrame.from_dict(resp['response']['items'])
dimension=items[items.type=='TIME']['code'].values[0]
print('DIMENSION: ',dimension)
###Output
MEASURE: WATER_MM
DIMENSION: DAY
###Markdown
Define area by coordinate extent
###Code
bbox= [37.95883206252312, 7.89534, 43.32093, 12.3873979377346] #latlon
xmin,ymin,xmax,ymax=bbox[0],bbox[1],bbox[2],bbox[3]
Polygon=[
[xmin,ymin],
[xmin,ymax],
[xmax,ymax],
[xmax,ymin],
[xmin,ymin]
]
query_areatimeseries={
"type": "AreaStatsTimeSeries",
"params": {
"cube": {
"code": cube_code, #cube_code
"workspaceCode": workspace, #workspace code: use WAPOR for v1.0 and WAPOR_2 for v2.1
"language": "en"
},
"dimensions": [
{
"code": dimension, #use DAY DEKAD MONTH or YEAR
"range": f"[{start_date},{end_date})" #start date and endate
}
],
"measures": [
measure
],
"shape": {
"type": "Polygon",
"properties": {
"name": crs #coordinate reference system
},
"coordinates": [
Polygon
]
}
}
}
query_areatimeseries
###Output
_____no_output_____
###Markdown
OR define area by reading a shapefile (its geometry is exported to GeoJSON)
###Code
import ogr
shp_fh=r".\data\Awash_shapefile.shp"
shpfile=ogr.Open(shp_fh)
layer=shpfile.GetLayer()
epsg_code=layer.GetSpatialRef().GetAuthorityCode(None)
shape=layer.GetFeature(0).ExportToJson(as_object=True)['geometry'] #get geometry of shapefile in JSON string
shape["properties"]={"name": "EPSG:{0}".format(epsg_code)}#latlon projection
query_areatimeseries={
"type": "AreaStatsTimeSeries",
"params": {
"cube": {
"code": cube_code,
"workspaceCode": workspace,
"language": "en"
},
"dimensions": [
{
"code": dimension,
"range": f"[{start_date},{end_date})"
}
],
"measures": [
measure
],
"shape": shape
}
}
query_areatimeseries
###Output
_____no_output_____
###Markdown
Step 4: Post the QueryPayload with the AccessToken in the Header The response contains a URL for querying the job.
###Code
resp_query=requests.post(path_query,headers={'Authorization':'Bearer {0}'.format(AccessToken)},
json=query_areatimeseries)
resp_query = resp_query.json()
job_url=resp_query['response']['links'][0]['href']
job_url
###Output
_____no_output_____
###Markdown
Step 5: Get Job Results. It will take some time for the job to finish. When the job is finished, its status will change from 'RUNNING' to 'COMPLETED' or 'COMPLETED WITH ERRORS'. If it is COMPLETED, the area time series results can be retrieved from the response 'output'.
###Code
i=0
print('RUNNING',end=" ")
while i==0:
resp = requests.get(job_url)
resp=resp.json()
if resp['response']['status']=='RUNNING':
print('.',end =" ")
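        # a short pause between polls (e.g. time.sleep(5), after importing time) would be
        # gentler on the API; omitted here to keep the original flow unchanged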
if resp['response']['status']=='COMPLETED':
results=resp['response']['output']
df=pd.DataFrame(results['items'],columns=results['header'])
i=1
if resp['response']['status']=='COMPLETED WITH ERRORS':
print(resp['response']['log'])
i=1
df
df.index=pd.to_datetime(df.day,format='%Y-%m-%d')
df.plot()
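# Optional: aggregate the daily series to monthly totals (a sketch; assumes the
# measure column returned by the API is numeric)
df.select_dtypes('number').resample('M').sum().plot()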
###Output
_____no_output_____ |
sklearn/notes/ensemble_gradient_boosting.ipynb | ###Markdown
Gradient-boosting decision tree (GBDT) In this notebook, we will present the gradient boosting decision tree algorithm and contrast it with AdaBoost. Gradient-boosting differs from AdaBoost for the following reason: instead of assigning weights to specific samples, GBDT will fit a decision tree on the residual errors (hence the name "gradient") of the previous tree. Therefore, each new tree in the ensemble predicts the error made by the previous learner instead of predicting the target directly. In this section, we will provide some intuition about the way learners are combined to give the final prediction. In this regard, let's go back to our regression problem, which is more intuitive for demonstrating the underlying machinery.
###Code
import pandas as pd
import numpy as np
# Create a random number generator that will be used to set the randomness
rng = np.random.RandomState(0)
def generate_data(n_samples=50):
"""Generate synthetic dataset. Returns `data_train`, `data_test`,
`target_train`."""
x_max, x_min = 1.4, -1.4
len_x = x_max - x_min
x = rng.rand(n_samples) * len_x - len_x / 2
noise = rng.randn(n_samples) * 0.3
y = x ** 3 - 0.5 * x ** 2 + noise
data_train = pd.DataFrame(x, columns=["Feature"])
data_test = pd.DataFrame(np.linspace(x_max, x_min, num=300),
columns=["Feature"])
target_train = pd.Series(y, name="Target")
return data_train, data_test, target_train
data_train, data_test, target_train = generate_data()
import matplotlib.pyplot as plt
import seaborn as sns
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
_ = plt.title("Synthetic regression dataset")
###Output
_____no_output_____
###Markdown
As we previously discussed, boosting will be based on assembling a sequence of learners. We will start by creating a decision tree regressor. We will set the depth of the tree so that the resulting learner will underfit the data.
###Code
from sklearn.tree import DecisionTreeRegressor
tree = DecisionTreeRegressor(max_depth=3, random_state=0)
tree.fit(data_train, target_train)
target_train_predicted = tree.predict(data_train)
target_test_predicted = tree.predict(data_test)
###Output
_____no_output_____
###Markdown
Using the term "test" here refers to data that was not used for training. It should not be confused with data coming from a train-test split, as it was generated in equally-spaced intervals for the visual evaluation of the predictions.
###Code
# plot the data
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
# plot the predictions
line_predictions = plt.plot(data_test["Feature"], target_test_predicted, "--")
# plot the residuals
for value, true, predicted in zip(data_train["Feature"],
target_train,
target_train_predicted):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
plt.legend([line_predictions[0], lines_residuals[0]],
["Fitted tree", "Residuals"])
_ = plt.title("Prediction function together \nwith errors on the training set")
###Output
_____no_output_____
###Markdown
Tip In the cell above, we manually edited the legend to get only a single label for all the residual lines. Since the tree underfits the data, its accuracy is far from perfect on the training data. We can observe this in the figure by looking at the difference between the predictions and the ground-truth data. We represent these errors, called "Residuals", by unbroken red lines. Indeed, our initial tree was not expressive enough to handle the complexity of the data, as shown by the residuals. In a gradient-boosting algorithm, the idea is to create a second tree which, given the same data `data`, will try to predict the residuals instead of the vector `target`. We would therefore have a tree that is able to predict the errors made by the initial tree. Let's train such a tree.
###Code
residuals = target_train - target_train_predicted
tree_residuals = DecisionTreeRegressor(max_depth=5, random_state=0)
tree_residuals.fit(data_train, residuals)
target_train_predicted_residuals = tree_residuals.predict(data_train)
target_test_predicted_residuals = tree_residuals.predict(data_test)
sns.scatterplot(x=data_train["Feature"], y=residuals, color="black", alpha=0.5)
line_predictions = plt.plot(
data_test["Feature"], target_test_predicted_residuals, "--")
# plot the residuals of the predicted residuals
for value, true, predicted in zip(data_train["Feature"],
residuals,
target_train_predicted_residuals):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
plt.legend([line_predictions[0], lines_residuals[0]],
["Fitted tree", "Residuals"], bbox_to_anchor=(1.05, 0.8),
loc="upper left")
_ = plt.title("Prediction of the previous residuals")
###Output
_____no_output_____
###Markdown
We see that this new tree only manages to fit some of the residuals. We will focus on a specific sample from the training set (i.e. we know that the sample will be well predicted using two successive trees). We will use this sample to explain how the predictions of both trees are combined. Let's first select this sample in `data_train`.
###Code
sample = data_train.iloc[[-2]]
x_sample = sample['Feature'].iloc[0]
target_true = target_train.iloc[-2]
target_true_residual = residuals.iloc[-2]
###Output
_____no_output_____
###Markdown
Let's plot the previous information and highlight our sample of interest. Let's start by plotting the original data and the prediction of the first decision tree.
###Code
# Plot the previous information:
# * the dataset
# * the predictions
# * the residuals
sns.scatterplot(x=data_train["Feature"], y=target_train, color="black",
alpha=0.5)
plt.plot(data_test["Feature"], target_test_predicted, "--")
for value, true, predicted in zip(data_train["Feature"],
target_train,
target_train_predicted):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
# Highlight the sample of interest
plt.scatter(sample, target_true, label="Sample of interest",
color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend(bbox_to_anchor=(1.05, 0.8), loc="upper left")
_ = plt.title("Tree predictions")
###Output
_____no_output_____
###Markdown
Now, let's plot the residuals information. We will plot the residuals computed from the first decision tree and show the residual predictions.
###Code
# Plot the previous information:
# * the residuals committed by the first tree
# * the residual predictions
# * the residuals of the residual predictions
sns.scatterplot(x=data_train["Feature"], y=residuals,
color="black", alpha=0.5)
plt.plot(data_test["Feature"], target_test_predicted_residuals, "--")
for value, true, predicted in zip(data_train["Feature"],
residuals,
target_train_predicted_residuals):
lines_residuals = plt.plot([value, value], [true, predicted], color="red")
# Highlight the sample of interest
plt.scatter(sample, target_true_residual, label="Sample of interest",
color="tab:orange", s=200)
plt.xlim([-1, 0])
plt.legend()
_ = plt.title("Prediction of the residuals")
###Output
_____no_output_____
###Markdown
For our sample of interest, our initial tree is making an error (small residual). When fitting the second tree, the residual in this case is perfectly fitted and predicted. We will quantitatively check this prediction using the fitted tree. First, let's check the prediction of the initial tree and compare it with the true value.
###Code
print(f"True value to predict for "
f"f(x={x_sample:.3f}) = {target_true:.3f}")
y_pred_first_tree = tree.predict(sample)[0]
print(f"Prediction of the first decision tree for x={x_sample:.3f}: "
f"y={y_pred_first_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_tree:.3f}")
###Output
True value to predict for f(x=-0.517) = -0.393
Prediction of the first decision tree for x=-0.517: y=-0.145
Error of the tree: -0.248
###Markdown
As we visually observed, we have a small error. Now, we can use the second tree to try to predict this residual.
###Code
print(f"Prediction of the residual for x={x_sample:.3f}: "
f"{tree_residuals.predict(sample)[0]:.3f}")
###Output
Prediction of the residual for x=-0.517: -0.248
###Markdown
We see that our second tree is capable of predicting the exact residual (error) of our first tree. Therefore, we can predict the value of `x` by summing the predictions of all the trees in the ensemble.
###Code
y_pred_first_and_second_tree = (
y_pred_first_tree + tree_residuals.predict(sample)[0]
)
print(f"Prediction of the first and second decision trees combined for "
f"x={x_sample:.3f}: y={y_pred_first_and_second_tree:.3f}")
print(f"Error of the tree: {target_true - y_pred_first_and_second_tree:.3f}")
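# The same idea extends to many trees. A rough sketch of the boosting loop (not
# scikit-learn's actual implementation; `learning_rate` is the shrinkage factor that
# GradientBoostingRegressor adds on top of the plain residual fitting shown above):
# prediction = first_tree.predict(data)
# for _ in range(n_additional_trees):
#     residual_tree = DecisionTreeRegressor(max_depth=3).fit(data, target - prediction)
#     prediction = prediction + learning_rate * residual_tree.predict(data)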
###Output
Prediction of the first and second decision trees combined for x=-0.517: y=-0.393
Error of the tree: 0.000
###Markdown
We chose a sample for which only two trees were enough to make the perfect prediction. However, we saw in the previous plot that two trees were not enough to correct the residuals of all samples. Therefore, one needs to add several trees to the ensemble to successfully correct the error (i.e. the second tree corrects the first tree's error, while the third tree corrects the second tree's error and so on). We will compare the generalization performance of random forests and gradient boosting on the California housing dataset.
###Code
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import cross_validate
data, target = fetch_california_housing(return_X_y=True, as_frame=True)
target *= 100 # rescale the target in k$
from sklearn.ensemble import GradientBoostingRegressor
gradient_boosting = GradientBoostingRegressor(n_estimators=200)
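# Main knobs one could tune (a sketch, not a recommendation for this dataset;
# max_depth=3 and learning_rate=0.1 are scikit-learn's defaults):
# gradient_boosting = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)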
cv_results_gbdt = cross_validate(
gradient_boosting, data, target, scoring="neg_mean_absolute_error",
n_jobs=2,
)
print("Gradient Boosting Decision Tree")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_gbdt['test_score'].mean():.3f} +/- "
f"{cv_results_gbdt['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_gbdt['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_gbdt['score_time'].mean():.3f} seconds")
from sklearn.ensemble import RandomForestRegressor
random_forest = RandomForestRegressor(n_estimators=200, n_jobs=2)
cv_results_rf = cross_validate(
random_forest, data, target, scoring="neg_mean_absolute_error",
n_jobs=2,
)
print("Random Forest")
print(f"Mean absolute error via cross-validation: "
f"{-cv_results_rf['test_score'].mean():.3f} +/- "
f"{cv_results_rf['test_score'].std():.3f} k$")
print(f"Average fit time: "
f"{cv_results_rf['fit_time'].mean():.3f} seconds")
print(f"Average score time: "
f"{cv_results_rf['score_time'].mean():.3f} seconds")
###Output
_____no_output_____ |
White House/White House.ipynb | ###Markdown
Until now, we have reviewed the functionality that Jupyter Notebook provides for us. How does the length of employee titles correlate to salary?
###Code
position_title = white_house["Position Title"]
title_length = position_title.apply(len)
salary = white_house["Salary"]
from scipy.stats.stats import pearsonr
pearsonr(title_length, salary)
plt.scatter(title_length, salary)
plt.xlabel("title length")
plt.ylabel("salary")
plt.title("Title length - Salary Scatter Plot")
plt.show()
###Output
_____no_output_____
###Markdown
How much does the White House pay in total salary?
###Code
white_house["Salary"].sum()
###Output
_____no_output_____
###Markdown
Who are the highest and lowest paid staffers?
###Code
max_salary = white_house["Salary"].max()
max_salary_column = white_house["Salary"] == max_salary
white_house.loc[max_salary_column].reset_index(drop = True)
min_salary = white_house["Salary"].min()
min_salary_column = white_house["Salary"] == min_salary
white_house.loc[min_salary_column].reset_index(drop = True)
###Output
_____no_output_____
###Markdown
What words are the most common in titles?
###Code
words = {}
for title in position_title:
title_list = title.split()
for word in title_list:
if word not in words:
words[word] = 1
else:
words[word] += 1
import operator
sorted_words = sorted(words.items(), key=operator.itemgetter(1), reverse = True)
sorted_words
###Output
_____no_output_____ |
code/r/base/it-402-dc-data_processing_error_checking-base.ipynb | ###Markdown
Notes Legal (ISO) gender types:* https://data.gov.uk/education-standards/sites/default/files/CL-Legal-Sex-Type-v2-0.pdf For data from 2010 and all stored as % * need to relax sum to 100%* Symbol Meaning * '-' Not Applicable * '-' No Entries (Table 3) * 0% Less than 0.5% * *** Fewer Than 5 Entries Error Checking & Warnings* Ideally correct errors here and write out corrected csv to file with a note* TODO - log errors found and include error-checking code as part of pre-processing flow Errors to Watch For Please document as not found and/or what corrected, so can trace back to original. Update as needed and mirror in final docs submitted with project.* "Computing" (or "Computing Studies" or "Computing (New)") ... included in list of subjects * need to decide if files will be excluded or included with a flag to track changes in subjects offered* Each subject and grade listed only once per gender* proportions of male/female add up to 1 Warning Only Needed Need only document if triggered.* All values for a subject set to "-" or 0 (rare) -> translates to NAs if read in properly
###Code
# check focus subject (typically, but not necessarily, Computing) in list of subjects
checkFocusSubjectListed <-
function(awardFile, glimpseContent = FALSE, listSubjects = FALSE) {
awardData <- read_csv(awardFile, trim_ws = TRUE) %>% #, skip_empty_rows = T) # NOT skipping empty rows... :(
filter(rowSums(is.na(.)) != ncol(.)) %>%
suppressMessages
print(awardFile)
if (!exists("focus_subject") || is_null(focus_subject) || (str_trim(focus_subject) == "")) {
focus_subject <- "computing"
print(paste("No focus subject specified; defaulting to subjects containing: ", focus_subject))
} else
print(paste("Search on focus subject (containing term) '", focus_subject, "'", sep = ""))
if (glimpseContent)
print(glimpse(awardData))
result <- awardData %>%
select(Subject) %>%
filter(str_detect(Subject, regex(focus_subject, ignore_case = TRUE))) %>%
verify(nrow(.) > 0, error_fun = just_warn)
if (!listSubjects)
return(nrow(result)) # comment out this row to list subject names
else
return(result)
}
# check for data stored as percentages only
checkDataAsPercentageOnly <-
function(awardFile, glimpseContent = FALSE) {
awardData <- read_csv(awardFile, trim_ws = TRUE) %>% #, skip_empty_rows = T) # NOT skipping empty rows... :(
filter(rowSums(is.na(.)) != ncol(.)) %>%
suppressMessages
print(awardFile)
if (glimpseContent)
print(glimpse(awardData))
if (!exists("redundant_column_flags") || is.null(redundant_column_flags))
redundant_column_flags <- c("-percentage*", "-COMP", "-PassesUngradedCourses")
awardData %>%
select(-matches(c(redundant_column_flags, "all-Entries"))) %>% # "-percentage")) %>%
select(matches(c("male-", "female-", "all-"))) %>%
verify(ncol(.) > 0, error_fun = just_warn) %>%
#head(0) - comment in and next line out to list headers remaining
summarise(data_as_counts = (ncol(.) > 0))
}
# error checking - need to manually correct data if mismatch between breakdown by gender and totals found
# this case, if found, is relatively easy to fix
#TODO -include NotKnown and NA
checkDistributionByGenderErrors <-
function(awardFile, glimpseContent = FALSE) {
awardData <- read_csv(awardFile, trim_ws = TRUE) %>% #, skip_empty_rows = T) # NOT skipping empty rows... :(
filter(rowSums(is.na(.)) != ncol(.)) %>%
suppressMessages
print(awardFile)
if (glimpseContent)
print(glimpse(awardData))
if (awardData %>%
select(matches(gender_options)) %>%
verify(ncol(.) > 0, error_fun = just_warn) %>%
summarise(data_as_counts = (ncol(.) == 0)) == TRUE) {
awardData <- awardData %>%
select(-NumberOfCentres) %>%
pivot_longer(!c(Subject), names_to = "grade", values_to = "PercentageOfStudents") %>%
separate("grade", c("gender", "grade"), extra = "merge") %>%
mutate_at(c("gender", "grade"), as.factor) %>%
filter((gender %in% c("all")) & (grade %in% c("Entries")))
# building parallel structure
return(awardData %>%
group_by(Subject) %>%
mutate(total = -1) %>%
summarise(total = sum(total)) %>%
mutate(DataError = TRUE) # confirmation only - comment out to print al
)
}
awardData <- awardData %>%
mutate_at(vars(starts_with("male-") | starts_with("female-") | starts_with("all-")), as.character) %>%
mutate_at(vars(starts_with("male-") | starts_with("female-") | starts_with("all-")), parse_number) %>%
suppressWarnings
data_as_counts <- awardData %>%
select(-matches(redundant_column_flags)) %>% # "-percentage")) %>%
select(matches(c("male-", "female-"))) %>%
summarise(data_as_counts = (ncol(.) > 0)) %>%
as.logical
if (data_as_counts) {
awardData <- awardData %>%
select(-NumberOfCentres) %>%
mutate_at(vars(starts_with("male")), ~(. / `all-Entries`)) %>%
mutate_at(vars(starts_with("female")), ~(. / `all-Entries`)) %>%
select(-(starts_with("all") & !ends_with("-Entries"))) %>%
pivot_longer(!c(Subject), names_to = "grade", values_to = "PercentageOfStudents") %>%
separate("grade", c("gender", "grade"), extra = "merge") %>%
mutate_at(c("gender", "grade"), as.factor) %>%
filter(!(gender %in% c("all")) & (grade %in% c("Entries")))
} else { # dataAsPercentageOnly
awardData <- awardData %>%
select(Subject, ends_with("-percentage")) %>%
mutate_at(vars(ends_with("-percentage")), ~(. / 100)) %>%
pivot_longer(!c(Subject), names_to = "grade", values_to = "PercentageOfStudents") %>%
separate("grade", c("gender", "grade"), extra = "merge") %>%
mutate_at(c("gender", "grade"), as.factor)
} # end if-else - check for data capture approach
awardData %>%
group_by(Subject) %>%
summarise(total = sum(PercentageOfStudents, na.rm = TRUE)) %>%
verify((total == 1.0) | (total == 0), error_fun = just_warn) %>%
mutate(DataError = if_else(((total == 1.0) | (total == 0)), FALSE, TRUE)) %>%
filter(DataError == TRUE) %>% # confirmation only - comment out to print all
suppressMessages # ungrouping messages
}
# warning only - document if necessary
# double-check for subjects with values all NA - does this mean subject being excluded or no one took it?
checkSubjectsWithNoEntries <-
function(awardFile, glimpseContent = FALSE) {
awardData <- read_csv(awardFile, trim_ws = TRUE) %>% #, skip_empty_rows = T) # NOT skipping empty rows... :(
filter(rowSums(is.na(.)) != ncol(.)) %>%
suppressMessages
print(awardFile)
if (glimpseContent)
print(glimpse(awardData))
bind_cols(
awardData %>%
mutate(row_id = row_number()) %>%
select(row_id, Subject),
awardData %>%
select(-c(Subject, NumberOfCentres)) %>%
mutate_at(vars(starts_with("male-") | starts_with("female-") | starts_with("all-")), as.character) %>%
mutate_at(vars(starts_with("male-") | starts_with("female-") | starts_with("all-")), parse_number) %>%
suppressWarnings %>%
assert_rows(num_row_NAs,
within_bounds(0, length(colnames(.)), include.upper = F), everything(), error_fun = just_warn) %>%
# comment out just_warn to stop execution on fail
summarise(column_count = length(colnames(.)),
count_no_entries = num_row_NAs(.))
) %>% # end bind_cols
filter(count_no_entries == column_count) # comment out to print all
}
## call using any of the options below
## where files_to_verify is a vector containing (paths to) files to check
### checkFocusSubjectListed
#lapply(files_to_verify, checkFocusSubjectListed, listSubjects = TRUE)
#Map(checkFocusSubjectListed, files_to_verify, listSubjects = TRUE)
#as.data.frame(sapply(files_to_verify, checkFocusSubjectListed)) # call without as.data.frame if listing values
### checkDataAsPercentageOnly
#sapply(files_to_verify, checkDataAsPercentageOnly)
#Map(checkDataAsPercentageOnly, files_to_verify) #, T)
### checkDistributionByGenderErrors
#data.frame(sapply(files_to_verify, checkDistributionByGenderErrors))
### checkSubjectsWithNoEntries
#data.frame(sapply(files_to_verify, checkSubjectsWithNoEntries))
###Output
_____no_output_____ |
notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb | ###Markdown
Using BagIt to tag oceanographic data [`BagIt`](https://en.wikipedia.org/wiki/BagIt) is a packaging format that supports storage of arbitrary digital content. The "bag" consists of arbitrary content and "tags," the metadata files. `BagIt` packages can be used to facilitate data sharing with federal archive centers - thus ensuring digital preservation of oceanographic datasets within IOOS and its regional associations. NOAA NCEI supports reading from a Web Accessible Folder (WAF) containing bagit archives. For an example please see: http://ncei.axiomdatascience.com/cencoos/ In this notebook we will use the [python interface](http://libraryofcongress.github.io/bagit-python) for `BagIt` to create a "bag" of time-series profile data. First let us load our data from a comma separated values file (`CSV`).
###Code
import os
import pandas as pd
fname = os.path.join('data', 'dsg', 'timeseriesProfile.csv')
df = pd.read_csv(fname, parse_dates=['time'])
df.head()
###Output
_____no_output_____
###Markdown
Instead of "bagging" the `CSV` file, we will use it to create a metadata-rich netCDF file. We can convert the table to a `DSG`, Discrete Sampling Geometry, using `pocean.dsg`. The first thing we need to do is to create a mapping from the data column names to the netCDF `axes`.
###Code
axes = {
't': 'time',
'x': 'lon',
'y': 'lat',
'z': 'depth'
}
###Output
_____no_output_____
###Markdown
Now we can create an [Orthogonal Multidimensional Timeseries Profile](http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html_orthogonal_multidimensional_array_representation_of_time_series) object...
###Code
import os
import tempfile
from pocean.dsg import OrthogonalMultidimensionalTimeseriesProfile as omtsp
output_fp, output = tempfile.mkstemp()
os.close(output_fp)
ncd = omtsp.from_dataframe(
df.reset_index(),
output=output,
axes=axes,
mode='a'
)
###Output
_____no_output_____
###Markdown
... And add some extra metadata before we close the file.
###Code
naming_authority = 'ioos'
st_id = 'Station1'
ncd.naming_authority = naming_authority
ncd.id = st_id
print(ncd)
ncd.close()
###Output
<class 'pocean.dsg.timeseriesProfile.om.OrthogonalMultidimensionalTimeseriesProfile'>
root group (NETCDF4 data model, file format HDF5):
Conventions: CF-1.6
date_created: 2017-11-27T15:11:00Z
featureType: timeSeriesProfile
cdm_data_type: TimeseriesProfile
naming_authority: ioos
id: Station1
dimensions(sizes): station(1), time(100), depth(4)
variables(dimensions): <class 'str'> [4mstation[0m(station), float64 [4mlat[0m(station), float64 [4mlon[0m(station), int32 [4mcrs[0m(), float64 [4mtime[0m(time), int32 [4mdepth[0m(depth), int32 [4mindex[0m(time,depth,station), float64 [4mhumidity[0m(time,depth,station), float64 [4mtemperature[0m(time,depth,station)
groups:
###Markdown
Time to create the archive for the file with `BagIt`. We have to create a folder for the bag.
###Code
temp_bagit_folder = tempfile.mkdtemp()
temp_data_folder = os.path.join(temp_bagit_folder, 'data')
###Output
_____no_output_____
###Markdown
Now we can create the bag and copy the netCDF file to a `data` sub-folder.
###Code
import bagit
import shutil
bag = bagit.make_bag(
temp_bagit_folder,
checksum=['sha256']
)
shutil.copy2(output, temp_data_folder + '/parameter1.nc')
###Output
_____no_output_____
###Markdown
Last, but not least, we have to set bag metadata and update the existing bag with it.
###Code
urn = 'urn:ioos:station:{naming_authority}:{st_id}'.format(
naming_authority=naming_authority,
st_id=st_id
)
bag_meta = {
'Bag-Count': '1 of 1',
'Bag-Group-Identifier': 'ioos_bagit_testing',
'Contact-Name': 'Kyle Wilcox',
'Contact-Phone': '907-230-0304',
'Contact-Email': '[email protected]',
'External-Identifier': urn,
'External-Description':
'Sensor data from station {}'.format(urn),
'Internal-Sender-Identifier': urn,
'Internal-Sender-Description':
'Station - URN:{}'.format(urn),
'Organization-address':
'1016 W 6th Ave, Ste. 105, Anchorage, AK 99501, USA',
'Source-Organization': 'Axiom Data Science',
}
bag.info.update(bag_meta)
bag.save(manifests=True, processes=4)
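# Optionally verify the bag that was just written (a sketch; checks manifests and payload files)
bag_is_valid = bag.is_valid()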
###Output
_____no_output_____
###Markdown
That is it! Simple and efficient! The cell below illustrates the bag directory tree. (Note that the commands below will not work on Windows, and some \*nix systems may require the installation of the command `tree`; however, they are only needed for this demonstration.)
###Code
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
###Output
[01;34m/tmp/tmp5qrdn3qe[00m
├── bag-info.txt
├── bagit.txt
├── [01;34mdata[00m
│ └── parameter1.nc
├── manifest-sha256.txt
└── tagmanifest-sha256.txt
1 directory, 5 files
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter1.nc
###Markdown
We can add more files to the bag as needed.
###Code
shutil.copy2(output, temp_data_folder + '/parameter2.nc')
shutil.copy2(output, temp_data_folder + '/parameter3.nc')
shutil.copy2(output, temp_data_folder + '/parameter4.nc')
bag.save(manifests=True, processes=4)
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
###Output
[01;34m/tmp/tmp5qrdn3qe[00m
├── bag-info.txt
├── bagit.txt
├── [01;34mdata[00m
│ ├── parameter1.nc
│ ├── parameter2.nc
│ ├── parameter3.nc
│ └── parameter4.nc
├── manifest-sha256.txt
└── tagmanifest-sha256.txt
1 directory, 8 files
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter1.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter2.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter3.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter4.nc
###Markdown
Using BagIt to tag oceanographic data [`BagIt`](https://en.wikipedia.org/wiki/BagIt) is a packaging format that supports storage of arbitrary digital content. The "bag" consists of arbitrary content and "tags," the metadata files. `BagIt` packages can be used to facilitate data sharing with federal archive centers - thus ensuring digital preservation of oceanographic datasets within IOOS and its regional associations. NOAA NCEI supports reading from a Web Accessible Folder (WAF) containing bagit archives. For an example please see: http://ncei.axiomdatascience.com/cencoos/ In this notebook we will use the [python interface](http://libraryofcongress.github.io/bagit-python) for `BagIt` to create a "bag" of time-series profile data. First let us load our data from a comma separated values file (`CSV`).
###Code
import os
import pandas as pd
fname = os.path.join("data", "dsg", "timeseriesProfile.csv")
df = pd.read_csv(fname, parse_dates=["time"])
df.head()
###Output
_____no_output_____
###Markdown
Instead of "bagging" the `CSV` file, we will use it to create a metadata-rich netCDF file. We can convert the table to a `DSG`, Discrete Sampling Geometry, using `pocean.dsg`. The first thing we need to do is to create a mapping from the data column names to the netCDF `axes`.
###Code
axes = {"t": "time", "x": "lon", "y": "lat", "z": "depth"}
###Output
_____no_output_____
###Markdown
Now we can create an [Orthogonal Multidimensional Timeseries Profile](http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html_orthogonal_multidimensional_array_representation_of_time_series) object...
###Code
import os
import tempfile
from pocean.dsg import OrthogonalMultidimensionalTimeseriesProfile as omtsp
output_fp, output = tempfile.mkstemp()
os.close(output_fp)
ncd = omtsp.from_dataframe(df.reset_index(), output=output, axes=axes, mode="a")
###Output
_____no_output_____
###Markdown
... And add some extra metadata before we close the file.
###Code
naming_authority = "ioos"
st_id = "Station1"
ncd.naming_authority = naming_authority
ncd.id = st_id
print(ncd)
ncd.close()
###Output
<class 'pocean.dsg.timeseriesProfile.om.OrthogonalMultidimensionalTimeseriesProfile'>
root group (NETCDF4 data model, file format HDF5):
Conventions: CF-1.6
date_created: 2017-11-27T15:11:00Z
featureType: timeSeriesProfile
cdm_data_type: TimeseriesProfile
naming_authority: ioos
id: Station1
dimensions(sizes): station(1), time(100), depth(4)
variables(dimensions): <class 'str'> [4mstation[0m(station), float64 [4mlat[0m(station), float64 [4mlon[0m(station), int32 [4mcrs[0m(), float64 [4mtime[0m(time), int32 [4mdepth[0m(depth), int32 [4mindex[0m(time,depth,station), float64 [4mhumidity[0m(time,depth,station), float64 [4mtemperature[0m(time,depth,station)
groups:
###Markdown
Time to create the archive for the file with `BagIt`. We have to create a folder for the bag.
###Code
temp_bagit_folder = tempfile.mkdtemp()
temp_data_folder = os.path.join(temp_bagit_folder, "data")
###Output
_____no_output_____
###Markdown
Now we can create the bag and copy the netCDF file to a `data` sub-folder.
###Code
import shutil
import bagit
bag = bagit.make_bag(temp_bagit_folder, checksum=["sha256"])
shutil.copy2(output, temp_data_folder + "/parameter1.nc")
###Output
_____no_output_____
###Markdown
Last, but not least, we have to set bag metadata and update the existing bag with it.
###Code
urn = "urn:ioos:station:{naming_authority}:{st_id}".format(
naming_authority=naming_authority, st_id=st_id
)
bag_meta = {
"Bag-Count": "1 of 1",
"Bag-Group-Identifier": "ioos_bagit_testing",
"Contact-Name": "Kyle Wilcox",
"Contact-Phone": "907-230-0304",
"Contact-Email": "[email protected]",
"External-Identifier": urn,
"External-Description": "Sensor data from station {}".format(urn),
"Internal-Sender-Identifier": urn,
"Internal-Sender-Description": "Station - URN:{}".format(urn),
"Organization-address": "1016 W 6th Ave, Ste. 105, Anchorage, AK 99501, USA",
"Source-Organization": "Axiom Data Science",
}
bag.info.update(bag_meta)
bag.save(manifests=True, processes=4)
###Output
_____no_output_____
###Markdown
That is it! Simple and efficient! The cell below illustrates the bag directory tree. (Note that the commands below will not work on Windows, and some \*nix systems may require the installation of the command `tree`; however, they are only needed for this demonstration.)
###Code
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
###Output
[01;34m/tmp/tmp5qrdn3qe[00m
├── bag-info.txt
├── bagit.txt
├── [01;34mdata[00m
│ └── parameter1.nc
├── manifest-sha256.txt
└── tagmanifest-sha256.txt
1 directory, 5 files
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter1.nc
###Markdown
We can add more files to the bag as needed.
###Code
shutil.copy2(output, temp_data_folder + "/parameter2.nc")
shutil.copy2(output, temp_data_folder + "/parameter3.nc")
shutil.copy2(output, temp_data_folder + "/parameter4.nc")
bag.save(manifests=True, processes=4)
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
###Output
[01;34m/tmp/tmp5qrdn3qe[00m
├── bag-info.txt
├── bagit.txt
├── [01;34mdata[00m
│ ├── parameter1.nc
│ ├── parameter2.nc
│ ├── parameter3.nc
│ └── parameter4.nc
├── manifest-sha256.txt
└── tagmanifest-sha256.txt
1 directory, 8 files
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter1.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter2.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter3.nc
63d47afc3b8b227aac251a234ecbb9cfc6cc01d1dd1aa34c65969fdabf0740f1 data/parameter4.nc
|
Exploring+data+(Exploratory+Data+Analysis)+(1).ipynb | ###Markdown
Aggregating statistics
###Code
import pandas as pd
air_quality = pd.read_pickle('air_quality.pkl')
air_quality.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 95685 entries, 0 to 95684
Data columns (total 27 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date_time 95685 non-null datetime64[ns]
1 PM2.5 95685 non-null float64
2 PM10 95685 non-null float64
3 SO2 95685 non-null float64
4 NO2 95685 non-null float64
5 CO 95685 non-null float64
6 O3 95685 non-null float64
7 TEMP 95685 non-null float64
8 PRES 95685 non-null float64
9 DEWP 95685 non-null float64
10 RAIN 95685 non-null float64
11 wd 95685 non-null object
12 WSPM 95685 non-null float64
13 station 95685 non-null object
14 year 95685 non-null int64
15 month 95685 non-null int64
16 day 95685 non-null int64
17 hour 95685 non-null int64
18 quarter 95685 non-null int64
19 day_of_week_num 95685 non-null int64
20 day_of_week_name 95685 non-null object
21 time_until_2022 95685 non-null timedelta64[ns]
22 time_until_2022_days 95685 non-null float64
23 time_until_2022_weeks 95685 non-null float64
24 prior_2016_ind 95685 non-null bool
25 PM2.5_category 95685 non-null category
26 TEMP_category 95685 non-null category
dtypes: bool(1), category(2), datetime64[ns](1), float64(13), int64(6), object(3), timedelta64[ns](1)
memory usage: 17.8+ MB
###Markdown
Series/one column of a DataFrame
###Code
air_quality['TEMP'].count()
air_quality['TEMP'].mean()
air_quality['TEMP'].std()
air_quality['TEMP'].min()
air_quality['TEMP'].max()
air_quality['TEMP'].quantile(0.25)
air_quality['TEMP'].median()
air_quality['TEMP'].describe()
air_quality['RAIN'].sum()
air_quality['PM2.5_category'].mode()
air_quality['PM2.5_category'].nunique()
air_quality['PM2.5_category'].describe()
###Output
_____no_output_____
###Markdown
DataFrame by columns
###Code
air_quality.count()
air_quality.mean()
air_quality.mean(numeric_only=True)
air_quality[['PM2.5', 'TEMP']].mean()
air_quality[['PM2.5', 'TEMP']].min()
air_quality[['PM2.5', 'TEMP']].max()
air_quality.describe().T
air_quality.describe(include=['object', 'category', 'bool'])
air_quality[['PM2.5_category', 'TEMP_category', 'hour']].mode()
air_quality['hour'].value_counts()
air_quality[['PM2.5', 'TEMP']].agg('mean')
air_quality[['PM2.5', 'TEMP']].mean()
air_quality[['PM2.5', 'TEMP']].agg(['min', 'max', 'mean'])
air_quality[['PM2.5', 'PM2.5_category']].agg(['min', 'max', 'mean', 'nunique'])
air_quality[['PM2.5', 'PM2.5_category']].agg({'PM2.5': 'mean', 'PM2.5_category': 'nunique'})
air_quality.agg({'PM2.5': ['min', 'max', 'mean'], 'PM2.5_category': 'nunique'})
def max_minus_min(s):
return s.max() - s.min()
max_minus_min(air_quality['TEMP'])
air_quality[['PM2.5', 'TEMP']].agg(['min', 'max', max_minus_min])
41.6 - (-16.8)
###Output
_____no_output_____
###Markdown
DataFrame by rows
###Code
air_quality[['PM2.5', 'PM10']]
air_quality[['PM2.5', 'PM10']].min()
air_quality[['PM2.5', 'PM10']].min(axis=1)
air_quality[['PM2.5', 'PM10']].mean(axis=1)
air_quality[['PM2.5', 'PM10']].sum(axis=1)
###Output
_____no_output_____
###Markdown
Grouping by
###Code
air_quality.groupby(by='PM2.5_category')
air_quality.groupby(by='PM2.5_category').groups
air_quality['PM2.5_category'].head(20)
air_quality.groupby(by='PM2.5_category').groups.keys()
air_quality.groupby(by='PM2.5_category').get_group('Good')
air_quality.sort_values('date_time')
air_quality.sort_values('date_time').groupby(by='year').first()
air_quality.sort_values('date_time').groupby(by='year').last()
air_quality.groupby('TEMP_category').size()
air_quality['TEMP_category'].value_counts(sort=False)
air_quality.groupby('quarter').mean()
#air_quality[['PM2.5', 'TEMP']].groupby('quarter').mean() # KeyError: 'quarter'
air_quality[['PM2.5', 'TEMP', 'quarter']].groupby('quarter').mean()
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].mean()
air_quality.groupby('quarter').mean()[['PM2.5', 'TEMP']]
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].describe()
air_quality.groupby('quarter')[['PM2.5', 'TEMP']].agg(['min', 'max'])
air_quality.groupby('day_of_week_name')[['PM2.5', 'TEMP', 'RAIN']].agg({'PM2.5': ['min', 'max', 'mean'], 'TEMP': 'mean', 'RAIN': 'mean'})
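# The same aggregation with named aggregation (a sketch; arguably easier to read):
# air_quality.groupby('day_of_week_name').agg(pm25_min=('PM2.5', 'min'), pm25_max=('PM2.5', 'max'),
#                                             pm25_mean=('PM2.5', 'mean'), temp_mean=('TEMP', 'mean'),
#                                             rain_mean=('RAIN', 'mean'))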
air_quality.groupby(['quarter', 'TEMP_category'])[['PM2.5', 'TEMP']].mean()
air_quality.groupby(['TEMP_category', 'quarter'])[['PM2.5', 'TEMP']].mean()
air_quality.groupby(['year', 'quarter', 'month'])['TEMP'].agg(['min', 'max'])
###Output
_____no_output_____
###Markdown
Pivoting tables
###Code
import pandas as pd
student = pd.read_csv('student.csv')
student.info()
student
pd.pivot_table(student,
index='sex')
pd.pivot_table(student,
index=['sex', 'internet']
)
pd.pivot_table(student,
index=['sex', 'internet'],
values='score')
pd.pivot_table(student,
index=['sex', 'internet'],
values='score',
aggfunc='mean')
pd.pivot_table(student,
index=['sex', 'internet'],
values='score',
aggfunc='median')
pd.pivot_table(student,
index=['sex', 'internet'],
values='score',
aggfunc=['min', 'mean', 'max'])
pd.pivot_table(student,
index=['sex', 'internet'],
values='score',
aggfunc='mean',
columns='studytime'
)
student[(student['sex']=='M') & (student['internet']=='no') & (student['studytime']=='4. >10 hours')]
pd.pivot_table(student,
index=['sex', 'internet'],
values='score',
aggfunc='mean',
columns='studytime',
fill_value=-999)
pd.pivot_table(student,
index=['sex', 'internet'],
values=['score', 'age'],
aggfunc='mean',
columns='studytime',
fill_value=-999)
pd.pivot_table(student,
index=['sex'],
values='score',
aggfunc='mean',
columns=['internet', 'studytime'],
fill_value=-999)
pd.pivot_table(student,
index='familysize',
values='score',
aggfunc='mean',
columns='sex'
)
pd.pivot_table(student,
index='familysize',
values='score',
aggfunc='mean',
columns='sex',
margins=True,
margins_name='Average score total')
student[student['sex']=='F'].mean()
pd.pivot_table(student,
index='studytime',
values=['age', 'score'],
aggfunc={'age': ['min', 'max'],
'score': 'median'},
columns='sex')
pd.pivot_table(student,
index='studytime',
values='score',
aggfunc=lambda s: s.max() - s.min(),
columns='sex'
)
###Output
_____no_output_____ |
Read log and experiment outcome.ipynb | ###Markdown
This notebook shows the outcome of the experiments I've conducted as well as the code used to read the 'log.txt' file in real time.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-bright')
import pandas as pd
###Output
_____no_output_____
###Markdown
Real-time plotting of log.txt Let's write a function that reads the current data and plots it:
###Code
def read_plot():
"Reads data and plots it."
df = pd.read_csv('log.txt', parse_dates=['time'])
df = df.set_index(df.pop('time'))
df.temperature.plot.line(title='temperature in the rice cooker')
df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature')
plt.figure(figsize=(10, 6))
read_plot()
plt.legend(loc='upper left')
plt.grid()
###Output
_____no_output_____
###Markdown
First experiment Timings: - Start of the experiment at 12:20:30 (button on). - End of the experiment at 12:44:00 (button turns itself off).
###Code
df = pd.read_csv('log_20160327_v1.txt', parse_dates=['time'])
df = df.set_index(df.pop('time'))
df.temperature.plot.line(title='2016-03-27 rice cooking experiment 1')
df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature')
plt.figure(figsize=(10, 6))
df.temperature.plot.line(title='2016-03-27 rice cooking experiment 1')
df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature')
plt.ylabel('degrees Celsius')
plt.legend(loc='lower right')
###Output
_____no_output_____
###Markdown
Second experiment I've wrapped the probe in a thin plastic layer this time. I'll also let the temperature stabilize before running the experiment. Starting temperature : 20.6 degrees. I started the log when I pushed the button. Push button pops back at 18:58. End of cooking: now warming instead.
###Code
df = pd.read_csv('log_20160327_v2.txt', parse_dates=['time'])
df = df.set_index(df.pop('time'))
df.temperature.plot.line(title='2016-03-27 rice cooking experiment 2')
df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature')
plt.figure(figsize=(10, 6))
df.temperature.plot.line(title='2016-03-27 rice cooking experiment 2')
df.temperature.rolling(window=60).mean().plot(ls='--', label='averaged temperature')
plt.xlim(1459189976.0, 1459191985.0)
plt.ylim(15, 115)
plt.ylabel('degrees Celsius')
plt.legend(loc='lower right')
###Output
_____no_output_____ |
04_Evaluation/04_evaluation.ipynb | ###Markdown
esBERTus: evaluation of the models' results In this notebook, an evaluation of the results obtained by the two models will be performed. The idea here is not so much to measure a benchmarking metric on the models but to understand the qualitative difference between the models. In order to do so: Keyword extraction In order to understand what the "hot topics" of the corpuses used to train the models are, a keyword extraction is performed. Although the possibility to extract keywords based on a word-embeddings approach has been considered, TF-IDF has been chosen over any other approach to model the discussion topics over the different corpuses due to its interpretability. Cleaning the texts For this, a Spacy pipeline is used to speed up the cleaning process
###Code
from spacy.language import Language
import re
@Language.component("clean_lemmatize")
def clean_lemmatize(doc):
text = doc.text
text = re.sub(r'\w*\d\w*', r'', text) # remove words containing digits
    text = re.sub(r'[^a-z\s]', '', text) # remove anything that is not a lowercase letter or a space (uppercase characters are removed, not lowercased)
return nlp.make_doc(text)
print('Done!')
import spacy
# Instantiate the pipeline, disable ner component for perfomance reasons
nlp = spacy.load("en_core_web_sm", disable=['ner'])
# Add custom text cleaning function
nlp.add_pipe('clean_lemmatize', before="tok2vec")
# Apply to EU data
with open('../data/02_preprocessed/full_eu_text.txt') as f:
eu_texts = f.readlines()
nlp.max_length = max([len(text)+1 for text in eu_texts])
eu_texts = [' '.join([token.lemma_ for token in doc]) for doc in nlp.pipe(eu_texts, n_process=10)] # Get lemmas
with open('../data/04_evaluation/full_eu_text_for_tfidf.txt', 'w+') as f:
for text in eu_texts:
f.write(text)
f.write('\n')
print('Done EU!')
# Apply to US data
with open('../data/02_preprocessed/full_us_text.txt') as f:
us_texts = f.readlines()
nlp.max_length = max([len(text)+1 for text in us_texts])
us_texts = [' '.join([token.lemma_ for token in doc]) for doc in nlp.pipe(us_texts, n_process=10)] # Get lemmas
with open('../data/04_evaluation/full_us_text_for_tfidf.txt', 'w+') as f:
for text in us_texts:
f.write(text)
f.write('\n')
print('Done US!')
print('Done!')
###Output
Done EU!
Done US!
Done!
###Markdown
Keyword extraction Due to the differences in lengths and number of texts, it's not possible to use a standard approach to keyword extraction. TF-IDF has been considered, but it takes away most of the very interesting keywords such as "pandemic" or "covid". This is the reason why a hybrid approach between the European and US corpuses has been chosen. The approach takes the top n words from one of the corpuses that intersect with the top n words from the other corpus. In order to find the most relevant words, a simple count vector is used that counts the frequency of the words. This keeps only the words that are really relevant in both cases, even when using a relatively naive approach.
###Code
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np
# Read the processed data
with open('../data/04_evaluation/full_eu_text_for_tfidf.txt') as f:
eu_texts = f.readlines()
with open('../data/04_evaluation/full_us_text_for_tfidf.txt') as f:
us_texts = f.readlines()
# Join the texts together
from nltk.corpus import stopwords
stopwords = set(stopwords.words('english'))
max_df = 0.9
max_features = 1000
cv_eu=CountVectorizer(max_df=max_df, stop_words=stopwords , max_features=max_features)
word_count_vector=cv_eu.fit_transform(eu_texts)
cv_us=CountVectorizer(max_df=max_df, stop_words=stopwords , max_features=max_features)
word_count_vector=cv_us.fit_transform(us_texts)
n_words = 200
keywords = [word for word in list(cv_eu.vocabulary_.keys())[:n_words] if word in list(cv_us.vocabulary_.keys())[:n_words]]
keywords
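# Note: CountVectorizer.vocabulary_ maps terms to column indices, so its key order is not a
# frequency ranking. A frequency-ranked top-n could be built like this (a sketch, commented out;
# word_count_vector currently holds the counts of the last fit, i.e. the US corpus):
# counts = np.asarray(word_count_vector.sum(axis=0)).ravel()
# top_us = [term for term, _ in sorted(zip(cv_us.get_feature_names_out(), counts),  # get_feature_names() on older scikit-learn
#                                      key=lambda pair: -pair[1])[:n_words]]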
###Output
_____no_output_____
###Markdown
Measure the models' performance on masked tokens Extract sentences where the keywords appear
###Code
keywords = ['coronavirus', 'covid', 'covid-19', 'virus', 'influenza', 'flu',
'pandemic', 'epidemic', 'outbreak', 'crisis', 'emergency',
'vaccine', 'vaccinated', 'mask',
'quarantine', 'symptoms', 'antibody', 'inmunity', 'distance', 'isolation',
'test', 'positive', 'negative',
'nurse', 'doctor', 'health', 'healthcare',]
import spacy
from spacy.matcher import PhraseMatcher
with open('../data/02_preprocessed/full_eu_text.txt') as f:
eu_texts = f.readlines()
with open('../data/02_preprocessed/full_us_text.txt') as f:
us_texts = f.readlines()
nlp = spacy.load("en_core_web_sm", disable=['ner'])
texts = [item for sublist in [eu_texts, us_texts] for item in sublist]
nlp.max_length = max([len(text) for text in texts])
phrase_matcher = PhraseMatcher(nlp.vocab)
patterns = [nlp(text) for text in keywords]
phrase_matcher.add('KEYWORDS', None, *patterns)
docs = nlp.pipe(texts, n_process=12)
sentences = []
block_size = 350
# Parse the docs for sentences
open('../data/04_evaluation/sentences.txt', 'wb').close()
print('Starting keyword extraction')
for doc in docs:
for sent in doc.sents:
# Check if the token is in the big sentence
for match_id, start, end in phrase_matcher(nlp(sent.text)):
if nlp.vocab.strings[match_id] in ["KEYWORDS"]:
# Create sentences of length of no more than block size
tokens = sent.text.split(' ')
if len(tokens) <= block_size:
sentence = sent.text
else:
sentence = " ".join(tokens[:block_size])
with open('../data/04_evaluation/sentences.txt', 'ab') as f:
f.write(f'{sentence}\n'.encode('UTF-8'))
print(f"There are {len(open('../data/04_evaluation/sentences.txt', 'rb').readlines())} sentences containing keywords")
###Output
There are 68086 sentences containing keywords
###Markdown
Measure the probability of outputting the real token in the sentence
###Code
# Define a custom function that feeds the three models an example and returns the probability each assigns to the masked keyword
def get_masked_token_probability(sentence:str, keywords:list, models_pipelines:list):
# Find the word in the sentence to mask
sentence = sentence.lower()
keywords = [keyword.lower() for keyword in keywords]
target = None
for keyword in keywords:
# Substitute only the first matched keyword
if keyword in sentence:
target = keyword
masked_sentence = sentence.replace(keyword, '{}', 1)
break
if target:
model_pipeline_results = []
for model_pipeline in models_pipelines:
            # Fill the template with this pipeline's own mask token (keep the original template intact)
            filled_sentence = masked_sentence.format(model_pipeline.tokenizer.mask_token)
            try:
                result = model_pipeline(filled_sentence, targets=target)
model_pipeline_results.append(result[0]['score'])
except Exception as e:
model_pipeline_results.append(0)
return keyword, model_pipeline_results
from transformers import pipeline, AutoModelForMaskedLM, DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained('../data/03_models/tokenizer/')
# The best found European model
model=AutoModelForMaskedLM.from_pretrained("../data/03_models/eu_bert_model")
eu_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
# The best found US model
model=AutoModelForMaskedLM.from_pretrained("../data/03_models/us_bert_model")
us_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
model_checkpoint = 'distilbert-base-uncased'
# The baseline model from which the training started (pre-trained distilbert-base-uncased)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
base_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=model_checkpoint
)
results = []
print(f"There are {len(open('../data/04_evaluation/sentences.txt').readlines())} sentences to be evaluated")
for sequence in open('../data/04_evaluation/sentences.txt').readlines():
    results.append(get_masked_token_probability(sequence, keywords, [eu_model_pipeline, us_model_pipeline, base_model_pipeline]))
import pickle
pickle.dump(results, open('../data/04_evaluation/sentence_token_prediction.pickle', 'wb'))
###Output
_____no_output_____
###Markdown
Evaluate the results
###Code
import pickle
results = pickle.load(open('../data/04_evaluation/sentence_token_prediction.pickle', 'rb'))
results[0:5]
###Output
_____no_output_____
###Markdown
Frequencies of masked words in the pipeline
###Code
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
words = Counter([result[0] for result in results if result!=None]).most_common(len(keywords)) # most_common also sorts them
labels = [word[0] for word in words]
values = [word[1] for word in words]
indexes = np.arange(len(labels))
fig, ax = plt.subplots(figsize=(10,5))
ax.set_xticks(range(len(words)))
plt.bar(indexes, values, width=.8, align="center",alpha=.8)
plt.xticks(indexes, labels, rotation=45)
plt.title('Frequencies of masked words in the pipeline')
plt.show()
###Output
_____no_output_____
###Markdown
Average probability of all the masked keywords by model
###Code
n_results = len([result for result in results if result!=None])
eu_results = sum([(result[1][0]) for result in results if result!=None]) / n_results
us_results = sum([(result[1][1]) for result in results if result!=None]) / n_results
base_results = sum([(result[1][2]) for result in results if result!=None]) / n_results
labels = ['EU model', 'US model', 'Base model']
values = [eu_results, us_results, base_results]
indexes = np.arange(len(labels))
fig, ax = plt.subplots(figsize=(10,5))
ax.set_xticks(range(len(labels)))
plt.bar(indexes, values, width=.6, align="center",alpha=.8)
plt.xticks(indexes, labels, rotation=45)
plt.title('Average probability of all the masked keywords by model')
plt.show()
###Output
_____no_output_____
###Markdown
Get the first predicted token in each sentence, masking the first matched keyword
###Code
def get_first_predicted_masked_token(sentence:str, eu_pipeline, us_pipeline, base_pipeline):
sentence = sentence.lower()
model_pipeline_results = []
eu_model_pipeline_results = eu_pipeline(sentence.format(eu_pipeline.tokenizer.mask_token), top_k=1)
us_model_pipeline_results = us_pipeline(sentence.format(us_pipeline.tokenizer.mask_token), top_k=1)
base_model_pipeline_results = base_pipeline(sentence.format(base_pipeline.tokenizer.mask_token), top_k=1)
return (eu_model_pipeline_results[0]['token_str'].replace(' ', ''),
us_model_pipeline_results[0]['token_str'].replace(' ', ''),
base_model_pipeline_results[0]['token_str'].replace(' ', '')
)
# Identify the first keyword in each sentence, mask it and feed it to the prediction function
results = []
for sequence in open('../data/04_evaluation/sentences.txt').readlines():
target = None
for keyword in keywords:
if keyword in sequence:
target = keyword
break
if target:
masked_sentence = sequence.replace(target, '{}', 1)
try:
predictions = get_first_predicted_masked_token(masked_sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
results.append({'masked_token': target,
'eu_prediction': predictions[0],
'us_prediction': predictions[1],
'base_prediction': predictions[2]})
except:
pass
import pickle
pickle.dump(results, open('../data/04_evaluation/sentence_first_predicted_tokens.pickle', 'wb'))
###Output
Token indices sequence length is longer than the specified maximum sequence length for this model (594 > 512). Running this sequence through the model will result in indexing errors
Token indices sequence length is longer than the specified maximum sequence length for this model (514 > 512). Running this sequence through the model will result in indexing errors
###Markdown
Evaluate the results
###Code
import pickle
results = pickle.load(open('../data/04_evaluation/sentence_first_predicted_tokens.pickle', 'rb'))
print(len(results))
# Group the results by masked token
from itertools import groupby
from operator import itemgetter
from collections import Counter
import numpy as np
import matplotlib.pyplot as plt
n_words = 10
results = sorted(results, key=itemgetter('masked_token'))
for keyword, v in groupby(results, key=lambda x: x['masked_token']):
token_results = list(v)
fig, ax = plt.subplots(1,3, figsize=(25,5))
for idx, (key, name) in enumerate(zip(['eu_prediction', 'us_prediction', 'base_prediction'], ['EU', 'US', 'Base'])):
words = Counter([item[key] for item in token_results]).most_common(n_words)
labels, values = zip(*words)
ax[idx].barh(labels, values, align="center",alpha=.8)
ax[idx].set_title(f'Predicted tokens by {name} model for {keyword}')
plt.show()
###Output
_____no_output_____
###Markdown
Qualitative evaluation of masked token predictionThe objective of this section is not to compare the scores obtained by the models being used, but to compare their qualitative outputs. This means the comparison is done manually, by inputting phrases that contain words related to the COVID-19 pandemic and comparing the models' outputs with one another, which allows the results to be discussed. Feeding selected phrases belonging to the European and United States institutions' websites
###Code
def get_masked_token(sentence:str, eu_pipeline, us_pipeline, base_pipeline, n_results=1):
sentence = sentence.lower()
model_pipeline_results = []
eu_prediction = eu_pipeline(sentence.format(eu_pipeline.tokenizer.mask_token), top_k =n_results)[0]
us_prediction = us_pipeline(sentence.format(us_pipeline.tokenizer.mask_token), top_k =n_results)[0]
base_prediction = base_pipeline(sentence.format(base_pipeline.tokenizer.mask_token), top_k =n_results)[0]
token = eu_prediction['token_str'].replace(' ', '')
print(f"EUROPEAN MODEL -------> {token}\n\t{eu_prediction['sequence'].replace(token, token.upper())}")
token = us_prediction['token_str'].replace(' ', '')
print(f"UNITED STATES MODEL -------> {token}\n\t{us_prediction['sequence'].replace(token, token.upper())}")
token = base_prediction['token_str'].replace(' ', '')
print(f"BASE MODEL -------> {token}\n\t{base_prediction['sequence'].replace(token, token.upper())}")
from transformers import pipeline, AutoModelForMaskedLM, DistilBertTokenizer
tokenizer = DistilBertTokenizer.from_pretrained('../data/03_models/tokenizer/')
# The best found European model
model=AutoModelForMaskedLM.from_pretrained("../data/03_models/eu_bert_model")
eu_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
# The best found US model
model=AutoModelForMaskedLM.from_pretrained("../data/03_models/us_bert_model")
us_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=tokenizer
)
model_checkpoint = 'distilbert-base-uncased'
# The baseline model from which the training started (pre-trained distilbert-base-uncased)
model = AutoModelForMaskedLM.from_pretrained(model_checkpoint)
base_model_pipeline = pipeline(
"fill-mask",
model=model,
tokenizer=model_checkpoint
)
###Output
_____no_output_____
###Markdown
European institutions sentences
###Code
# Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en
# Masked token: coronavirus
sentence = """The European Commission is coordinating a common European response to the {} outbreak. We are taking resolute action to reinforce our public health sectors and mitigate the socio-economic impact in the European Union. We are mobilising all means at our disposal to help our Member States coordinate their national responses and are providing objective information about the spread of the virus and effective efforts to contain it."""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en
# Masked token: vaccine
sentence = """A safe and effective {} is our best chance to beat coronavirus and return to our normal lives"""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response_en
# Masked token: medicines
sentence = """The European Commission is complementing the EU Vaccines Strategy with a strategy on COVID-19 therapeutics to support the development and availability of {}"""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://ec.europa.eu/info/strategy/recovery-plan-europe_en
# Masked token: recovery
sentence = """The EU’s long-term budget, coupled with NextGenerationEU, the temporary instrument designed to boost the {}, will be the largest stimulus package ever financed in Europe. A total of €1.8 trillion will help rebuild a post-COVID-19 Europe. It will be a greener, more digital and more resilient Europe."""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
###Output
EUROPEAN MODEL -------> economy
the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe.
UNITED STATES MODEL -------> economy
the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe.
BASE MODEL -------> economy
the eu ’ s long - term budget, coupled with nextgenerationeu, the temporary instrument designed to boost the ECONOMY, will be the largest stimulus package ever financed in europe. a total of €1. 8 trillion will help rebuild a post - covid - 19 europe. it will be a greener, more digital and more resilient europe.
###Markdown
US Government sentences
###Code
# Source https://www.usa.gov/covid-unemployment-benefits
# Masked token: provide
sentence = 'The federal government has allowed states to change their laws to {} COVID-19 unemployment benefits for people whose jobs have been affected by the coronavirus pandemic.'
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://www.usa.gov/covid-passports-and-travel
# Masked token: mask-wearing
sentence = """Many museums, aquariums, and zoos have restricted access or are closed during the pandemic. And many recreational areas including National Parks have COVID-19 restrictions and {} rules. Check with your destination for the latest information."""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://www.usa.gov/covid-stimulus-checks
# Masked token: people
sentence = """The American Rescue Plan Act of 2021 provides $1,400 Economic Impact Payments for {} who are eligible. You do not need to do anything to receive your payment. It will arrive by direct deposit to your bank account, or by mail in the form of a paper check or debit card."""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://www.usa.gov/covid-scams
# Masked token: scammers
sentence = """During the COVID-19 pandemic, {} may try to take advantage of you. They might get in touch by phone, email, postal mail, text, or social media. Protect your money and your identity. Don't share personal information like your bank account number, Social Security number, or date of birth. Learn how to recognize and report a COVID vaccine scam and other types of coronavirus scams. """
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
# Source https://www.acf.hhs.gov/coronavirus
# Masked token: situation
sentence = """With the COVID-19 {} continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time."""
get_masked_token(sentence, eu_model_pipeline, us_model_pipeline, base_model_pipeline)
###Output
EUROPEAN MODEL -------> crisis
with the covid - 19 CRISIS continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time.
UNITED STATES MODEL -------> pandemic
with the covid - 19 PANDEMIC continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time.
BASE MODEL -------> program
with the covid - 19 PROGRAM continuing to evolve, we continue to provide relevant resources to help our grantees, partners, and stakeholders support children, families, and communities in need during this challenging time.
|
4_Ttests.ipynb | ###Markdown
Week 4 T-testing and Inferential Statistics Most people turn to IBM SPSS for t-testing, but this programme is very expensive, quite dated and not really necessary if you have access to Python tools. It is very focused on point-and-click workflows and is probably more useful to people without a programming background. Libraries
###Code
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import scipy.stats as ss
import statsmodels.stats.weightstats as sm_ttest
###Output
_____no_output_____
###Markdown
Reading * [Independent t-test using SPSS Statistics on laerd.com](https://statistics.laerd.com/spss-tutorials/independent-t-test-using-spss-statistics.php)* [ScipyStats documentation on ttest_ind](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ttest_ind.html)* [StatsModels documentation on ttest_ind](https://www.statsmodels.org/devel/generated/statsmodels.stats.weightstats.ttest_ind.html)* [StatTrek.com, Hypothesis test: The Difference in Means](https://stattrek.com/hypothesis-test/difference-in-means.aspx)* [Python for Data Science, Independent T-Test](https://pythonfordatascience.org/independent-t-test-python/)* [Dependent t-test using SPSS Statistics on laerd.com](https://statistics.laerd.com/spss-tutorials/dependent-t-test-using-spss-statistics.php)* [StackExchange, When conducting a t-test why would one prefer to assume (or test for) equal variances..?](https://stats.stackexchange.com/questions/305/when-conducting-a-t-test-why-would-one-prefer-to-assume-or-test-for-equal-vari) T-testing **Example:** Suppose I take a sample of males and females from the population and calculate their heights. A question I might ask is: is the mean height of males in the population equal to the mean height of females in the population? T-testing is closely related to hypothesis testing. Scipy Stats
###Code
#Generating random data for the heights of 30 males in my sample
m = np.random.normal(1.8, 0.1, 30)
#Generating random data for the heights of 30 females in my sample
f = np.random.normal(1.6, 0.1, 30)
ss.stats.ttest_ind(m, f)
###Output
_____no_output_____
###Markdown
The null hypothesis (H0) claims that the average male height in the population is equal to the average female height. Using my sample, I can infer whether H0 should be rejected. Based on the very small p-value, we can reject the null hypothesis. The p-value is the probability of observing samples that differ at least this much if the two populations had the same mean. We therefore accept the alternative hypothesis (H1), which claims that the average male height is different from the average female height in the population. This is not surprising, as I generated random data with the male heights having a larger mean.
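To make the p-value more concrete, the same statistic can be computed by hand. The cell below is a minimal sketch (reusing the `m` and `f` arrays generated above) of the pooled-variance formula that `ttest_ind` applies with its default `equal_var=True`.
###Code
# Two-sample t-test computed manually with the pooled-variance (Student) formula
n1, n2 = len(m), len(f)
s1, s2 = m.var(ddof=1), f.var(ddof=1)                   # sample variances
sp2 = ((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)   # pooled variance
t_stat = (m.mean() - f.mean()) / np.sqrt(sp2 * (1/n1 + 1/n2))
dof = n1 + n2 - 2
p_value = 2 * ss.t.sf(abs(t_stat), dof)                 # two-sided p-value
t_stat, p_value
###Output
_____no_output_____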
###Code
np.mean(m)
np.mean(f)
###Output
_____no_output_____
###Markdown
Statsmodels
###Code
sm_ttest.ttest_ind(m, f)
###Output
_____no_output_____
###Markdown
Graphical Analysis
###Code
#Seaborn displot to show means
plt.figure()
sns.distplot(m, label = 'male')
sns.distplot(f, label = 'female')
plt.legend();
df = pd.DataFrame({'male': m, 'female': f})
df
###Output
_____no_output_____
###Markdown
It's typically not a good idea to store the two groups side by side in separate columns. It implies a pairing between the rows and breaks down if we don't have the same sample size of males as females. A long (tidy) format with one row per observation is safer.
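The cell below is a small sketch of this idea using hypothetical unequal sample sizes (30 males, 25 females); the following cells then build the long format by hand for the equal-sized samples.
###Code
# Hypothetical unequal-sized samples stacked into a long (tidy) DataFrame
m2 = np.random.normal(1.8, 0.1, 30)
f2 = np.random.normal(1.6, 0.1, 25)
df_long = pd.concat([
    pd.DataFrame({'Gender': 'male', 'Height': m2}),
    pd.DataFrame({'Gender': 'female', 'Height': f2})
], ignore_index=True)
df_long.groupby('Gender')['Height'].describe()
###Output
_____no_output_____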
###Code
a = ['male'] * 30
b = ['female'] * 30
gender = a+b
# I can't join the male and female arrays with + the way I did the lists,
# as they are numpy arrays, so I use np.concatenate instead
height = np.concatenate([m, f])
df = pd.DataFrame({'Gender': gender, 'Height': height})
df
#Taking out just the male heights
df[df['Gender'] == 'male']['Height']
df[df['Gender'] == 'female']['Height']
sns.catplot(x = 'Gender', y = 'Height', jitter = False, data = df);
sns.catplot(x = 'Gender', y = 'Height', kind = 'box', data = df);
###Output
_____no_output_____ |
notebooks/stephan_notebooks/01-Initial_EDA.ipynb | ###Markdown
Matplotlib Compare different resolutions
###Code
mrms4km
np.arange(lons.start, lons.stop, 512/100)
def add_grid(axs):
for ax in axs:
ax.set_xticks(np.arange(lons.start, lons.stop, 512/100))
ax.set_yticks(np.arange(lats.start, lats.stop, -512/100))
ax.grid(True)
ax.set_aspect('equal')
yopp16km
yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
i = 3
valid_time = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
figsize = (16, 5)
axs = mrms4km.sel(time=valid_time.values).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = yopp16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = yopp32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
i = 2
valid_time = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).valid_time
figsize = (16, 5)
axs = mrms4km6h.sel(time=valid_time.values, method='nearest').assign_coords({'time': valid_time.values}).plot(vmin=0, vmax=50, col='time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = tigge_det16km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
axs = tigge_det32km.isel(init_time=i, lead_time=slice(0, 3)).plot(vmin=0, vmax=50, col='lead_time', cmap='gist_ncar_r', figsize=figsize).axes[0]
add_grid(axs)
tigge_ens16km.isel(init_time=i, lead_time=l)
i = 3
l = 0
t = tigge_ens16km.isel(init_time=i, lead_time=slice(l, l+2)).valid_time.values
axs = mrms4km6h.sel(time=t, method='nearest').assign_coords({'time': t}).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(10, 4), col='time').axes[0]
add_grid(axs)
axs = tigge_ens16km.isel(init_time=i, lead_time=l, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0]
add_grid(axs)
axs = tigge_ens16km.isel(init_time=i, lead_time=l+1, member=slice(0, 6)).plot(vmin=0, vmax=50, cmap='gist_ncar_r', figsize=(24, 4), col='member').axes[0]
add_grid(axs)
###Output
_____no_output_____
###Markdown
Holoviews
###Code
import holoviews as hv
hv.extension('bokeh')
hv.config.image_rtol = 1
# from holoviews import opts
# opts.defaults(opts.Scatter3D(color='Value', cmap='viridis', edgecolor='black', s=50))
lons2 = slice(268, 273)
lats2 = slice(40, 35)
lons2 = lons
lats2 = lats
def to_hv(da, dynamic=False, opts={'clim': (1, 50)}):
hv_ds = hv.Dataset(da)
img = hv_ds.to(hv.Image, kdims=["lon", "lat"], dynamic=dynamic)
return img.opts(**opts)
valid_time = yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).valid_time
valid_time2 = tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).valid_time
mrms2km_hv = to_hv(mrms2km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2))
mrms4km_hv = to_hv(mrms4km.sel(time=valid_time, method='nearest').sel(lat=lats2, lon=lons2))
mrms2km6h_hv = to_hv(mrms2km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2))
mrms4km6h_hv = to_hv(mrms4km6h.sel(time=valid_time2, method='nearest').sel(lat=lats2, lon=lons2))
yopp16km_hv = to_hv(yopp16km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
yopp32km_hv = to_hv(yopp32km.isel(lead_time=slice(0, 4), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
tigge_det16km_hv = to_hv(tigge_det16km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
tigge_det32km_hv = to_hv(tigge_det32km.isel(lead_time=slice(0, 3), init_time=slice(0, 3)).sel(lat=lats2, lon=lons2))
###Output
_____no_output_____
###Markdown
Which resolution for MRMS?
###Code
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') [width=600, height=600]
# mrms4km6h_hv + tigge_det16km_hv + tigge_det32km_hv
# mrms4km_hv + yopp16km_hv + yopp32km_hv
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') [width=600, height=600]
mrms4km_hv + mrms4km6h_hv
hv_yopp = yopp.isel(init_time=0).sel(latitude=lats, longitude=lons)
hv_yopp.coords['time'] = hv_yopp.init_time + hv_yopp.lead_time
hv_yopp = hv_yopp.swap_dims({'lead_time': 'time'})
# hv_yopp
hv_mrms = hv.Dataset(mrms.sel(latitude=lats, longitude=lons)[1:])
hv_yopp = hv.Dataset(hv_yopp.sel(time=mrms.time[1:]))
img1 = hv_mrms.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
img2 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') plot[colorbar=True]
%%opts Image [width=500, height=400]
img1 + img2
hv_yopp = yopp.sel(latitude=lats, longitude=lons)
hv_yopp = hv.Dataset(hv_yopp)
img1 = hv_yopp.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
%%output holomap='widgets'
%%opts Image style(cmap='gist_ncar_r') plot[colorbar=True]
%%opts Image [width=500, height=400]
img1
hv_ds = hv.Dataset(da.sel(latitude=lats, longitude=lons))
hv_ds
a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
a.opts(colorbar=True, fig_size=200, cmap='viridis')
###Output
_____no_output_____
###Markdown
Old
###Code
path = '../data/MultiSensor_QPE_01H_Pass1/'
da1 = open_nrms('../data/MultiSensor_QPE_01H_Pass1/')
da3 = open_nrms('../data/MultiSensor_QPE_03H_Pass1/')
dar = open_nrms('../data/RadarOnly_QPE_03H/')
da3p = open_nrms('../data/MultiSensor_QPE_03H_Pass2/')
da1
da3
da13 = da1.rolling(time=3).sum()
(da13 - da3).isel(time=3).sel(latitude=lats, longitude=lons).plot()
da13.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('1h accumulation with rolling(time=3).sum()', y=1.05)
da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation', y=1.05)
dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 40), longitude=slice(268, 272)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation radar', y=1.05)
da3.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation', y=1.05)
dar.isel(time=slice(0, 7)).sel(latitude=slice(44, 43), longitude=slice(269, 270)).plot(col='time', vmin=0, vmax=50)
plt.suptitle('3h accumulation radar', y=1.05)
for t in np.arange('2020-10-23', '2020-10-25', np.timedelta64(3, 'h'), dtype='datetime64[h]'):
print(t)
print('Radar', (dar.time.values == t).sum() > 0)
print('Pass1', (da3.time.values == t).sum() > 0)
print('Pass2', (da3p.time.values == t).sum() > 0)
t
(dar.time.values == t).sum() > 0
da3.time.values
def plot_facet(da, title='', **kwargs):
p = da.plot(
col='time', col_wrap=3,
subplot_kws={'projection': ccrs.PlateCarree()},
transform=ccrs.PlateCarree(),
figsize=(15, 15), **kwargs
)
for ax in p.axes.flat:
ax.coastlines()
ax.add_feature(states_provinces, edgecolor='gray')
# ax.set_extent([113, 154, -11, -44], crs=ccrs.PlateCarree())
plt.suptitle(title);
plot_facet(da.isel(time=slice(0, 9)).sel(latitude=lats, longitude=lons), vmin=0, vmax=10, add_colorbar=False)
import holoviews as hv
hv.extension('matplotlib')
from holoviews import opts
opts.defaults(opts.Scatter3D(color='Value', cmap='fire', edgecolor='black', s=50))
hv_ds = hv.Dataset(da.sel(latitude=lats, longitude=lons))
hv_ds
a = hv_ds.to(hv.Image, kdims=["longitude", "latitude"], dynamic=False)
a.opts(colorbar=True, fig_size=200, cmap='viridis')
da.longitude.diff('longitude').min()
!cp ../data/yopp/2020-10-23.nc ../data/yopp/2020-10-23.grib
a = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio')
a
a.g4_lat_2.diff('g4_lat_2')
a.g4_lon_3.diff('g4_lon_3')
!cp ../data/tigge/2020-10-23.nc ../data/tigge/2020-10-23.grib
b = xr.open_dataset('../data/tigge/2020-10-23.grib', engine='pynio')
b
###Output
_____no_output_____
###Markdown
Initial data and problem exploration
###Code
import xarray as xr
import pandas as pd
import urllib.request
import numpy as np
from glob import glob
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
import os
import cartopy.feature as cfeature
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='50m',
facecolor='none')
###Output
_____no_output_____
###Markdown
Data preprocessing TIGGE ECMWF Control run
###Code
tigge_ctrl = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc")
tigge_ctrl
tigge_ctrl.lat.min()
tigge_2dslice = tigge_ctrl.t2m.isel(lead_time=4, init_time=0)
p = tigge_2dslice.plot(
subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"),
transform=ccrs.PlateCarree(),)
#p.axes.set_global()
p.axes.coastlines()
###Output
_____no_output_____
###Markdown
TIGGE CTRL precip
###Code
prec = xr.open_mfdataset("/datadrive/tigge/raw/total_precipitation/*.nc")
prec # aggregated precipitation
prec.tp.mean('init_time').diff('lead_time').plot(col='lead_time', col_wrap=3) # that takes a while!
###Output
_____no_output_____
###Markdown
Checking regridding
###Code
t2m_raw = xr.open_mfdataset("/datadrive/tigge/raw/2m_temperature/2019-10.nc")
t2m_32 = xr.open_mfdataset("/datadrive/tigge/32km/2m_temperature/2019-10.nc")
t2m_16 = xr.open_mfdataset("/datadrive/tigge/16km/2m_temperature/2019-10.nc")
for ds in [t2m_raw, t2m_16, t2m_32]:
tigge_2dslice = ds.t2m.isel(lead_time=4, init_time=-10)
plt.figure()
p = tigge_2dslice.plot(levels=np.arange(270,305),
subplot_kws=dict(projection=ccrs.Orthographic(-80, 35), facecolor="gray"),
transform=ccrs.PlateCarree(),)
p.axes.coastlines()
###Output
_____no_output_____
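###Markdown
Beyond the visual check, a quick numerical sanity test is to compare the domain-mean 2 m temperature of the same slice across the raw and regridded datasets. The cell below is a small sketch reusing the datasets opened above; the means should agree closely if the regridding behaves.
###Code
# Domain-mean 2 m temperature of the same slice at each resolution
for name, ds in [('raw', t2m_raw), ('16 km', t2m_16), ('32 km', t2m_32)]:
    slice_mean = float(ds.t2m.isel(lead_time=4, init_time=-10).mean())
    print(f'{name}: {slice_mean:.2f} K')
###Output
_____no_output_____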
###Markdown
Ensemble
###Code
!ls -lh ../data/tigge/2020-10-23_ens2.grib
tigge = xr.open_mfdataset('../data/tigge/2020-10-23_ens2.grib', engine='pynio').isel()
tigge = tigge.rename({
'tp_P11_L1_GGA0_acc': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time0': 'lead_time',
'lat_0': 'latitude',
'lon_0': 'longitude',
'ensemble0' : 'member'
}).diff('lead_time').tp
tigge = tigge.where(tigge >= 0, 0)
# tigge = tigge * 1000 # m to mm
tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time')
tigge
tigge.to_netcdf('../data/tigge/2020-10-23_ens_preprocessed.nc')
###Output
_____no_output_____
###Markdown
Deterministic
###Code
tigge = xr.open_mfdataset('../data/tigge/2020-10-23.grib', engine='pynio')
tigge = tigge.rename({
'tp_P11_L1_GGA0_acc': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time0': 'lead_time',
'lat_0': 'latitude',
'lon_0': 'longitude',
}).diff('lead_time').tp
tigge = tigge.where(tigge >= 0, 0)
tigge.coords['valid_time'] = xr.concat([i + tigge.lead_time for i in tigge.init_time], 'init_time')
tigge
tigge.to_netcdf('../data/tigge/2020-10-23_preprocessed.nc')
###Output
_____no_output_____
###Markdown
YOPP
###Code
yopp = xr.open_dataset('../data/yopp/2020-10-23.grib', engine='pynio').TP_GDS4_SFC
yopp2 = xr.open_dataset('../data/yopp/2020-10-23_12.grib', engine='pynio').TP_GDS4_SFC
yopp = xr.merge([yopp, yopp2]).rename({
'TP_GDS4_SFC': 'tp',
'initial_time0_hours': 'init_time',
'forecast_time1': 'lead_time',
'g4_lat_2': 'latitude',
'g4_lon_3': 'longitude'
})
yopp = yopp.diff('lead_time').tp
yopp = yopp.where(yopp >= 0, 0)
yopp = yopp * 1000 # m to mm
yopp.coords['valid_time'] = xr.concat([i + yopp.lead_time for i in yopp.init_time], 'init_time')
yopp.to_netcdf('../data/yopp/2020-10-23_preprocessed.nc')
###Output
_____no_output_____
###Markdown
NRMS data
###Code
def time_from_fn(fn):
s = fn.split('/')[-1].split('_')[-1]
year = s[:4]
month = s[4:6]
day = s[6:8]
hour = s[9:11]
return np.datetime64(f'{year}-{month}-{day}T{hour}')
def open_nrms(path):
fns = sorted(glob(f'{path}/*'))
dss = [xr.open_dataset(fn, engine='pynio') for fn in fns]
times = [time_from_fn(fn) for fn in fns]
times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times})
ds = xr.concat(dss, times).rename({'lat_0': 'latitude', 'lon_0': 'longitude'})
da = ds[list(ds)[0]].rename('tp')
return da
def get_mrms_fn(path, source, year, month, day, hour):
month, day, hour = [str(x).zfill(2) for x in [month, day, hour]]
fn = f'{path}/{source}/MRMS_{source}_00.00_{year}{month}{day}-{hour}0000.grib2'
# print(fn)
return fn
def load_mrms_data(path, start_time, stop_time, accum=3):
times = pd.to_datetime(np.arange(start_time, stop_time, np.timedelta64(accum, 'h'), dtype='datetime64[h]'))
das = []
for t in times:
if os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass1', t.year, t.month, t.day, t.hour), engine='pynio')
elif os.path.exists(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'MultiSensor_QPE_0{accum}H_Pass2', t.year, t.month, t.day, t.hour), engine='pynio')
elif os.path.exists(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour)):
ds = xr.open_dataset(get_mrms_fn(path, f'RadarOnly_QPE_0{accum}H', t.year, t.month, t.day, t.hour), engine='pynio')
else:
raise Exception(f'No data found for {t}')
ds = ds.rename({'lat_0': 'latitude', 'lon_0': 'longitude'})
da = ds[list(ds)[0]].rename('tp')
das.append(da)
times = xr.DataArray(times, name='time', dims=['time'], coords={'time': times})
da = xr.concat(das, times)
return da
mrms = load_mrms_data('../data/', '2020-10-23', '2020-10-25')
mrms6h = mrms.rolling(time=2).sum().isel(time=slice(0, None, 2))
mrms.to_netcdf('../data/mrms/mrms_preprocessed.nc')
mrms6h.to_netcdf('../data/mrms/mrms6_preprocessed.nc')
###Output
_____no_output_____
###Markdown
Analysis
###Code
!ls ../data
tigge_det = xr.open_dataarray('../data/tigge/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
tigge_ens = xr.open_dataarray('../data/tigge/2020-10-23_ens_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
yopp = xr.open_dataarray('../data/yopp/2020-10-23_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
mrms = xr.open_dataarray('../data/mrms/mrms_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
mrms6h = xr.open_dataarray('../data/mrms/mrms6_preprocessed.nc').rename({'latitude': 'lat', 'longitude': 'lon'})
###Output
_____no_output_____
###Markdown
Regrid
###Code
import xesmf as xe
lons = slice(260, 280)
lats = slice(45, 25)
def regrid(ds, km, lats, lons):
deg = km/100.
grid = xr.Dataset(
{
'lat': (['lat'], np.arange(lats.start, lats.stop, -deg)),
'lon': (['lon'], np.arange(lons.start, lons.stop, deg))
}
)
regridder = xe.Regridder(ds.sel(lat=lats, lon=lons), grid, 'bilinear')
return regridder(ds.sel(lat=lats, lon=lons), keep_attrs=True)
mrms4km = regrid(mrms, 4, lats, lons)
mrms2km = regrid(mrms, 2, lats, lons)
mrms4km6h = regrid(mrms6h, 4, lats, lons)
mrms2km6h = regrid(mrms6h, 2, lats, lons)
mrms4km6h = mrms4km6h.rename('tp')
mrms2km6h =mrms2km6h.rename('tp')
yopp16km = regrid(yopp, 16, lats, lons)
yopp32km = regrid(yopp, 32, lats, lons)
tigge_det16km = regrid(tigge_det, 16, lats, lons)
tigge_det32km = regrid(tigge_det, 32, lats, lons)
tigge_ens16km = regrid(tigge_ens, 16, lats, lons)
tigge_ens32km = regrid(tigge_ens, 32, lats, lons)
!mkdir ../data/regridded
mrms2km.to_netcdf('../data/regridded/mrms2km.nc')
mrms4km.to_netcdf('../data/regridded/mrms4km.nc')
mrms2km6h.to_netcdf('../data/regridded/mrms2km6h.nc')
mrms4km6h.to_netcdf('../data/regridded/mrms4km6h.nc')
yopp16km.to_netcdf('../data/regridded/yopp16km.nc')
yopp32km.to_netcdf('../data/regridded/yopp32km.nc')
tigge_det16km.to_netcdf('../data/regridded/tigge_det16km.nc')
tigge_det32km.to_netcdf('../data/regridded/tigge_det32km.nc')
tigge_ens16km.to_netcdf('../data/regridded/tigge_ens16km.nc')
tigge_ens32km.to_netcdf('../data/regridded/tigge_ens32km.nc')
mrms2km = xr.open_dataarray('../data/regridded/mrms2km.nc')
mrms4km = xr.open_dataarray('../data/regridded/mrms4km.nc')
mrms2km6h = xr.open_dataarray('../data/regridded/mrms2km6h.nc')
mrms4km6h = xr.open_dataarray('../data/regridded/mrms4km6h.nc')
yopp16km = xr.open_dataarray('../data/regridded/yopp16km.nc')
yopp32km = xr.open_dataarray('../data/regridded/yopp32km.nc')
tigge_det16km = xr.open_dataarray('../data/regridded/tigge_det16km.nc')
tigge_det32km = xr.open_dataarray('../data/regridded/tigge_det32km.nc')
tigge_ens16km = xr.open_dataarray('../data/regridded/tigge_ens16km.nc')
tigge_ens32km = xr.open_dataarray('../data/regridded/tigge_ens32km.nc')
###Output
_____no_output_____ |
docs/intro_sql_basic.ipynb | ###Markdown
Querying SQL (intro) Reading in dataIn this tutorial, we'll use the mtcars data ([source](https://stat.ethz.ch/R-manual/R-devel/library/datasets/html/mtcars.html)) that comes packaged with siuba. This data contains information about 32 cars, like their miles per gallon (`mpg`), and number of cylinders (`cyl`). This data in siuba is a pandas DataFrame.
###Code
from siuba.data import mtcars
mtcars.head()
###Output
_____no_output_____
###Markdown
First, we'll use sqlalchemy's `create_engine` together with the pandas method `to_sql` to copy the data into a sqlite table.Once we have that, `siuba` can use a class called `LazyTbl` to connect to the table.
###Code
from sqlalchemy import create_engine
from siuba.sql import LazyTbl
# copy in to sqlite
engine = create_engine("sqlite:///:memory:")
mtcars.to_sql("mtcars", engine, if_exists = "replace")
# connect with siuba
tbl_mtcars = LazyTbl(engine, "mtcars")
tbl_mtcars
###Output
_____no_output_____
###Markdown
Notice that `siuba` by default prints a glimpse into the current data, along with some extra information about the database we're connected to. However, in this case, there are more than 5 rows of data. In order to get all of it back as a pandas DataFrame we need to `collect()` it. Connecting to existing databaseWhile we use `sqlalchemy.create_engine` to connect to a database in the previous section, `LazyTbl` also accepts a string as its first argument, followed by a table name.This is shown below, with placeholder variables, like "username" and "password". See this [SqlAlchemy doc](https://docs.sqlalchemy.org/en/13/core/engines.htmldatabase-urls) for more.```pythontbl = LazyTbl( "postgresql://username:password@localhost:5432/dbname", "tablename" )``` Collecting data and previewing queries
###Code
from siuba import head, collect, show_query
tbl_mtcars >> head(2) >> collect()
tbl_mtcars >> head(2) >> show_query()
###Output
SELECT mtcars."index", mtcars.mpg, mtcars.cyl, mtcars.disp, mtcars.hp, mtcars.drat, mtcars.wt, mtcars.qsec, mtcars.vs, mtcars.am, mtcars.gear, mtcars.carb
FROM mtcars
LIMIT 2 OFFSET 0
###Markdown
Basic queries A core goal of `siuba` is to make sure most column operations and methods that work on a pandas DataFrame, also work with a SQL table. As a result, the examples in these docs also work when applied to SQL.This is shown below for `filter`, `summarize`, and `mutate`.
###Code
from siuba import _, filter, select, group_by, summarize, mutate
tbl_mtcars >> filter(_.cyl == 6)
(tbl_mtcars
>> group_by(_.cyl)
>> summarize(avg_mpg = _.mpg.mean())
)
tbl_mtcars >> select(_.mpg, _.cyl, _.endswith('t'))
tbl_mtcars >> \
    mutate(feetpg = _.mpg * 5280, inchpg = _.feetpg * 12)
###Output
_____no_output_____ |
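###Markdown
The translation to SQL can also be inspected for longer chains. The cell below is a small sketch reusing the verbs imported above (the exact SQL text may differ between siuba versions): it builds a grouped aggregation lazily, shows the query it generates, and then pulls the result back as a pandas DataFrame.
###Code
# Build a grouped aggregation lazily, inspect the generated SQL, then fetch the result
query = (tbl_mtcars
    >> group_by(_.cyl)
    >> summarize(avg_mpg = _.mpg.mean(), avg_hp = _.hp.mean())
    )

query >> show_query()
query >> collect()
###Output
_____no_output_____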
biobb_wf_amber_md_setup/notebooks/mdsetup_lig/biobb_amber_complex_setup_notebook.ipynb | ###Markdown
Protein-ligand complex MD Setup tutorial using BioExcel Building Blocks (biobb) --***AmberTools package version***--**Based on the [MDWeb](http://mmb.irbbarcelona.org/MDWeb2/) [Amber FULL MD Setup tutorial](https://mmb.irbbarcelona.org/MDWeb2/help.php?id=workflowsAmberWorkflowFULL)*****This tutorial aims to illustrate the process of **setting up a simulation system** containing a **protein in complex with a ligand**, step by step, using the **BioExcel Building Blocks library (biobb)** wrapping the **AmberTools** utility from the **AMBER package**. The particular example used is the **T4 lysozyme** protein (PDB code [3HTB](https://www.rcsb.org/structure/3HTB)) with two residue modifications ***L99A/M102Q*** complexed with the small ligand **2-propylphenol** (3-letter code [JZ4](https://www.rcsb.org/ligand/JZ4)). *** Settings Biobb modules used - [biobb_io](https://github.com/bioexcel/biobb_io): Tools to fetch biomolecular data from public databases. - [biobb_amber](https://github.com/bioexcel/biobb_amber): Tools to setup and run Molecular Dynamics simulations with AmberTools. - [biobb_structure_utils](https://github.com/bioexcel/biobb_structure_utils): Tools to modify or extract information from a PDB structure file. - [biobb_analysis](https://github.com/bioexcel/biobb_analysis): Tools to analyse Molecular Dynamics trajectories. - [biobb_chemistry](https://github.com/bioexcel/biobb_chemistry): Tools to to perform chemical conversions. Auxiliar libraries used - [nb_conda_kernels](https://github.com/Anaconda-Platform/nb_conda_kernels): Enables a Jupyter Notebook or JupyterLab application in one conda environment to access kernels for Python, R, and other languages found in other environments. - [nglview](http://nglviewer.org/nglview): Jupyter/IPython widget to interactively view molecular structures and trajectories in notebooks. - [ipywidgets](https://github.com/jupyter-widgets/ipywidgets): Interactive HTML widgets for Jupyter notebooks and the IPython kernel. - [plotly](https://plot.ly/python/offline/): Python interactive graphing library integrated in Jupyter notebooks. - [simpletraj](https://github.com/arose/simpletraj): Lightweight coordinate-only trajectory reader based on code from GROMACS, MDAnalysis and VMD. Conda Installation and Launch```consolegit clone https://github.com/bioexcel/biobb_wf_amber_md_setup.gitcd biobb_wf_amber_md_setupconda env create -f conda_env/environment.ymlconda activate biobb_AMBER_MDsetup_tutorialsjupyter-nbextension enable --py --user widgetsnbextensionjupyter-nbextension enable --py --user nglviewjupyter-notebook biobb_wf_amber_md_setup/notebooks/mdsetup_lig/biobb_amber_complex_setup_notebook.ipynb``` *** Pipeline steps 1. [Input Parameters](input) 2. [Fetching PDB Structure](fetch) 3. [Preparing PDB file for AMBER](pdb4amber) 4. [Create ligand system topology](ligtop) 5. [Create Protein-Ligand Complex System Topology](top) 6. [Energetically Minimize the Structure](minv) 7. [Create Solvent Box and Solvating the System](box) 8. [Adding Ions](ions) 9. [Energetically Minimize the System](min) 10. [Heating the System](heating) 11. [Equilibrate the System (NVT)](nvt) 12. [Equilibrate the System (NPT)](npt) 13. [Free Molecular Dynamics Simulation](free) 14. [Post-processing and Visualizing Resulting 3D Trajectory](post) 15. [Output Files](output) 16. 
[Questions & Comments](questions) ***<img src="https://bioexcel.eu/wp-content/uploads/2019/04/Bioexcell_logo_1080px_transp.png" alt="Bioexcel2 logo" title="Bioexcel2 logo" width="400" />*** Input parameters**Input parameters** needed: - **pdbCode**: PDB code of the protein structure (e.g. 3HTB) - **ligandCode**: 3-letter code of the ligand (e.g. JZ4) - **mol_charge**: Charge of the ligand (e.g. 0)
###Code
import nglview
import ipywidgets
import plotly
from plotly import subplots
import plotly.graph_objs as go
pdbCode = "3htb"
ligandCode = "JZ4"
mol_charge = 0
###Output
_____no_output_____
###Markdown
*** Fetching PDB structureDownloading **PDB structure** with the **protein molecule** from the RCSB PDB database.Alternatively, a **PDB file** can be used as starting structure. Stripping from the **downloaded structure** any **crystallographic water** molecule or **heteroatom**. *****Building Blocks** used: - [pdb](https://biobb-io.readthedocs.io/en/latest/api.htmlmodule-api.pdb) from **biobb_io.api.pdb** - [remove_pdb_water](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.remove_pdb_water) from **biobb_structure_utils.utils.remove_pdb_water** - [remove_ligand](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.remove_ligand) from **biobb_structure_utils.utils.remove_ligand*****
###Code
# Import module
from biobb_io.api.pdb import pdb
# Create properties dict and inputs/outputs
downloaded_pdb = pdbCode+'.pdb'
prop = {
'pdb_code': pdbCode,
'filter': False
}
#Create and launch bb
pdb(output_pdb_path=downloaded_pdb,
properties=prop)
# Show protein
view = nglview.show_structure_file(downloaded_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.1', selection='water')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ligand')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='ion')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
# Import module
from biobb_structure_utils.utils.remove_pdb_water import remove_pdb_water
# Create properties dict and inputs/outputs
nowat_pdb = pdbCode+'.nowat.pdb'
#Create and launch bb
remove_pdb_water(input_pdb_path=downloaded_pdb,
output_pdb_path=nowat_pdb)
# Import module
from biobb_structure_utils.utils.remove_ligand import remove_ligand
# Removing PO4 ligands:
# Create properties dict and inputs/outputs
nopo4_pdb = pdbCode+'.noPO4.pdb'
prop = {
'ligand' : 'PO4'
}
#Create and launch bb
remove_ligand(input_structure_path=nowat_pdb,
output_structure_path=nopo4_pdb,
properties=prop)
# Removing BME ligand:
# Create properties dict and inputs/outputs
nobme_pdb = pdbCode+'.noBME.pdb'
prop = {
'ligand' : 'BME'
}
#Create and launch bb
remove_ligand(input_structure_path=nopo4_pdb,
output_structure_path=nobme_pdb,
properties=prop)
###Output
_____no_output_____
###Markdown
Visualizing 3D structure
###Code
# Show protein
view = nglview.show_structure_file(nobme_pdb)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', color='sstruc')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='hetero')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
###Output
_____no_output_____
###Markdown
*** Preparing PDB file for AMBERBefore starting a **protein MD setup**, it is always strongly recommended to take a look at the initial structure and try to identify important **properties** and also possible **issues**. These properties and issues can be serious, as for example the definition of **disulfide bridges**, the presence of **non-standard amino acids** or **ligands**, or **missing residues**. Other **properties** and **issues** might not be so serious, but they still need to be addressed before starting the **MD setup process**. **Missing hydrogen atoms**, presence of **alternate atomic location indicators** or **inserted residue codes** (see [PDB file format specification](https://www.wwpdb.org/documentation/file-format-content/format33/sect9.htmlATOM)) are examples of these not so crucial characteristics. Please visit the [AMBER tutorial: Building Protein Systems in Explicit Solvent](http://ambermd.org/tutorials/basic/tutorial7/index.php) for more examples. **AmberTools** utilities from the **AMBER MD package** contain a tool able to analyse **PDB files** and clean them for further usage, especially with the **AmberTools LEaP program**: the **pdb4amber tool**. The next step of the workflow is running this tool to analyse our **input PDB structure**.For the particular **T4 Lysozyme** example, the most important property identified by the **pdb4amber** utility is the presence of **disulfide bridges** in the structure. Those are marked by changing the residue names **from CYS to CYX**, which is the code that **AMBER force fields** use to distinguish between cysteines forming or not forming **disulfide bridges**. This will be used in the following step to correctly form a **bond** between these cysteine residues. We invite you to check what the tool does with different, more complex structures (e.g. PDB code [6N3V](https://www.rcsb.org/structure/6N3V)). *****Building Blocks** used: - [pdb4amber_run](https://biobb-amber.readthedocs.io/en/latest/pdb4amber.htmlpdb4amber-pdb4amber-run-module) from **biobb_amber.pdb4amber.pdb4amber_run*****
###Code
# Import module
from biobb_amber.pdb4amber.pdb4amber_run import pdb4amber_run
# Create prop dict and inputs/outputs
output_pdb4amber_path = 'structure.pdb4amber.pdb'
# Create and launch bb
pdb4amber_run(input_pdb_path=nobme_pdb,
output_pdb_path=output_pdb4amber_path,
properties=prop)
###Output
_____no_output_____
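###Markdown
As an illustrative check of the renaming described above, the cell below is a simple sketch that scans the PDB file written by the previous cell and lists the cysteine residues with their current names; cysteines involved in disulfide bridges should now appear as CYX.
###Code
# List the (residue name, residue number) pairs of all cysteines in the pdb4amber output
cys_residues = set()
with open(output_pdb4amber_path) as pdb_file:
    for line in pdb_file:
        if line.startswith(('ATOM', 'HETATM')):
            res_name = line[17:20].strip()
            res_num = line[22:26].strip()
            if res_name in ('CYS', 'CYX'):
                cys_residues.add((res_name, res_num))
sorted(cys_residues, key=lambda r: int(r[1]))
###Output
_____no_output_____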
###Markdown
*** Create ligand system topology**Building AMBER topology** corresponding to the ligand structure.Force field used in this tutorial step is **amberGAFF**: [General AMBER Force Field](http://ambermd.org/antechamber/gaff.html), designed for rational drug design.- [Step 1](ligandTopologyStep1): Extract **ligand structure**.- [Step 2](ligandTopologyStep2): Add **hydrogen atoms** if missing.- [Step 3](ligandTopologyStep3): **Energetically minimize the system** with the new hydrogen atoms. - [Step 4](ligandTopologyStep4): Generate **ligand topology** (parameters). *****Building Blocks** used: - [ExtractHeteroAtoms](https://biobb-structure-utils.readthedocs.io/en/latest/utils.htmlmodule-utils.extract_heteroatoms) from **biobb_structure_utils.utils.extract_heteroatoms** - [ReduceAddHydrogens](https://biobb-chemistry.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.reduce_add_hydrogens) from **biobb_chemistry.ambertools.reduce_add_hydrogens** - [BabelMinimize](https://biobb-chemistry.readthedocs.io/en/latest/babelm.htmlmodule-babelm.babel_minimize) from **biobb_chemistry.babelm.babel_minimize** - [AcpypeParamsAC](https://biobb-chemistry.readthedocs.io/en/latest/acpype.htmlmodule-acpype.acpype_params_ac) from **biobb_chemistry.acpype.acpype_params_ac** *** Step 1: Extract **Ligand structure**
###Code
# Create Ligand system topology, STEP 1
# Extracting Ligand JZ4
# Import module
from biobb_structure_utils.utils.extract_heteroatoms import extract_heteroatoms
# Create properties dict and inputs/outputs
ligandFile = ligandCode+'.pdb'
prop = {
'heteroatoms' : [{"name": "JZ4"}]
}
extract_heteroatoms(input_structure_path=output_pdb4amber_path,
output_heteroatom_path=ligandFile,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 2: Add **hydrogen atoms**
###Code
# Create Ligand system topology, STEP 2
# Reduce_add_hydrogens: add Hydrogen atoms to a small molecule (using Reduce tool from Ambertools package)
# Import module
from biobb_chemistry.ambertools.reduce_add_hydrogens import reduce_add_hydrogens
# Create prop dict and inputs/outputs
output_reduce_h = ligandCode+'.reduce.H.pdb'
prop = {
'nuclear' : 'true'
}
# Create and launch bb
reduce_add_hydrogens(input_path=ligandFile,
output_path=output_reduce_h,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 3: **Energetically minimize the system** with the new hydrogen atoms.
###Code
# Create Ligand system topology, STEP 3
# Babel_minimize: Structure energy minimization of a small molecule after being modified adding hydrogen atoms
# Import module
from biobb_chemistry.babelm.babel_minimize import babel_minimize
# Create prop dict and inputs/outputs
output_babel_min = ligandCode+'.H.min.mol2'
prop = {
'method' : 'sd',
'criteria' : '1e-10',
'force_field' : 'GAFF'
}
# Create and launch bb
babel_minimize(input_path=output_reduce_h,
output_path=output_babel_min,
properties=prop)
###Output
_____no_output_____
###Markdown
Visualizing 3D structuresVisualizing the small molecule generated **PDB structures** using **NGL**: - **Original Ligand Structure** (Left)- **Ligand Structure with hydrogen atoms added** (with Reduce program) (Center)- **Ligand Structure with hydrogen atoms added** (with Reduce program), **energy minimized** (with Open Babel) (Right)
###Code
# Show different structures generated (for comparison)
view1 = nglview.show_structure_file(ligandFile)
view1.add_representation(repr_type='ball+stick')
view1._remote_call('setSize', target='Widget', args=['350px','400px'])
view1.camera='orthographic'
view1
view2 = nglview.show_structure_file(output_reduce_h)
view2.add_representation(repr_type='ball+stick')
view2._remote_call('setSize', target='Widget', args=['350px','400px'])
view2.camera='orthographic'
view2
view3 = nglview.show_structure_file(output_babel_min)
view3.add_representation(repr_type='ball+stick')
view3._remote_call('setSize', target='Widget', args=['350px','400px'])
view3.camera='orthographic'
view3
ipywidgets.HBox([view1, view2, view3])
###Output
_____no_output_____
###Markdown
Step 4: Generate **ligand topology** (parameters).
###Code
# Create Ligand system topology, STEP 4
# Acpype_params_gmx: Generation of topologies for AMBER with ACPype
# Import module
from biobb_chemistry.acpype.acpype_params_ac import acpype_params_ac
# Create prop dict and inputs/outputs
output_acpype_inpcrd = ligandCode+'params.inpcrd'
output_acpype_frcmod = ligandCode+'params.frcmod'
output_acpype_lib = ligandCode+'params.lib'
output_acpype_prmtop = ligandCode+'params.prmtop'
output_acpype = ligandCode+'params'
prop = {
'basename' : output_acpype,
'charge' : mol_charge
}
# Create and launch bb
acpype_params_ac(input_path=output_babel_min,
output_path_inpcrd=output_acpype_inpcrd,
output_path_frcmod=output_acpype_frcmod,
output_path_lib=output_acpype_lib,
output_path_prmtop=output_acpype_prmtop,
properties=prop)
###Output
_____no_output_____
###Markdown
*** Create protein-ligand complex system topology**Building AMBER topology** corresponding to the protein-ligand complex structure.*IMPORTANT: the previous pdb4amber building block is changing the proper cysteines residue naming in the PDB file from CYS to CYX so that this step can automatically identify and add the disulfide bonds to the system topology.*The **force field** used in this tutorial is [**ff14SB**](https://doi.org/10.1021/acs.jctc.5b00255) for the **protein**, an evolution of the **ff99SB** force field with improved accuracy of protein side chains and backbone parameters; and the [**gaff**](https://doi.org/10.1002/jcc.20035) force field for the small molecule. **Water** molecules type used in this tutorial is [**tip3p**](https://doi.org/10.1021/jp003020w).Adding **side chain atoms** and **hydrogen atoms** if missing. Forming **disulfide bridges** according to the info added in the previous step. *NOTE: From this point on, the **protein-ligand complex structure and topology** generated can be used in a regular MD setup.*Generating three output files: - **AMBER structure** (PDB file)- **AMBER topology** (AMBER [Parmtop](https://ambermd.org/FileFormats.phptopology) file)- **AMBER coordinates** (AMBER [Coordinate/Restart](https://ambermd.org/FileFormats.phprestart) file) *****Building Blocks** used: - [leap_gen_top](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_gen_top) from **biobb_amber.leap.leap_gen_top*****
###Code
# Import module
from biobb_amber.leap.leap_gen_top import leap_gen_top
# Create prop dict and inputs/outputs
output_pdb_path = 'structure.leap.pdb'
output_top_path = 'structure.leap.top'
output_crd_path = 'structure.leap.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"]
}
# Create and launch bb
leap_gen_top(input_pdb_path=output_pdb4amber_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_pdb_path,
output_top_path=output_top_path,
output_crd_path=output_crd_path,
properties=prop)
###Output
_____no_output_____
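###Markdown
Before visualizing, a rough sanity check of the generated system is possible by counting atoms and residue types in the LEaP output PDB. The cell below is an illustrative sketch based only on the `structure.leap.pdb` file written above.
###Code
# Count atoms and residue names in the LEaP-generated structure
from collections import Counter

atom_count = 0
seen_residues = set()
residue_names = Counter()
with open(output_pdb_path) as pdb_file:
    for line in pdb_file:
        if line.startswith(('ATOM', 'HETATM')):
            atom_count += 1
            res_id = (line[21], line[22:26].strip())   # (chain id, residue number)
            if res_id not in seen_residues:
                seen_residues.add(res_id)
                residue_names[line[17:20].strip()] += 1
print(f"Atoms: {atom_count}, residues: {len(seen_residues)}")
print("Most common residue names:", residue_names.most_common(5))
###Output
_____no_output_____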
###Markdown
Visualizing 3D structureVisualizing the **PDB structure** using **NGL**. Try to identify the differences between the structure generated for the **system topology** and the **original one** (e.g. hydrogen atoms).
###Code
import nglview
import ipywidgets
# Show protein
view = nglview.show_structure_file(output_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein', opacity='0.4')
view.add_representation(repr_type='ball+stick', selection='protein')
view.add_representation(repr_type='ball+stick', radius='0.5', selection='JZ4')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
###Output
_____no_output_____
###Markdown
Energetically minimize the structure**Energetically minimize** the **protein-ligand complex structure** (in vacuo) using the **sander tool** from the **AMBER MD package**. This step is **relaxing the structure**, usually **constrained**, especially when coming from an X-ray **crystal structure**. The **minimization process** is done in two steps:- [Step 1](minv_1): **Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.- [Step 2](minv_2): **System** minimization, applying **position restraints** (500 Kcal/mol.$Å^{2}$) to the **small molecule**.*****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_minout) from **biobb_amber.process.process_minout***** Step 1: Minimize Hydrogens**Hydrogen** minimization, applying **position restraints** (50 Kcal/mol.$Å^{2}$) to the **protein heavy atoms**.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_h_min_traj_path = 'sander.h_min.x'
output_h_min_rst_path = 'sander.h_min.rst'
output_h_min_log_path = 'sander.h_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'ntr' : 1,
'restraintmask' : '\":*&!@H=\"',
'restraint_wt' : 50.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_crd_path,
input_ref_path=output_crd_path,
output_traj_path=output_h_min_traj_path,
output_rst_path=output_h_min_rst_path,
output_log_path=output_h_min_log_path,
properties=prop)
###Output
_____no_output_____
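###Markdown
*Note (added sketch):* the `restraintmask` value `:*&!@H=` is an **AMBER mask** meaning "all residues, excluding hydrogen atoms". If **ParmEd** (AmberTools) is available, the selection can be checked against the topology generated above before launching the minimization.
###Code
# Sketch: check how many atoms the restraint mask selects
# (assumes ParmEd is installed; mask syntax as used in the prop dict above)
import parmed
from parmed.amber import AmberMask

parm = parmed.load_file(output_top_path)
mask = AmberMask(parm, ':*&!@H=')             # all residues, minus hydrogens
n_selected = sum(1 for _ in mask.Selected())  # indices of restrained (heavy) atoms
print('Atoms matched by the restraint mask:', n_selected, 'of', len(parm.atoms))
###Output
_____no_output_____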
###Markdown
Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_h_min_dat_path = 'sander.h_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_h_min_log_path,
output_dat_path=output_h_min_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_h_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
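###Markdown
*Alternative parsing (added sketch):* the two-column `.dat` files written by **process_minout**/**process_mdout** can also be read with **NumPy**, which is a bit more compact than the list comprehension used above. The helper below assumes the same layout (comment lines starting with `#` or `@`, then two numeric columns) and applies the same cutoff used in the plots.
###Code
# Sketch: compact NumPy-based parsing of the two-column .dat files used above
import numpy as np

def read_xy(dat_path, y_cutoff=1000.0):
    data = np.atleast_2d(np.loadtxt(dat_path, comments=('#', '@')))
    x, y = data[:, 0], data[:, 1]
    keep = y < y_cutoff            # same filter as in the plotting cells
    return x[keep].tolist(), y[keep].tolist()

x_np, y_np = read_xy(output_h_min_dat_path)
print(len(x_np), 'points read')
###Output
_____no_output_____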
###Markdown
Step 2: Minimize the system**System** minimization, with **restraints** only on the **small molecule**, to avoid a possible change in position due to **protein repulsion**.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_n_min_traj_path = 'sander.n_min.x'
output_n_min_rst_path = 'sander.n_min.rst'
output_n_min_log_path = 'sander.n_min.log'
prop = {
'simulation_type' : "min_vacuo",
"mdin" : {
'maxcyc' : 500,
'ntpr' : 5,
'restraintmask' : '\":' + ligandCode + '\"',
'restraint_wt' : 500.0
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_traj_path=output_n_min_traj_path,
output_rst_path=output_n_min_rst_path,
output_log_path=output_n_min_log_path,
properties=prop)
###Output
_____no_output_____
###Markdown
Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** by time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_n_min_dat_path = 'sander.n_min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_n_min_log_path,
output_dat_path=output_n_min_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_n_min_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Create solvent box and solvating the systemDefine the unit cell for the **protein structure MD system** to fill it with water molecules.A **truncated octahedron box** is used to define the unit cell, with a **distance from the protein to the box edge of 9.0 Angstroms**. The solvent type used is the default **TIP3P** water model, a generic 3-point solvent model.*****Building Blocks** used: - [amber_to_pdb](https://biobb-amber.readthedocs.io/en/latest/ambpdb.htmlmodule-ambpdb.amber_to_pdb) from **biobb_amber.ambpdb.amber_to_pdb** - [leap_solvate](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_solvate) from **biobb_amber.leap.leap_solvate** *** Getting minimized structureGetting the result of the **energetic minimization** and converting it to **PDB format** to be then used as input for the **water box generation**. This is achieved by converting from **AMBER topology + coordinates** files to a **PDB file** using the **ambpdb** tool from the **AMBER MD package**.
###Code
# Import module
from biobb_amber.ambpdb.amber_to_pdb import amber_to_pdb
# Create prop dict and inputs/outputs
output_ambpdb_path = 'structure.ambpdb.pdb'
# Create and launch bb
amber_to_pdb(input_top_path=output_top_path,
input_crd_path=output_h_min_rst_path,
output_pdb_path=output_ambpdb_path)
###Output
_____no_output_____
###Markdown
Create water boxDefine the **unit cell** for the **protein-ligand complex structure MD system** and fill it with **water molecules**.
###Code
# Import module
from biobb_amber.leap.leap_solvate import leap_solvate
# Create prop dict and inputs/outputs
output_solv_pdb_path = 'structure.solv.pdb'
output_solv_top_path = 'structure.solv.parmtop'
output_solv_crd_path = 'structure.solv.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"water_type": "TIP3PBOX",
"distance_to_molecule": "9.0",
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_solvate(input_pdb_path=output_ambpdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_solv_pdb_path,
output_top_path=output_solv_top_path,
output_crd_path=output_solv_crd_path,
properties=prop)
###Output
_____no_output_____
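###Markdown
*Optional check (added sketch):* a rough way to verify the **solvation step** is to count the **water molecules** written to the solvated PDB, relying only on the standard **WAT** residue naming used by leap.
###Code
# Sketch: count water molecules in the solvated PDB (one oxygen atom per WAT residue)
with open(output_solv_pdb_path) as pdb_file:
    n_waters = sum(1 for line in pdb_file
                   if line.startswith(('ATOM', 'HETATM'))
                   and line[17:20].strip() == 'WAT'
                   and line[12:16].strip() in ('O', 'OW'))
print('Water molecules added by leap_solvate:', n_waters)
###Output
_____no_output_____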
###Markdown
Adding ions**Neutralizing** the system and adding an additional **ionic concentration** using the **leap tool** from the **AMBER MD package**. Using **Sodium (Na+)** and **Chloride (Cl-)** counterions and an **additional ionic concentration** of 150mM.*****Building Blocks** used: - [leap_add_ions](https://biobb-amber.readthedocs.io/en/latest/leap.htmlmodule-leap.leap_add_ions) from **biobb_amber.leap.leap_add_ions*****
###Code
# Import module
from biobb_amber.leap.leap_add_ions import leap_add_ions
# Create prop dict and inputs/outputs
output_ions_pdb_path = 'structure.ions.pdb'
output_ions_top_path = 'structure.ions.parmtop'
output_ions_crd_path = 'structure.ions.crd'
prop = {
"forcefield" : ["protein.ff14SB","gaff"],
"neutralise" : True,
"positive_ions_type": "Na+",
"negative_ions_type": "Cl-",
"ionic_concentration" : 150, # 150mM
"box_type": "truncated_octahedron"
}
# Create and launch bb
leap_add_ions(input_pdb_path=output_solv_pdb_path,
input_lib_path=output_acpype_lib,
input_frcmod_path=output_acpype_frcmod,
output_pdb_path=output_ions_pdb_path,
output_top_path=output_ions_top_path,
output_crd_path=output_ions_crd_path,
properties=prop)
###Output
_____no_output_____
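###Markdown
*Optional check (added sketch):* similarly, the number of **counterions** added by **leap_add_ions** can be counted from the resulting PDB, using the **Na+** and **Cl-** residue names.
###Code
# Sketch: count the Na+ / Cl- counterions written to the ionized PDB
from collections import Counter

with open(output_ions_pdb_path) as pdb_file:
    ion_counts = Counter(line[17:20].strip() for line in pdb_file
                         if line.startswith(('ATOM', 'HETATM'))
                         and line[17:20].strip() in ('Na+', 'Cl-'))
print('Counterions added:', dict(ion_counts))
###Output
_____no_output_____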
###Markdown
Visualizing 3D structureVisualizing the **protein-ligand complex system** with the newly added **solvent box** and **counterions** using **NGL**. Note the **truncated octahedron box** filled with **water molecules** surrounding the **protein structure**, as well as the randomly placed **positive** (Na+, blue) and **negative** (Cl-, gray) **counterions**.
###Code
# Show protein
view = nglview.show_structure_file(output_ions_pdb_path)
view.clear_representations()
view.add_representation(repr_type='cartoon', selection='protein')
view.add_representation(repr_type='ball+stick', selection='solvent')
view.add_representation(repr_type='spacefill', selection='Cl- Na+')
view._remote_call('setSize', target='Widget', args=['','600px'])
view
###Output
_____no_output_____
###Markdown
Energetically minimize the system**Energetically minimize** the **system** (protein structure + ligand + solvent + ions) using the **sander tool** from the **AMBER MD package**. **Restraining heavy atoms** with a force constant of 15 Kcal/mol.$Å^{2}$ to their initial positions.- [Step 1](emStep1): Energetically minimize the **system** through 500 minimization cycles.- [Step 2](emStep2): Checking **energy minimization** results. Plotting energy by time during the **minimization** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.html#module-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_minout](https://biobb-amber.readthedocs.io/en/latest/process.html#module-process.process_minout) from **biobb_amber.process.process_minout***** Step 1: Running Energy MinimizationThe **minimization** type of the **simulation_type property** contains the main default parameters to run an **energy minimization**:- imin = 1 ; Minimization flag, perform an energy minimization.- maxcyc = 500; The maximum number of cycles of minimization.- ntb = 1; Periodic boundaries: constant volume.- ntmin = 2; Minimization method: steepest descent.In this particular example, the method used to run the **energy minimization** is the default **steepest descent**, with a **maximum number of 500 cycles** by default (reduced to **300 cycles** in the cell below, for the sake of time) and **periodic conditions**.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_min_traj_path = 'sander.min.x'
output_min_rst_path = 'sander.min.rst'
output_min_log_path = 'sander.min.log'
prop = {
"simulation_type" : "minimization",
"mdin" : {
'maxcyc' : 300, # Reducing the number of minimization steps for the sake of time
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
'restraint_wt' : 15.0 # With a force constant of 15 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_ions_crd_path,
input_ref_path=output_ions_crd_path,
output_traj_path=output_min_traj_path,
output_rst_path=output_min_rst_path,
output_log_path=output_min_log_path,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 2: Checking Energy Minimization resultsChecking **energy minimization** results. Plotting **potential energy** along time during the **minimization process**.
###Code
# Import module
from biobb_amber.process.process_minout import process_minout
# Create prop dict and inputs/outputs
output_dat_path = 'sander.min.energy.dat'
prop = {
"terms" : ['ENERGY']
}
# Create and launch bb
process_minout(input_log_path=output_min_log_path,
output_dat_path=output_dat_path,
properties=prop)
#Read data from file and filter out energy values higher than 1000 kcal/mol
with open(output_dat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Energy Minimization",
xaxis=dict(title = "Energy Minimization Step"),
yaxis=dict(title = "Potential Energy kcal/mol")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
Heating the system**Warming up** the **prepared system** using the **sander tool** from the **AMBER MD package**. Going from 0 to the desired **temperature**, in this particular example, 300K. **Solute atoms restrained** (force constant of 10 Kcal/mol). Length 5ps.***- [Step 1](heatStep1): Warming up the **system** through 500 MD steps.- [Step 2](heatStep2): Checking results for the **system warming up**. Plotting **temperature** along time during the **heating** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout***** Step 1: Warming up the systemThe **heat** type of the **simulation_type property** contains the main default parameters to run a **system warming up**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- tempi = 0.0; Initial temperature (0 K)- temp0 = 300.0; Final temperature (300 K)- irest = 0; No restart from previous simulation- ntb = 1; Periodic boundary conditions at constant volume- gamma_ln = 1.0; Collision frequency for Langevin dynamics (in 1/ps)In this particular example, the **heating** of the system is done in **2500 steps** (5ps) and is going **from 0K to 300K** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_heat_traj_path = 'sander.heat.netcdf'
output_heat_rst_path = 'sander.heat.rst'
output_heat_log_path = 'sander.heat.log'
prop = {
"simulation_type" : "heat",
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+\"', # Restraining solute
'restraint_wt' : 10.0 # With a force constant of 10 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_min_rst_path,
input_ref_path=output_min_rst_path,
output_traj_path=output_heat_traj_path,
output_rst_path=output_heat_rst_path,
output_log_path=output_heat_log_path,
properties=prop)
###Output
_____no_output_____
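###Markdown
*For reference (added sketch):* the **heat** `simulation_type` defaults listed above, combined with the `mdin` overrides of the previous cell, roughly translate into the `&cntrl` namelist printed below. The building block writes its own input file internally, so the exact file may differ in formatting and additional keywords.
###Code
# Sketch of the sander &cntrl namelist implied by the "heat" defaults plus the
# mdin overrides above (values taken from the text; for illustration only)
mdin_sketch = """
 &cntrl
  imin=0, ntx=5, irest=0,
  nstlim=2500, dt=0.002,
  tempi=0.0, temp0=300.0,
  ntt=3, gamma_ln=1.0, ig=-1,
  ntc=2, ntf=2, cut=10.0,
  ntb=1, ioutfm=1, iwrap=1,
  ntr=1, restraint_wt=10.0,
  restraintmask='!:WAT,Cl-,Na+',
 /
"""
print(mdin_sketch)
###Output
_____no_output_____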
###Markdown
Step 2: Checking results from the system warming upChecking **system warming up** output. Plotting **temperature** along time during the **heating process**.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_heat_path = 'sander.md.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_heat_log_path,
output_dat_path=output_dat_heat_path,
properties=prop)
#Read temperature data from file and filter out values higher than 1000 K
with open(output_dat_heat_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Heating process",
xaxis=dict(title = "Heating Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Equilibrate the system (NVT)Equilibrate the **protein-ligand complex system** in **NVT ensemble** (constant Number of particles, Volume and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.- [Step 1](eqNVTStep1): Equilibrate the **protein system** with **NVT** ensemble.- [Step 2](eqNVTStep2): Checking **NVT Equilibration** results. Plotting **system temperature** by time during the **NVT equilibration** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** *** Step 1: Equilibrating the system (NVT)The **nvt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NVT ensemble**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- irest = 1; Restart previous simulation- ntb = 1; Periodic boundary conditions at constant volume- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)In this particular example, the **NVT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_nvt_traj_path = 'sander.nvt.netcdf'
output_nvt_rst_path = 'sander.nvt.rst'
output_nvt_log_path = 'sander.nvt.log'
prop = {
"simulation_type" : 'nvt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 5.0 # With a force constant of 5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_heat_rst_path,
input_ref_path=output_heat_rst_path,
output_traj_path=output_nvt_traj_path,
output_rst_path=output_nvt_rst_path,
output_log_path=output_nvt_log_path,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 2: Checking NVT Equilibration resultsChecking **NVT Equilibration** results. Plotting **system temperature** by time during the NVT equilibration process.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_nvt_path = 'sander.md.nvt.temp.dat'
prop = {
"terms" : ['TEMP']
}
# Create and launch bb
process_mdout(input_log_path=output_nvt_log_path,
output_dat_path=output_dat_nvt_path,
properties=prop)
#Read temperature data from file and filter out values higher than 1000 K
with open(output_dat_nvt_path,'r') as energy_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in energy_file
if not line.startswith(("#","@"))
if float(line.split()[1]) < 1000
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="NVT equilibration",
xaxis=dict(title = "Equilibration Step (ps)"),
yaxis=dict(title = "Temperature (K)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
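###Markdown
*Quick convergence check (added sketch):* using the `x`, `y` lists parsed in the previous cell, the mean and spread of the **temperature** over the second half of the **NVT run** give a simple indication of how close the system stays to the 300K target.
###Code
# Sketch: mean / standard deviation of the temperature over the second half of the NVT run
import numpy as np

second_half = y[len(y)//2:]
print('Mean temperature (2nd half): %.1f K, std: %.1f K'
      % (np.mean(second_half), np.std(second_half)))
###Output
_____no_output_____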
###Markdown
*** Equilibrate the system (NPT)Equilibrate the **protein-ligand complex system** in **NPT ensemble** (constant Number of particles, Pressure and Temperature). Protein **heavy atoms** will be restrained using position restraining forces: movement is permitted, but only after overcoming a substantial energy penalty. The utility of position restraints is that they allow us to equilibrate our solvent around our protein, without the added variable of structural changes in the protein.- [Step 1](eqNPTStep1): Equilibrate the **protein system** with **NPT** ensemble.- [Step 2](eqNPTStep2): Checking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NVT equilibration** process. *****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** *** Step 1: Equilibrating the system (NPT)The **npt** type of the **simulation_type property** contains the main default parameters to run a **system equilibration in NPT ensemble**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)- irest = 1; Restart previous simulation- gamma_ln = 5.0; Collision frequency for Langevin dynamics (in 1/ps)- pres0 = 1.0; Reference pressure- ntp = 1; Constant pressure dynamics: md with isotropic position scaling- taup = 2.0; Pressure relaxation time (in ps)In this particular example, the **NPT equilibration** of the system is done in **500 steps** (note that the number of steps has been reduced in this tutorial, for the sake of time).
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_npt_traj_path = 'sander.npt.netcdf'
output_npt_rst_path = 'sander.npt.rst'
output_npt_log_path = 'sander.npt.log'
prop = {
"simulation_type" : 'npt',
"mdin" : {
'nstlim' : 500, # Reducing the number of steps for the sake of time (1ps)
'ntr' : 1, # Overwritting restrain parameter
'restraintmask' : '\"!:WAT,Cl-,Na+ & !@H=\"', # Restraining solute heavy atoms
'restraint_wt' : 2.5 # With a force constant of 2.5 Kcal/mol*A2
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_nvt_rst_path,
input_ref_path=output_nvt_rst_path,
output_traj_path=output_npt_traj_path,
output_rst_path=output_npt_rst_path,
output_log_path=output_npt_log_path,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 2: Checking NPT Equilibration resultsChecking **NPT Equilibration** results. Plotting **system pressure and density** by time during the **NPT equilibration** process.
###Code
# Import module
from biobb_amber.process.process_mdout import process_mdout
# Create prop dict and inputs/outputs
output_dat_npt_path = 'sander.md.npt.dat'
prop = {
"terms" : ['PRES','DENSITY']
}
# Create and launch bb
process_mdout(input_log_path=output_npt_log_path,
output_dat_path=output_dat_npt_path,
properties=prop)
# Read pressure and density data from file
with open(output_dat_npt_path,'r') as pd_file:
x,y,z = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]),float(line.split()[2]))
for line in pd_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
trace1 = go.Scatter(
x=x,y=y
)
trace2 = go.Scatter(
x=x,y=z
)
fig = subplots.make_subplots(rows=1, cols=2, print_grid=False)
fig.append_trace(trace1, 1, 1)
fig.append_trace(trace2, 1, 2)
fig['layout']['xaxis1'].update(title='Time (ps)')
fig['layout']['xaxis2'].update(title='Time (ps)')
fig['layout']['yaxis1'].update(title='Pressure (bar)')
fig['layout']['yaxis2'].update(title='Density (Kg*m^-3)')
fig['layout'].update(title='Pressure and Density during NPT Equilibration')
fig['layout'].update(showlegend=False)
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Free Molecular Dynamics SimulationUpon completion of the **two equilibration phases (NVT and NPT)**, the system is now well-equilibrated at the desired temperature and pressure. The **position restraints** can now be released. The last step of the **protein** MD setup is a short, **free MD simulation**, to ensure the robustness of the system. - [Step 1](mdStep1): Run short MD simulation of the **protein system**.- [Step 2](mdStep2): Checking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step.*****Building Blocks** used: - [sander_mdrun](https://biobb-amber.readthedocs.io/en/latest/sander.htmlmodule-sander.sander_mdrun) from **biobb_amber.sander.sander_mdrun** - [process_mdout](https://biobb-amber.readthedocs.io/en/latest/process.htmlmodule-process.process_mdout) from **biobb_amber.process.process_mdout** - [cpptraj_rms](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_rms) from **biobb_analysis.cpptraj.cpptraj_rms** - [cpptraj_rgyr](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_rgyr) from **biobb_analysis.cpptraj.cpptraj_rgyr***** Step 1: Creating portable binary run file to run a free MD simulationThe **free** type of the **simulation_type property** contains the main default parameters to run an **unrestrained MD simulation**:- imin = 0; Run MD (no minimization)- ntx = 5; Read initial coords and vels from restart file- cut = 10.0; Cutoff for non bonded interactions in Angstroms- ntr = 0; No restrained atoms- ntc = 2; SHAKE for constraining length of bonds involving Hydrogen atoms- ntf = 2; Bond interactions involving H omitted- ntt = 3; Constant temperature using Langevin dynamics- ig = -1; Seed for pseudo-random number generator- ioutfm = 1; Write trajectory in netcdf format- iwrap = 1; Wrap coords into primary box- nstlim = 5000; Number of MD steps - dt = 0.002; Time step (in ps)In this particular example, a short, **5ps-length** simulation (2500 steps) is run, for the sake of time.
###Code
# Import module
from biobb_amber.sander.sander_mdrun import sander_mdrun
# Create prop dict and inputs/outputs
output_free_traj_path = 'sander.free.netcdf'
output_free_rst_path = 'sander.free.rst'
output_free_log_path = 'sander.free.log'
prop = {
"simulation_type" : 'free',
"mdin" : {
'nstlim' : 2500, # Reducing the number of steps for the sake of time (5ps)
'ntwx' : 500 # Print coords to trajectory every 500 steps (1 ps)
}
}
# Create and launch bb
sander_mdrun(input_top_path=output_ions_top_path,
input_crd_path=output_npt_rst_path,
output_traj_path=output_free_traj_path,
output_rst_path=output_free_rst_path,
output_log_path=output_free_log_path,
properties=prop)
###Output
_____no_output_____
###Markdown
Step 2: Checking free MD simulation resultsChecking results for the final step of the setup process, the **free MD run**. Plotting **Root Mean Square deviation (RMSd)** and **Radius of Gyration (Rgyr)** by time during the **free MD run** step. **RMSd** against the **experimental structure** (input structure of the pipeline) and against the **minimized and equilibrated structure** (output structure of the NPT equilibration step).
###Code
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against minimized and equilibrated snapshot (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_first = pdbCode+'_rms_first.dat'
prop = {
'mask': 'backbone',
'reference': 'first'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_first,
properties=prop)
# cpptraj_rms: Computing Root Mean Square deviation to analyse structural stability
# RMSd against experimental structure (backbone atoms)
# Import module
from biobb_analysis.ambertools.cpptraj_rms import cpptraj_rms
# Create prop dict and inputs/outputs
output_rms_exp = pdbCode+'_rms_exp.dat'
prop = {
'mask': 'backbone',
'reference': 'experimental'
}
# Create and launch bb
cpptraj_rms(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rms_exp,
input_exp_path=output_pdb_path,
properties=prop)
# Read RMS vs first snapshot data from file
with open(output_rms_first,'r') as rms_first_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_first_file
if not line.startswith(("#","@"))
])
)
# Read RMS vs experimental structure data from file
with open(output_rms_exp,'r') as rms_exp_file:
x2,y2 = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rms_exp_file
if not line.startswith(("#","@"))
])
)
trace1 = go.Scatter(
x = x,
y = y,
name = 'RMSd vs first'
)
trace2 = go.Scatter(
x = x,
y = y2,
name = 'RMSd vs exp'
)
data = [trace1, trace2]
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": data,
"layout": go.Layout(title="RMSd during free MD Simulation",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "RMSd (Angstrom)")
)
}
plotly.offline.iplot(fig)
# cpptraj_rgyr: Computing Radius of Gyration to measure the protein compactness during the free MD simulation
# Import module
from biobb_analysis.ambertools.cpptraj_rgyr import cpptraj_rgyr
# Create prop dict and inputs/outputs
output_rgyr = pdbCode+'_rgyr.dat'
prop = {
'mask': 'backbone'
}
# Create and launch bb
cpptraj_rgyr(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_rgyr,
properties=prop)
# Read Rgyr data from file
with open(output_rgyr,'r') as rgyr_file:
x,y = map(
list,
zip(*[
(float(line.split()[0]),float(line.split()[1]))
for line in rgyr_file
if not line.startswith(("#","@"))
])
)
plotly.offline.init_notebook_mode(connected=True)
fig = {
"data": [go.Scatter(x=x, y=y)],
"layout": go.Layout(title="Radius of Gyration",
xaxis=dict(title = "Time (ps)"),
yaxis=dict(title = "Rgyr (Angstrom)")
)
}
plotly.offline.iplot(fig)
###Output
_____no_output_____
###Markdown
*** Post-processing and Visualizing resulting 3D trajectoryPost-processing and Visualizing the **protein system** MD setup **resulting trajectory** using **NGL**- [Step 1](ppStep1): *Imaging* the resulting trajectory, **stripping out water molecules and ions** and **correcting periodicity issues**.- [Step 2](ppStep3): Visualizing the *imaged* trajectory using the *dry* structure as a **topology**. *****Building Blocks** used: - [cpptraj_image](https://biobb-analysis.readthedocs.io/en/latest/ambertools.htmlmodule-ambertools.cpptraj_image) from **biobb_analysis.cpptraj.cpptraj_image** *** Step 1: *Imaging* the resulting trajectory.Stripping out **water molecules and ions** and **correcting periodicity issues**
###Code
# cpptraj_image: "Imaging" the resulting trajectory
# Removing water molecules and ions from the resulting structure
# Import module
from biobb_analysis.ambertools.cpptraj_image import cpptraj_image
# Create prop dict and inputs/outputs
output_imaged_traj = pdbCode+'_imaged_traj.trr'
prop = {
'mask': 'solute',
'format': 'trr'
}
# Create and launch bb
cpptraj_image(input_top_path=output_ions_top_path,
input_traj_path=output_free_traj_path,
output_cpptraj_path=output_imaged_traj,
properties=prop)
###Output
_____no_output_____
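###Markdown
*For reference (added sketch):* the imaging step above is roughly equivalent to a plain **cpptraj** input like the one printed below (strip solvent and counterions, fix periodicity, write a **trr** trajectory). The building block generates its own cpptraj script internally, so details may differ.
###Code
# Sketch of a cpptraj input roughly equivalent to the cpptraj_image step above
cpptraj_sketch = (
    "parm " + output_ions_top_path + "\n"
    "trajin " + output_free_traj_path + "\n"
    "autoimage\n"
    "strip :WAT,Cl-,Na+\n"
    "trajout " + output_imaged_traj + " trr\n"
)
print(cpptraj_sketch)
###Output
_____no_output_____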
###Markdown
Step 2: Visualizing the generated dehydrated trajectory.Using the **imaged trajectory** (output of the [Post-processing step 1](ppStep1)) with the **dry structure** (output of the **amber_to_pdb** step above) as a **topology** to visualize the **protein-ligand complex** trajectory with **NGL**.
###Code
# Show trajectory
view = nglview.show_simpletraj(nglview.SimpletrajTrajectory(output_imaged_traj, output_ambpdb_path), gui=True)
view.clear_representations()
view.add_representation('cartoon', color='sstruc')
view.add_representation('licorice', selection='JZ4', color='element', radius=1)
view
###Output
_____no_output_____ |
license_recognition.ipynb | ###Markdown
Object Recognition using CNN model
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
#detecting license plate on the vehicle
plateCascade = cv2.CascadeClassifier('indian_license_plate.xml')
#detect the plate and return car + plate image
def plate_detect(img):
plateImg = img.copy()
roi = img.copy()
plateRect = plateCascade.detectMultiScale(plateImg,scaleFactor = 1.2, minNeighbors = 7)
for (x,y,w,h) in plateRect:
roi_ = roi[y:y+h, x:x+w, :]
plate_part = roi[y:y+h, x:x+w, :]
cv2.rectangle(plateImg,(x+2,y),(x+w-3, y+h-5),(0,255,0),3)
return plateImg, plate_part
#normal function to display
def display_img(img):
img_ = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
plt.imshow(img_)
plt.show()
#test image is used for detecting plate
inputImg = cv2.imread('car.jpg')
inpImg, plate = plate_detect(inputImg)
display_img(inpImg)
display_img(plate)
###Output
_____no_output_____
###Markdown
Now we extract every character from the detected plate
###Code
def find_contours(dimensions, img) :
#finding all contours in the image using
#retrieval mode: RETR_TREE
#contour approximation method: CHAIN_APPROX_SIMPLE
cntrs, _ = cv2.findContours(img.copy(), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
#Approx dimensions of the contours
lower_width = dimensions[0]
upper_width = dimensions[1]
lower_height = dimensions[2]
upper_height = dimensions[3]
#Keep only the 15 largest contours as license plate character candidates
cntrs = sorted(cntrs, key=cv2.contourArea, reverse=True)[:15]
ci = cv2.imread('contour.jpg')
x_cntr_list = []
target_contours = []
img_res = []
for cntr in cntrs :
#get the coordinates of the bounding rectangle enclosing the contour in the binary image
intX, intY, intWidth, intHeight = cv2.boundingRect(cntr)
#checking the dimensions of the contour to filter out the characters by contour's size
if intWidth > lower_width and intWidth < upper_width and intHeight > lower_height and intHeight < upper_height :
x_cntr_list.append(intX)
char_copy = np.zeros((44,24))
#extracting each character using the enclosing rectangle's coordinates.
char = img[intY:intY+intHeight, intX:intX+intWidth]
char = cv2.resize(char, (20, 40))
cv2.rectangle(ci, (intX,intY), (intWidth+intX, intY+intHeight), (50,21,200), 2)
plt.imshow(ci, cmap='gray')
char = cv2.subtract(255, char)
char_copy[2:42, 2:22] = char
char_copy[0:2, :] = 0
char_copy[:, 0:2] = 0
char_copy[42:44, :] = 0
char_copy[:, 22:24] = 0
img_res.append(char_copy) # List that stores the character's binary image (unsorted)
#return characters in ascending order with respect to the x-coordinate
plt.show()
#sort the character indices left-to-right using their x-coordinates
indices = sorted(range(len(x_cntr_list)), key=lambda k: x_cntr_list[k])
img_res_copy = []
for idx in indices:
img_res_copy.append(img_res[idx])# stores character images according to their index
img_res = np.array(img_res_copy)
return img_res
def segment_characters(image) :
#pre-processing cropped image of plate
#threshold: convert to pure black & white with sharp edges
#erode: grow the black background slightly
#dilate: thicken the white character strokes slightly
img_lp = cv2.resize(image, (333, 75))
img_gray_lp = cv2.cvtColor(img_lp, cv2.COLOR_BGR2GRAY)
_, img_binary_lp = cv2.threshold(img_gray_lp, 200, 255, cv2.THRESH_BINARY+cv2.THRESH_OTSU)
img_binary_lp = cv2.erode(img_binary_lp, (3,3))
img_binary_lp = cv2.dilate(img_binary_lp, (3,3))
LP_WIDTH = img_binary_lp.shape[0]  # shape[0] = number of rows (75 after the resize above)
LP_HEIGHT = img_binary_lp.shape[1]  # shape[1] = number of columns (333 after the resize above)
img_binary_lp[0:3,:] = 255
img_binary_lp[:,0:3] = 255
img_binary_lp[72:75,:] = 255
img_binary_lp[:,330:333] = 255
#estimations of character contours sizes of cropped license plates
dimensions = [LP_WIDTH/6,
LP_WIDTH/2,
LP_HEIGHT/10,
2*LP_HEIGHT/3]
plt.imshow(img_binary_lp, cmap='gray')
plt.show()
cv2.imwrite('contour.jpg',img_binary_lp)
#getting contours
char_list = find_contours(dimensions, img_binary_lp)
return char_list
char = segment_characters(plate)
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(char[i], cmap='gray')
plt.axis('off')
import keras.backend as K
import tensorflow as tf
from sklearn.metrics import f1_score
from keras import optimizers
from keras.models import Sequential
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Dense, Flatten, MaxPooling2D, Dropout, Conv2D
train_datagen = ImageDataGenerator(rescale=1./255, width_shift_range=0.1, height_shift_range=0.1)
path = 'data/data/'
train_generator = train_datagen.flow_from_directory(
path+'/train',
target_size=(28,28),
batch_size=1,
class_mode='sparse')
validation_generator = train_datagen.flow_from_directory(
path+'/val',
target_size=(28,28),
class_mode='sparse')
#It is the harmonic mean of precision and recall
#Output range is [0, 1]
#Works for both multi-class and multi-label classification
def f1score(y, y_pred):
return f1_score(y, tf.math.argmax(y_pred, axis=1), average='micro')
def custom_f1score(y, y_pred):
return tf.py_function(f1score, (y, y_pred), tf.double)
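#Added note: F1 = 2 * precision * recall / (precision + recall); with
#average='micro' on single-label multi-class predictions (argmax over the 36
#classes) the micro-averaged F1 is numerically the same as plain accuracy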
K.clear_session()
model = Sequential()
model.add(Conv2D(16, (22,22), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(32, (16,16), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (8,8), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (4,4), input_shape=(28, 28, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(4, 4)))
model.add(Dropout(0.4))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(36, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer=optimizers.Adam(lr=0.0001), metrics=[custom_f1score])
class stop_training_callback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('val_custom_f1score') > 0.99):
self.model.stop_training = True
batch_size = 1
callbacks = [stop_training_callback()]
model.fit_generator(
train_generator,
steps_per_epoch = train_generator.samples // batch_size,
validation_data = validation_generator,
epochs = 80, verbose=1, callbacks=callbacks)
def fix_dimension(img):
new_img = np.zeros((28,28,3))
for i in range(3):
new_img[:,:,i] = img
return new_img
def show_results():
dic = {}
characters = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
for i,c in enumerate(characters):
dic[i] = c
output = []
for i,ch in enumerate(char):
img_ = cv2.resize(ch, (28,28), interpolation=cv2.INTER_AREA)
img = fix_dimension(img_)
img = img.reshape(1,28,28,3)
y_ = model.predict_classes(img)[0]
character = dic[y_] #
output.append(character)
plate_number = ''.join(output)
return plate_number
final_plate = show_results()
print(final_plate)
import requests
import xmltodict
import json
def get_vehicle_info(plate_number):
r = requests.get("http://www.regcheck.org.uk/api/reg.asmx/CheckIndia?RegistrationNumber={0}&username=geerling".format(str(plate_number)))
data = xmltodict.parse(r.content)
jdata = json.dumps(data)
df = json.loads(jdata)
df1 = json.loads(df['Vehicle']['vehicleJson'])
return df1
get_vehicle_info(final_plate)
model.save('license_plate_character.pkl')
get_vehicle_info('WB06F5977')
###Output
_____no_output_____ |
docs/source/Tutorials/archive/Tutorial 2 - Organization of annotated data.ipynb | ###Markdown
In this notebook, we demonstrate the basics of how data is organized inside of a trial object. First, an already existing trial object is loaded:
###Code
# import trial object to use as example
trialobj = pkg.import_obj(pkl_path='/home/pshah/Documents/code/packerlabimaging/tests/2020-12-19_t-013.pkl')
###Output
|- Loaded (RL109 t-013) TwoPhotonImagingTrial.alloptical experimental trial object, last saved: Sat Jan 22 04:01:26 2022
###Markdown
The AnnData library is used to store data in an efficient, multi-functional format. This is stored under: `trialobject.data` . The AnnData object is built around the raw Flu matrix of each `trialobject` . In keeping with AnnData conventions, the data structure is organized in *n* observations (obs) x *m* variables (var), where observations are suite2p ROIs and variables are imaging frame timepoints.
###Code
display.Image("/home/pshah/Documents/code/packerlabimaging/files/packerlabimaging-anndata-integration-01.jpg")
trialobj.data # this is the anndata object for this trial
###Output
_____no_output_____
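###Markdown
A quick way to see the *n* observations x *m* variables organization in practice is to slice the AnnData object directly (a short added example; slicing returns a lightweight view, with suite2p ROIs on the first axis and imaging frames on the second).
###Code
# Added example: slicing the AnnData object (first 5 ROIs, first 100 imaging frames)
sub = trialobj.data[:5, :100]
print(sub.shape)    # expected: (5, 100)
print(sub.X.shape)  # the raw Flu matrix of the view has the same shape
###Output
_____no_output_____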
###Markdown
storage of Flu data The raw data is stored in `.X`
###Code
print(trialobj.data.X)
print('shape: ', trialobj.data.X.shape)
###Output
[[184.1814 317.08673 290.3114 ... 226.17331 381.18915
148.0913 ]
[385.88657 137.67667 143.93521 ... -6.068924 134.3343
297.7108 ]
[ 60.864273 261.0147 104.70546 ... 129.49275 312.83362
94.42384 ]
...
[154.93796 5.2811584 299.87506 ... 57.443832 185.83585
91.68779 ]
[ 39.04761 72.97851 47.687088 ... 52.206535 107.23722
78.30358 ]
[ 68.78638 149.34642 123.59329 ... 129.84854 101.720566
116.995895 ]]
shape: (640, 16368)
###Markdown
Processed data is added to `trialobj.data` as a unique `layers` key.
###Code
trialobj.data.layers
print(trialobj.data.layers['dFF'])
print('shape: ', trialobj.data.layers['dFF'].shape)
###Output
[[ 2.49849148e+01 1.15174057e+02 9.70044022e+01 ... 5.34804955e+01
1.58673752e+02 4.94285583e-01]
[ 4.89079651e+02 1.10171936e+02 1.19725990e+02 ... -1.09264587e+02
1.05069611e+02 3.54473907e+02]
[ 4.28867249e+02 2.16803198e+03 8.09815979e+02 ... 1.02519995e+03
2.61830151e+03 7.20476013e+02]
...
[ 1.07392031e+04 2.69461121e+02 2.08787637e+04 ... 3.91867554e+03
1.29007676e+04 6.31432617e+03]
[ 6.17029190e+01 2.02216629e+02 9.74804840e+01 ... 1.16196297e+02
3.44087921e+02 2.24268677e+02]
[ 3.56949688e+04 7.73824844e+04 6.40559766e+04 ... 6.72928906e+04
5.27374688e+04 6.06420117e+04]]
shape: (640, 16368)
###Markdown
The rest of the AnnData data object is built according to the dimensions of the original Flu data input. observations (Suite2p ROIs metadata and associated processing info) For instance, the metadata for each suite2p ROI stored in Suite2p’s stat.npy output is added to `trialobject.data` under `obs` and `obsm` (1D and >1-D observations annotations, respectively).
###Code
trialobj.data.obs
trialobj.data.obsm
###Output
_____no_output_____
###Markdown
The `.obsm` includes the ypix and xpix outputs for each suite2p ROI which represent the pixel locations of the ROI mask.
###Code
print('ypix:', trialobj.data.obsm['ypix'][:5], '\n\nxpix: \t', trialobj.data.obsm['xpix'][:5])
###Output
ypix: [array([102, 102, 102, 102, 102, 103, 103, 103, 103, 103, 103, 103, 104,
104, 104, 104, 104, 104, 104, 104, 105, 105, 105, 105, 105, 105,
105, 105, 106, 106, 106, 106, 106, 106, 106, 106, 107, 107, 107,
107, 107, 107, 107, 108, 108, 108, 108, 108])
array([46, 46, 46, 46, 46, 46, 47, 47, 47, 47, 47, 47, 47, 47, 47, 48, 48,
48, 48, 48, 48, 48, 48, 49, 49, 49, 49, 49, 49, 49, 49, 50, 50, 50,
50, 50, 50, 50, 51, 51, 51, 51, 51, 52, 52, 52])
array([18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20,
20, 20, 20, 20, 21, 21, 21, 21, 21, 21, 21, 21, 21, 21, 22, 22, 22,
22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 23, 24, 24,
24, 24, 24, 24, 24, 24, 25, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26])
array([43, 44, 45, 46, 46, 47, 47, 47, 48, 48, 48, 48, 48, 49, 49, 49, 49,
49, 49, 50, 50, 50, 50, 50, 50, 50, 51, 51, 51, 51, 51, 51, 51, 52,
52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 52, 53, 53, 53, 53, 53, 53,
53, 53, 53, 53, 53, 53, 54, 54, 54, 54, 54, 54, 54, 54, 54, 54, 55,
55, 55, 55, 55, 55, 55, 55, 55, 56, 56, 56, 56, 56, 56, 56, 56, 57,
57, 57, 57, 57, 57, 57, 58, 58, 58, 58, 58, 59, 59])
array([156, 156, 156, 156, 156, 157, 157, 157, 157, 157, 157, 157, 157,
158, 158, 158, 158, 158, 158, 158, 158, 159, 159, 159, 159, 159,
159, 159, 159, 160, 160, 160, 160, 160, 160, 160, 161, 161, 161,
161, 161]) ]
xpix: [array([457, 458, 459, 460, 461, 456, 457, 458, 459, 460, 461, 462, 455,
456, 457, 458, 459, 460, 461, 462, 455, 456, 457, 458, 459, 460,
461, 462, 455, 456, 457, 458, 459, 460, 461, 462, 456, 457, 458,
459, 460, 461, 462, 457, 458, 459, 460, 461])
array([116, 117, 118, 119, 120, 121, 114, 115, 116, 117, 118, 119, 120,
121, 122, 115, 116, 117, 118, 119, 120, 121, 122, 115, 116, 117,
118, 119, 120, 121, 122, 116, 117, 118, 119, 120, 121, 122, 117,
118, 119, 120, 121, 118, 119, 120])
array([202, 203, 204, 205, 200, 201, 202, 203, 204, 205, 206, 207, 200,
201, 202, 203, 204, 205, 206, 207, 208, 200, 201, 202, 203, 204,
205, 206, 207, 208, 209, 199, 200, 201, 202, 203, 204, 205, 206,
207, 208, 200, 201, 202, 203, 204, 205, 206, 207, 200, 201, 202,
203, 204, 205, 206, 207, 201, 202, 203, 204, 205, 206, 207, 202,
203, 204, 205])
array([352, 352, 352, 352, 353, 352, 353, 354, 351, 352, 353, 354, 355,
350, 351, 352, 353, 354, 355, 350, 351, 352, 353, 354, 355, 356,
349, 350, 351, 352, 353, 354, 355, 349, 350, 351, 352, 353, 354,
355, 357, 358, 359, 360, 361, 350, 351, 352, 353, 354, 355, 356,
357, 358, 359, 360, 361, 352, 353, 354, 355, 356, 357, 358, 359,
360, 361, 354, 355, 356, 357, 358, 359, 360, 361, 362, 354, 355,
356, 357, 358, 359, 360, 361, 355, 356, 357, 358, 359, 360, 361,
357, 358, 359, 360, 361, 358, 359])
array([382, 383, 384, 385, 386, 380, 381, 382, 383, 384, 385, 386, 387,
380, 381, 382, 383, 384, 385, 386, 387, 380, 381, 382, 383, 384,
385, 386, 387, 380, 381, 382, 383, 384, 385, 386, 381, 382, 383,
384, 385]) ]
###Markdown
variables (temporal synchronization of paq channels and imaging) And the temporal synchronization data of the experiment collected in .paq output is added to the variables annotations under `var`. These variables are timed to the imaging frame clock timings. The total of variables is the number of imaging frames in the original Flu data input.
###Code
import pandas as pd
pd.options.display.max_rows = 999
trialobj.data.var
###Output
_____no_output_____ |
dementia_optima/models/misc/data_kernel_sl_nr_ft_newfeaturevariable_maya_notAskedandnull.ipynb | ###Markdown
------ **Dementia Patients -- Analysis and Prediction** ***Author : Akhilesh Vyas*** ****Date : August, 2019**** ***Result Plots*** - 0. Setup - 0.1. Load libraries - 0.2. Define paths - 1. Data Preparation - 1.1. Read Data - 1.2. Prepare data - 1.3. Prepare target - 1.4. Removing Unwanted Features - 2. Data Analysis - 2.1. Feature - 2.2. Target - 3. Data Preparation and Vector Transformation- 4. Analysis and Imputing Missing Values - 5. Feature Analysis - 5.1. Correlation Matrix - 5.2. Feature and target - 5.3. Feature Selection Models - 6.Machine Learning -Classification Model 0. Setup 0.1 Load libraries Loading Libraries
###Code
import sys
sys.path.insert(1, '../preprocessing/')
import numpy as np
import pickle
import scipy.stats as spstats
import matplotlib.pyplot as plt
import seaborn as sns
import pandas_profiling
from sklearn.datasets.base import Bunch
#from data_transformation_cls import FeatureTransform
from ast import literal_eval
import plotly.figure_factory as ff
import plotly.offline as py
import plotly.graph_objects as go
import pandas as pd
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', -1)
from ordered_set import OrderedSet
%matplotlib inline
###Output
_____no_output_____
###Markdown
0.2 Define paths
###Code
# data_path
data_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked/'
result_path = '../../../datalcdem/data/optima/dementia_18July/data_notasked/results/'
optima_path = '../../../datalcdem/data/optima/optima_excel/'
###Output
_____no_output_____
###Markdown
1. Data Preparation 1.1. Read Data
###Code
#Preparation Features from Raw data
# Patient Comorbidities data
'''patient_com_raw_df = pd.read_csv(data_path + 'optima_patients_comorbidities.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Comorbidity_cui']]
display(patient_com_raw_df.head(5))
patient_com_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_raw_df['EPISODE_DATE'])
# Patient Treatment data
patient_treat_raw_df = pd.read_csv(data_path + 'optima_patients_treatments.csv').groupby(by=['patient_id', 'EPISODE_DATE'], as_index=False).agg(lambda x: x.tolist())[['patient_id', 'EPISODE_DATE', 'Medication_cui']]
display(patient_treat_raw_df.head(5))
patient_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_treat_raw_df['EPISODE_DATE'])
# Join Patient Treatment and Comorbidities data
patient_com_treat_raw_df = pd.merge(patient_com_raw_df, patient_treat_raw_df,on=['patient_id', 'EPISODE_DATE'], how='outer')
patient_com_treat_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True)
patient_com_treat_raw_df.reset_index(drop=True, inplace=True)
patient_com_treat_raw_df.head(5)
#Saving data
patient_com_treat_raw_df.to_csv(data_path + 'patient_com_treat_episode_df.csv', index=False)'''
# Extracting selected features from Raw data
def rename_columns(col_list):
d = {}
for i in col_list:
if i=='GLOBAL_PATIENT_DB_ID':
d[i]='patient_id'
elif 'CAMDEX SCORES: ' in i:
d[i]=i.replace('CAMDEX SCORES: ', '').replace(' ', '_')
elif 'CAMDEX ADMINISTRATION 1-12: ' in i:
d[i]=i.replace('CAMDEX ADMINISTRATION 1-12: ', '').replace(' ', '_')
elif 'DIAGNOSIS 334-351: ' in i:
d[i]=i.replace('DIAGNOSIS 334-351: ', '').replace(' ', '_')
elif 'OPTIMA DIAGNOSES V 2010: ' in i:
d[i]=i.replace('OPTIMA DIAGNOSES V 2010: ', '').replace(' ', '_')
elif 'PM INFORMATION: ' in i:
d[i]=i.replace('PM INFORMATION: ', '').replace(' ', '_')
else:
d[i]=i.replace(' ', '_')
return d
sel_col_df = pd.read_excel(data_path+'Variable_Guide_Highlighted_Fields_.xlsx')
display(sel_col_df.head(5))
sel_cols = [i+j.replace('+', ':')for i,j in zip(sel_col_df['Sub Category '].tolist(), sel_col_df['Variable Label'].tolist())]
rem_cols= ['OPTIMA DIAGNOSES V 2010: OTHER SYSTEMIC ILLNESS: COMMENT'] # Missing column in the dataset
sel_cols = sorted(list(set(sel_cols)-set(rem_cols)))
print (sel_cols)
columns_selected = list(OrderedSet(['GLOBAL_PATIENT_DB_ID', 'EPISODE_DATE'] + sel_cols))
df_datarequest = pd.read_excel(optima_path+'Data_Request_Jan_2019_final.xlsx')
display(df_datarequest.head(1))
df_datarequest_features = df_datarequest[columns_selected]
display(df_datarequest_features.columns)
columns_renamed = rename_columns(df_datarequest_features.columns.tolist())
df_datarequest_features.rename(columns=columns_renamed, inplace=True)
display(df_datarequest_features.head(5))
# df_datarequest_features.drop(columns=['Age_At_Episode', 'PETERSEN_MCI_TYPE'], inplace=True)
display(df_datarequest_features.head(5))
# drop columns having out of range MMSE value
#df_datarequest_features = df_datarequest_features[(df_datarequest_features['MINI_MENTAL_SCORE']<=30) & (df_datarequest_features['MINI_MENTAL_SCORE']>=0)]
# Merging Join Patient Treatment, Comorbidities and selected features from raw data
#patient_com_treat_raw_df['EPISODE_DATE'] = pd.to_datetime(patient_com_treat_raw_df['EPISODE_DATE'])
#patient_com_treat_fea_raw_df = pd.merge(patient_com_treat_raw_df,df_datarequest_features,on=['patient_id', 'EPISODE_DATE'], how='left')
#patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'],axis=0, inplace=True, ascending=True)
#patient_com_treat_fea_raw_df.reset_index(inplace=True, drop=True)
#display(patient_com_treat_fea_raw_df.head(5))
patient_com_treat_fea_raw_df = df_datarequest_features # Need to be changed ------------------------
# Filling missing MMSE values with the patient group average
#patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE']\
# = patient_com_treat_fea_raw_df.groupby(by=['patient_id'])['MINI_MENTAL_SCORE'].transform(lambda x: x.fillna(x.mean()))
display(patient_com_treat_fea_raw_df.head(5))
# 19<=Mild<=24 , 14<=Moderate<=18 , Severe<=13
#patient_com_treat_fea_raw_df['MINI_MENTAL_SCORE_CATEGORY']=np.nan
def change_minimentalscore_to_category(df):
df.loc[(df['MINI_MENTAL_SCORE']<=30) & (df['MINI_MENTAL_SCORE']>24),'MINI_MENTAL_SCORE_CATEGORY'] = 'Normal'
df.loc[(df['MINI_MENTAL_SCORE']<=24) & (df['MINI_MENTAL_SCORE']>=19),
'MINI_MENTAL_SCORE_CATEGORY'] = 'Mild'
df.loc[(df['MINI_MENTAL_SCORE']<=18) & (df['MINI_MENTAL_SCORE']>=14),
'MINI_MENTAL_SCORE_CATEGORY'] = 'Moderate'
df.loc[(df['MINI_MENTAL_SCORE']<=13) & (df['MINI_MENTAL_SCORE']>=0),'MINI_MENTAL_SCORE_CATEGORY'] = 'Severe'
return df
#patient_com_treat_fea_raw_df = change_minimentalscore_to_category(patient_com_treat_fea_raw_df)
# saving file
patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_without_expand_df.csv', index=False)
# Set line number for treatment line
def setLineNumber(lst):
lst_dict = {ide:0 for ide in lst}
lineNumber_list = []
for idx in lst:
if idx in lst_dict:
lst_dict[idx] = lst_dict[idx] + 1
lineNumber_list.append(lst_dict[idx])
return lineNumber_list
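#Illustrative example (added): setLineNumber([7, 7, 12, 7, 12]) returns
#[1, 2, 1, 3, 2], i.e. a running episode counter per patient_id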
patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist())
display(patient_com_treat_fea_raw_df.head(5))
# Extend episode data into columns
def extend_episode_data(df):
id_dict = {i:0 for i in df['patient_id'].tolist()}
for x in df['patient_id'].tolist():
if x in id_dict:
id_dict[x]=id_dict[x]+1
line_updated = [int(j) for i in id_dict.values() for j in range(1,i+1)]
# print (line_updated[0:10])
df.update(pd.Series(line_updated, name='lineNumber'),errors='ignore')
print ('\n----------------After creating line-number for each patients------------------')
display(df.head(20))
# merging episodes based on id and creating new columns for each episode
r = df['lineNumber'].max()
print ('Max line:',r)
l = [df[df['lineNumber']==i] for i in range(1, int(r+1))]
print('Number of Dfs to merge: ',len(l))
df_new = pd.DataFrame()
tmp_id = []
for i, df_l in enumerate(l):
df_l = df_l[~df_l['patient_id'].isin(tmp_id)]
for j, df_ll in enumerate(l[i+1:]):
#df_l = df_l.merge(df_ll, on='id', how='left', suffix=(str(j), str(j+1))) #suffixe is not working
#print (j)
df_l = df_l.join(df_ll.set_index('patient_id'), on='patient_id', rsuffix='_'+str(j+1))
tmp_id = tmp_id + df_l['patient_id'].tolist()
#display(df_l)
df_new = df_new.append(df_l, ignore_index=True, sort=False)
return df_new
patient_com_treat_fea_raw_df['lineNumber'] = setLineNumber(patient_com_treat_fea_raw_df['patient_id'].tolist())
# drop rows with duplicated episode for a patient
patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df.drop_duplicates(subset=['patient_id', 'EPISODE_DATE'])
patient_com_treat_fea_raw_df.sort_values(by=['patient_id', 'EPISODE_DATE'], inplace=True)
columns = patient_com_treat_fea_raw_df.columns.tolist()
patient_com_treat_fea_raw_df = patient_com_treat_fea_raw_df[columns[0:2]+columns[-1:]
+columns[2:4]+columns[-2:-1]
+columns[4:-2]]
# Expand patient
#patient_com_treat_fea_raw_df = extend_episode_data(patient_com_treat_fea_raw_df)
display(patient_com_treat_fea_raw_df.head(2))
#Saving extended episode of each patients
#patient_com_treat_fea_raw_df.to_csv(data_path + 'patient_com_treat_fea_episode_raw_df.csv', index=False)
patient_com_treat_fea_raw_df.shape
display(patient_com_treat_fea_raw_df.describe(include='all'))
display(patient_com_treat_fea_raw_df.info())
tmp_l = []
for i in range(len(patient_com_treat_fea_raw_df.index)) :
# print("Nan in row ", i , " : " , patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
tmp_l.append(patient_com_treat_fea_raw_df.iloc[i].isnull().sum())
plt.hist(tmp_l)
plt.show()
# find NAN and Notasked after filled value
def findnotasked(v):
#print(v)
c = 0.0
flag = False
try:
for i in v:
if float(i)<9.0 and float(i)>=0.0 and flag==False: #float(i)<9.0 and float(i)>=0.0:
flag = True
elif (float(i)==9.0 and flag==True):
c = c+1
except:
pass
'''try:
for i in v:
if i!=9.0 or i!=i: #float(i)<9.0 and float(i)>=0.0:
flag = True
elif (float(i)==9.0 and flag==True):
c = c+1
except:
pass'''
return c
def findnan(v):
#print(v)
c = 0.0
flag = False
try:
for i in v:
if float(i)<9.0 and float(i)>=0.0 and flag==False: #float(i)<9.0 and float(i)>=0.0:
flag = True
elif (float(i)!=float(i) and flag==True):
c = c+1
except:
pass
'''try:
for i in v:
if i!=9.0 or i!=i: #float(i)<9.0 and float(i)>=0.0:
flag = True
elif (float(i)!=float(i) and flag==True):
c = c+1
except:
pass'''
return c
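#Illustrative examples (added), with nan = float('nan'):
# findnotasked([2, 9, nan, 9]) -> 2.0 (two 'not asked' codes after the first valid answer)
# findnan([2, 9, nan, 9]) -> 1.0 (one missing value after the first valid answer)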
df = patient_com_treat_fea_raw_df[list(
set([col for col in patient_com_treat_fea_raw_df.columns.tolist()])
-set(['EPISODE_DATE']))]
tmpdf = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id'])
display(tmpdf.head(5))
for col in df.columns.tolist():
#print (col)
tmp_df1 = df.groupby(by=['patient_id'])[col].apply(lambda x : findnotasked(x)
).reset_index(name='Count(notAsked)_'+col )
tmp_df2 = df.groupby(by=['patient_id'])[col].apply(lambda x : findnan(x)
).reset_index(name='Count(nan)_'+col )
#print (tmp_df1.isnull().sum().sum(), tmp_df2.isnull().sum().sum())
tmpdf = tmpdf.merge(tmp_df1, on=['patient_id'], how='inner')
tmpdf = tmpdf.merge(tmp_df2, on=['patient_id'], how='inner')
#print (tmpdf.columns.tolist()[-2])
#display(tmpdf)
#display(tmpdf.agg(lambda x: x.sum(), axis=1))
col_notasked = [col for col in tmpdf.columns if 'Count(notAsked)_' in col]
col_nan = [col for col in tmpdf.columns.tolist() if 'Count(nan)_' in col]
tmpdf['count_Total(notasked)']=tmpdf[col_notasked].agg(lambda x: x.sum(),axis=1)
tmpdf['count_Total(nan)']=tmpdf[col_nan].agg(lambda x: x.sum(),axis=1)
display(tmpdf.head(5))
profile = tmpdf.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan.html")
# Count all 'not asked' (code 9) and NaN values per patient, regardless of position
def findnotasked_full(v):
#print(v)
c = 0.0
try:
for i in v:
if float(i)==9.0:
c = c+1
except:
pass
return c
def findnan_full(v):
c = 0.0
try:
for i in v:
if float(i)!=float(i): # NaN check, consistent with findnan above
c = c+1
except:
pass
return c
df = patient_com_treat_fea_raw_df[list(
set([col for col in patient_com_treat_fea_raw_df.columns.tolist()])
-set(['EPISODE_DATE']))]
tmpdf_full = pd.DataFrame(data=df['patient_id'].unique(), columns=['patient_id'])
display(tmpdf_full.head(5))
for col in df.columns.tolist():
#print (col)
tmp_df1_full = df.groupby(by=['patient_id'])[col].apply(lambda x : findnotasked_full(x)
).reset_index(name='Count(notAsked)_'+col )
tmp_df2_full = df.groupby(by=['patient_id'])[col].apply(lambda x : findnan_full(x)
).reset_index(name='Count(nan)_'+col )
#print (tmp_df1.isnull().sum().sum(), tmp_df2.isnull().sum().sum())
tmpdf_full = tmpdf_full.merge(tmp_df1_full, on=['patient_id'], how='inner')
tmpdf_full = tmpdf_full.merge(tmp_df2_full, on=['patient_id'], how='inner')
#print (tmpdf.columns.tolist()[-2])
#display(tmpdf)
#display(tmpdf.agg(lambda x: x.sum(), axis=1))
col_notasked_full = [col for col in tmpdf_full.columns if 'Count(notAsked)_' in col]
col_nan_full = [col for col in tmpdf_full.columns.tolist() if 'Count(nan)_' in col]
tmpdf_full['count_Total(notasked)']=tmpdf_full[col_notasked_full].agg(lambda x: x.sum(),axis=1)
tmpdf_full['count_Total(nan)']=tmpdf_full[col_nan_full].agg(lambda x: x.sum(),axis=1)
display(tmpdf_full.head(5))
profile = tmpdf_full.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked_nan_full.html")
# profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report', style={'full_width':True})
profile = patient_com_treat_fea_raw_df.profile_report(title='Dementia Profiling Report')
profile.to_file(output_file= result_path + "dementia_data_profiling_report_output_all_patients_notasked.html")
#columnswise sum
total_notasked_nan = tmpdf.sum(axis = 0, skipna = True)
total_notasked_nan.to_csv(data_path+'total_notasked_nan.csv', index=True)
total_notasked_nan_com = tmpdf_full.sum(axis = 0, skipna = True)
total_notasked_nan_com.to_csv(data_path+'total_notasked_nan_com.csv', index=True)
patient_com_treat_fea_raw_df.describe()
###Output
_____no_output_____ |
jupyter_notebooks/notebooks/NB13_CIX-DNN_susy_Pytorch.ipynb | ###Markdown
Notebook 13: Using Deep Learning to Study SUSY with Pytorch
Learning Goals
The goal of this notebook is to introduce the powerful PyTorch framework for building neural networks and use it to analyze the SUSY dataset. After this notebook, the reader should understand the mechanics of PyTorch and how to construct DNNs using this package. In addition, the reader is encouraged to explore the GPU backend available in Pytorch on this dataset.
Overview
In this notebook, we use Deep Neural Networks to classify the supersymmetry dataset, first introduced by Baldi et al. in [Nature Communications (2015)](https://www.nature.com/articles/ncomms5308). The SUSY data set consists of 5,000,000 Monte-Carlo samples of supersymmetric and non-supersymmetric collisions with $18$ features. The signal process is the production of electrically-charged supersymmetric particles which decay to $W$ bosons and an electrically-neutral supersymmetric particle that is invisible to the detector. The first $8$ features are "raw" kinematic features that can be directly measured from collisions. The final $10$ features are "hand-constructed" features that have been chosen using physical knowledge and are known to be important in distinguishing supersymmetric and non-supersymmetric collision events. More specifically, they are given by the column names below. In this notebook, we study this dataset using Pytorch.
###Code
from __future__ import print_function, division
import os,sys
import numpy as np
import torch # pytorch package, allows using GPUs
# fix seed
seed=17
np.random.seed(seed)
torch.manual_seed(seed)
###Output
_____no_output_____
###Markdown
Structure of the Procedure
Constructing a Deep Neural Network to solve ML problems is a multiple-stage process. Quite generally, one can identify the key steps as follows:
* ***step 1:*** Load and process the data
* ***step 2:*** Define the model and its architecture
* ***step 3:*** Choose the optimizer and the cost function
* ***step 4:*** Train the model
* ***step 5:*** Evaluate the model performance on the *unseen* test data
* ***step 6:*** Modify the hyperparameters to optimize performance for the specific data set
Below, we sometimes combine some of these steps for convenience. Notice that we take a rather different approach compared to the simpler MNIST Keras notebook: we first define a set of classes and functions, and run the actual computation only at the very end.
Step 1: Load and Process the SUSY Dataset
The supersymmetry dataset can be downloaded from the UCI Machine Learning repository at [https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz](https://archive.ics.uci.edu/ml/machine-learning-databases/00279/SUSY.csv.gz). The dataset is quite large; download it and unzip it in a directory.
Loading data in Pytorch is done by creating a user-defined class, which we name `SUSY_Dataset`, as a child of the `torch.utils.data.Dataset` class. This ensures that all attributes required for processing the data during the training and test stages are easily inherited. The `__init__` method of our custom data class contains the usual code for loading the data, which is problem-specific and has been discussed for the SUSY data set in Notebook 5. More importantly, the user-defined data class must override the `__len__` and `__getitem__` methods of the parent `Dataset` class. The former returns the size of the data set, while the latter allows the user to access a particular data point from the set by specifying its index.
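As a minimal sketch (not part of the original notebook, toy names only), the skeleton every custom dataset needs looks like this; the full `SUSY_Dataset` below adds the CSV loading and feature selection on top of it:

```python
from torch.utils.data import Dataset

class MinimalDataset(Dataset):
    """Toy dataset wrapping a pair of (features, labels) arrays."""
    def __init__(self, X, y):
        self.X, self.y = X, y
    def __len__(self):
        # number of samples in the dataset
        return len(self.y)
    def __getitem__(self, idx):
        # return a single (features, label) pair by index
        return self.X[idx], self.y[idx]
```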
###Code
from torchvision import datasets # load data
class SUSY_Dataset(torch.utils.data.Dataset):
"""SUSY pytorch dataset."""
def __init__(self, data_file, root_dir, dataset_size, train=True, transform=None, high_level_feats=None):
"""
Args:
data_file (string): Name of the csv file with the SUSY data.
root_dir (string): Directory containing the csv file.
train (bool, optional): If set to `True` load training data.
transform (callable, optional): Optional transform to be applied on a sample.
high_level_feats (bool, optional): If set to `True`, working with high-level features only.
If set to `False`, working with low-level features only.
Default is `None`: working with all features
"""
import pandas as pd
features=['SUSY','lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta', 'lepton 2 phi',
'missing energy magnitude', 'missing energy phi', 'MET_rel', 'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2',
'S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)']
low_features=['lepton 1 pT', 'lepton 1 eta', 'lepton 1 phi', 'lepton 2 pT', 'lepton 2 eta', 'lepton 2 phi',
'missing energy magnitude', 'missing energy phi']
high_features=['MET_rel', 'axial MET', 'M_R', 'M_TR_2', 'R', 'MT2','S_R', 'M_Delta_R', 'dPhi_r_b', 'cos(theta_r1)']
#Number of datapoints to work with
df = pd.read_csv(root_dir+data_file, header=None,nrows=dataset_size,engine='python')
df.columns=features
Y = df['SUSY']
X = df[[col for col in df.columns if col!="SUSY"]]
# set training and test data size
train_size=int(0.8*dataset_size)
self.train=train
if self.train:
X=X[:train_size]
Y=Y[:train_size]
print("Training on {} examples".format(train_size))
else:
X=X[train_size:]
Y=Y[train_size:]
print("Testing on {} examples".format(dataset_size-train_size))
self.root_dir = root_dir
self.transform = transform
# make datasets using only the 8 low-level features and 10 high-level features
if high_level_feats is None:
self.data=(X.values.astype(np.float32),Y.values.astype(int))
print("Using both high and low level features")
elif high_level_feats is True:
self.data=(X[high_features].values.astype(np.float32),Y.values.astype(int))
print("Using both high-level features only.")
elif high_level_feats is False:
self.data=(X[low_features].values.astype(np.float32),Y.values.astype(int))
print("Using both low-level features only.")
# override __len__ and __getitem__ of the Dataset() class
def __len__(self):
return len(self.data[1])
def __getitem__(self, idx):
sample=(self.data[0][idx,...],self.data[1][idx])
if self.transform:
sample=self.transform(sample)
return sample
###Output
_____no_output_____
###Markdown
Last, we define a helper function `load_data()` that accepts as a required argument the set of parameters `args`, and returns two generators: `test_loader` and `train_loader` which readily return mini-batches.
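For illustration (a sketch assuming `args` has been set up as in the last section of this notebook), each loader can simply be iterated over to obtain one mini-batch at a time:

```python
train_loader, test_loader = load_data(args)
for data, label in train_loader:
    print(data.shape, label.shape)   # (batch_size, n_features) and (batch_size,)
    break                            # inspect just the first mini-batch
```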
###Code
def load_data(args):
data_file='SUSY.csv'
root_dir=os.path.expanduser('~')+'/ML_review/SUSY_data/'
kwargs = {} # CUDA arguments, if enabled
# load train and test data
train_loader = torch.utils.data.DataLoader(
SUSY_Dataset(data_file,root_dir,args.dataset_size,train=True,high_level_feats=args.high_level_feats),
batch_size=args.batch_size, shuffle=True, **kwargs)
test_loader = torch.utils.data.DataLoader(
SUSY_Dataset(data_file,root_dir,args.dataset_size,train=False,high_level_feats=args.high_level_feats),
batch_size=args.test_batch_size, shuffle=True, **kwargs)
return train_loader, test_loader
###Output
_____no_output_____
###Markdown
Step 2: Define the Neural Net and its Architecture
To construct neural networks with Pytorch, we make another class called `model` as a child of Pytorch's `nn.Module` class. The `model` class initializes the types of layers needed for the deep neural net in its `__init__` method, while the DNN is assembled in a method called `forward`, which accepts an `autograd.Variable` object and returns the output layer. Using this convention, Pytorch automatically recognizes the structure of the DNN, and the `autograd` module pulls the gradients forward and backward using backprop.
Our code below is constructed so that one can choose to use the high-level features only, the low-level features only, or all features together. This choice determines the size of the fully-connected input layer `fc1`; therefore the `__init__` method accepts the optional argument `high_level_feats`.
###Code
import torch.nn as nn # construct NN
class model(nn.Module):
def __init__(self,high_level_feats=None):
# inherit attributes and methods of nn.Module
super(model, self).__init__()
# an affine operation: y = Wx + b
if high_level_feats is None:
self.fc1 = nn.Linear(18, 200) # all features
elif high_level_feats:
self.fc1 = nn.Linear(10, 200) # high-level features only
else:
self.fc1 = nn.Linear(8, 200) # low-level features only
self.batchnorm1=nn.BatchNorm1d(200, eps=1e-05, momentum=0.1)
self.batchnorm2=nn.BatchNorm1d(100, eps=1e-05, momentum=0.1)
self.fc2 = nn.Linear(200, 100) # see forward function for dimensions
self.fc3 = nn.Linear(100, 2)
def forward(self, x):
'''Defines the feed-forward function for the NN.
A backward function is automatically defined using `torch.autograd`
Parameters
----------
x : autograd.Tensor
input data
Returns
-------
autograd.Tensor
output layer of NN
'''
# apply rectified linear unit
x = F.relu(self.fc1(x))
# apply dropout
#x=self.batchnorm1(x)
x = F.dropout(x, training=self.training)
# apply rectified linear unit
x = F.relu(self.fc2(x))
# apply dropout
#x=self.batchnorm2(x)
x = F.dropout(x, training=self.training)
# apply affine operation fc3
x = self.fc3(x)
# soft-max layer
x = F.log_softmax(x,dim=1)
return x
###Output
_____no_output_____
###Markdown
Steps 3+4+5: Choose the Optimizer and the Cost Function. Train and Evaluate the Model
Next, we define the function `evaluate_model`. The first argument, `args`, contains all hyperparameters needed for the DNN (see below). The second and third arguments are the `train_loader` and the `test_loader` objects returned by the function `load_data()` we defined in Step 1 above. The `evaluate_model` function returns the final `test_loss` and `test_accuracy` of the model.
First, we initialize a `model` and call the object `DNN`. In order to define the loss function and the optimizer, we use the modules `torch.nn.functional` (imported here as `F`) and `torch.optim`. As a loss function we choose the negative log-likelihood, stored under the variable `criterion`. As usual, we can choose any of a variety of SGD-based optimizers, but we focus on traditional SGD.
Next, we define two functions: `train()` and `test()`. They are called at the end of `evaluate_model`, where we loop over the training epochs to train and test our model. The `train` function accepts an integer called `epoch`, which is only used for printing the training progress. We first set the `DNN` in training mode using the `train()` method inherited from `nn.Module`. Then we loop over the mini-batches in `train_loader`. We cast the data as a pytorch `Variable`, reset the `optimizer`, perform the forward step by calling the `DNN` model on the `data`, and compute the `loss`. The backprop algorithm is then easily run using the `backward()` method of the loss `criterion`. We use `optimizer.step` to update the weights of the `DNN`. Last, we print the performance for every mini-batch. `train` returns the loss on the data.
The `test` function is similar to `train`, but its purpose is to test the performance of a trained model. Once we set the `DNN` model in `eval()` mode, the following steps are similar to those in `train`. We then compute the `test_loss` and the number of `correct` predictions, print the results and return them.
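One detail worth spelling out with a small illustrative snippet (not from the original notebook): `F.nll_loss` expects log-probabilities together with integer class labels, which is exactly why the network ends in a `log_softmax` layer; the combination is equivalent to applying `F.cross_entropy` to raw logits:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 2)                # raw outputs for 4 samples, 2 classes
labels = torch.tensor([0, 1, 1, 0])       # integer class labels, NOT one-hot vectors
loss_a = F.nll_loss(F.log_softmax(logits, dim=1), labels)
loss_b = F.cross_entropy(logits, labels)  # same value
print(torch.allclose(loss_a, loss_b))     # True
```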
###Code
import torch.nn.functional as F # implements forward and backward definitions of an autograd operation
import torch.optim as optim # different update rules such as SGD, Nesterov-SGD, Adam, RMSProp, etc
def evaluate_model(args,train_loader,test_loader):
# create model
DNN = model(high_level_feats=args.high_level_feats)
# negative log-likelihood (nll) loss for training: takes class labels NOT one-hot vectors!
criterion = F.nll_loss
# define SGD optimizer
optimizer = optim.SGD(DNN.parameters(), lr=args.lr, momentum=args.momentum)
#optimizer = optim.Adam(DNN.parameters(), lr=0.001, betas=(0.9, 0.999))
################################################
def train(epoch):
'''Trains a NN using minibatches.
Parameters
----------
epoch : int
Training epoch number.
'''
# set model to training mode (affects Dropout and BatchNorm)
DNN.train()
# loop over training data
for batch_idx, (data, label) in enumerate(train_loader):
# zero gradient buffers
optimizer.zero_grad()
# compute output of final layer: forward step
output = DNN(data)
# compute loss
loss = criterion(output, label)
# run backprop: backward step
loss.backward()
# update weights of NN
optimizer.step()
# print loss at current epoch
if batch_idx % args.log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item() ))
return loss.item()
################################################
def test():
'''Tests NN performance.
'''
# evaluate model
DNN.eval()
test_loss = 0 # loss function on test data
correct = 0 # number of correct predictions
# loop over test data
for data, label in test_loader:
# compute model prediction softmax probability
output = DNN(data)
# compute test loss
test_loss += criterion(output, label, size_average=False).item() # sum up batch loss
# find most likely prediction
pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
# update number of correct predictions
correct += pred.eq(label.data.view_as(pred)).cpu().sum().item()
# print test loss
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.3f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
return test_loss, correct / len(test_loader.dataset)
################################################
train_loss=np.zeros((args.epochs,))
test_loss=np.zeros_like(train_loss)
test_accuracy=np.zeros_like(train_loss)
epochs=range(1, args.epochs + 1)
for epoch in epochs:
train_loss[epoch-1] = train(epoch)
test_loss[epoch-1], test_accuracy[epoch-1] = test()
return test_loss[-1], test_accuracy[-1]
###Output
_____no_output_____
###Markdown
Step 6: Modify the Hyperparameters to Optimize Performance of the Model
To study the performance of the model for a variety of different `data_set_sizes` and `learning_rates`, we do a grid search. Let us define a function `grid_search`, which accepts the `args` variable containing all hyper-parameters needed for the problem. After choosing logarithmically-spaced `data_set_sizes` and `learning_rates`, we first loop over all `data_set_sizes`, update the `args` variable, and call the `load_data` function. We then loop once again over all `learning_rates`, update `args` and call `evaluate_model`.
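For concreteness (an illustrative one-liner, not part of the original notebook), the logarithmically-spaced learning-rate grid used below is simply:

```python
import numpy as np
print(np.logspace(-5, -1, 5))   # [1.e-05 1.e-04 1.e-03 1.e-02 1.e-01]
```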
###Code
def grid_search(args):
# perform grid search over learning rate and data set size
dataset_sizes=[1000, 10000, 100000, 200000] #np.logspace(2,5,4).astype('int')
learning_rates=np.logspace(-5,-1,5)
# pre-allocate data
test_loss=np.zeros((len(dataset_sizes),len(learning_rates)),dtype=np.float64)
test_accuracy=np.zeros_like(test_loss)
# do grid search
for i, dataset_size in enumerate(dataset_sizes):
# update data set size parameters
args.dataset_size=dataset_size
args.batch_size=int(0.01*dataset_size)
# load data
train_loader, test_loader = load_data(args)
for j, lr in enumerate(learning_rates):
# update learning rate
args.lr=lr
print("\n training DNN with %5d data points and SGD lr=%0.6f. \n" %(dataset_size,lr) )
test_loss[i,j],test_accuracy[i,j] = evaluate_model(args,train_loader,test_loader)
plot_data(learning_rates,dataset_sizes,test_accuracy)
###Output
_____no_output_____
###Markdown
Last, we use the function `plot_data`, defined below, to plot the results.
###Code
import matplotlib.pyplot as plt
def plot_data(x,y,data):
# plot results
fontsize=16
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(data, interpolation='nearest', vmin=0, vmax=1)
cbar=fig.colorbar(cax)
cbar.ax.set_ylabel('accuracy (%)',rotation=90,fontsize=fontsize)
cbar.set_ticks([0,.2,.4,0.6,0.8,1.0])
cbar.set_ticklabels(['0%','20%','40%','60%','80%','100%'])
# put text on matrix elements
for i, x_val in enumerate(np.arange(len(x))):
for j, y_val in enumerate(np.arange(len(y))):
c = "${0:.1f}\\%$".format( 100*data[j,i])
ax.text(x_val, y_val, c, va='center', ha='center')
# convert axis vaues to to string labels
x=[str(i) for i in x]
y=[str(i) for i in y]
ax.set_xticklabels(['']+x)
ax.set_yticklabels(['']+y)
ax.set_xlabel('$\\mathrm{learning\\ rate}$',fontsize=fontsize)
ax.set_ylabel('$\\mathrm{data\\ set\\ size}$',fontsize=fontsize)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Run Code
As we mentioned at the beginning of the notebook, all functions and classes discussed above only specify the procedure but do not actually perform any computations. This allows us to re-use them for different problems. Actually running the training and testing for every point in the grid search is done below. The `argparse` class allows us to conveniently keep track of all hyperparameters, stored in the variable `args`, which enters most of the functions we defined above. To run the simulation, we call the function `grid_search`.
Exercises
* One of the advantages of Pytorch is that it can automatically use the CUDA library for fast performance on GPUs. For the sake of clarity, we have omitted this in the above notebook; a sketch is given below. Go online to check how to put the CUDA commands back into the code above. _Hint:_ study the [Pytorch MNIST tutorial](https://github.com/pytorch/examples/blob/master/mnist/main.py) to see how this works in practice.
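A hedged sketch for the exercise (it assumes a CUDA-capable GPU and the `model`, `args` and `train_loader` objects defined above; the MNIST tutorial linked in the hint is the authoritative reference): the usual pattern is to pick a `torch.device` once and move both the model and every mini-batch onto it.

```python
# sketch only: enable GPU execution if CUDA is available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
DNN = model(high_level_feats=args.high_level_feats).to(device)

for data, label in train_loader:
    data, label = data.to(device), label.to(device)  # move the mini-batch to the device
    output = DNN(data)
    # ... compute the loss, call loss.backward() and optimizer.step() as in train() above
    break
```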
###Code
import argparse # handles arguments
import sys; sys.argv=['']; del sys # required to use parser in jupyter notebooks
# Training settings
parser = argparse.ArgumentParser(description='PyTorch SUSY Example')
parser.add_argument('--dataset_size', type=int, default=100000, metavar='DS',
help='size of data set (default: 100000)')
parser.add_argument('--high_level_feats', type=bool, default=None, metavar='HLF',
help='toggles high level features (default: None)')
parser.add_argument('--batch-size', type=int, default=100, metavar='N',
help='input batch size for training (default: 100)')
parser.add_argument('--test-batch-size', type=int, default=1000, metavar='N',
help='input batch size for testing (default: 1000)')
parser.add_argument('--epochs', type=int, default=10, metavar='N',
help='number of epochs to train (default: 10)')
parser.add_argument('--lr', type=float, default=0.05, metavar='LR',
help='learning rate (default: 0.05)')
parser.add_argument('--momentum', type=float, default=0.8, metavar='M',
help='SGD momentum (default: 0.8)')
parser.add_argument('--no-cuda', action='store_true', default=False,
help='disables CUDA training')
parser.add_argument('--seed', type=int, default=2, metavar='S',
help='random seed (default: 2)')
parser.add_argument('--log-interval', type=int, default=10, metavar='N',
help='how many batches to wait before logging training status')
args = parser.parse_args()
# set seed of random number generator
torch.manual_seed(args.seed)
grid_search(args)
###Output
Training on 800 examples
Using both high and low level features
Testing on 200 examples
Using both high and low level features
training DNN with 1000 data points and SGD lr=0.000010.
Train Epoch: 1 [0/800 (0%)] Loss: 0.561476
Train Epoch: 1 [100/800 (12%)] Loss: 0.823435
Train Epoch: 1 [200/800 (25%)] Loss: 0.647225
Train Epoch: 1 [300/800 (38%)] Loss: 0.612186
Train Epoch: 1 [400/800 (50%)] Loss: 0.962393
Train Epoch: 1 [500/800 (62%)] Loss: 0.835941
Train Epoch: 1 [600/800 (75%)] Loss: 0.808794
Train Epoch: 1 [700/800 (88%)] Loss: 0.766973
Test set: Average loss: 0.7115, Accuracy: 109/200 (54.500%)
Train Epoch: 2 [0/800 (0%)] Loss: 0.861468
Train Epoch: 2 [100/800 (12%)] Loss: 0.653841
Train Epoch: 2 [200/800 (25%)] Loss: 0.823339
Train Epoch: 2 [300/800 (38%)] Loss: 0.745887
Train Epoch: 2 [400/800 (50%)] Loss: 0.694589
Train Epoch: 2 [500/800 (62%)] Loss: 0.693052
Train Epoch: 2 [600/800 (75%)] Loss: 0.719047
Train Epoch: 2 [700/800 (88%)] Loss: 0.591686
Test set: Average loss: 0.7107, Accuracy: 109/200 (54.500%)
Train Epoch: 3 [0/800 (0%)] Loss: 0.728128
Train Epoch: 3 [100/800 (12%)] Loss: 0.698269
Train Epoch: 3 [200/800 (25%)] Loss: 0.705191
Train Epoch: 3 [300/800 (38%)] Loss: 0.683300
Train Epoch: 3 [400/800 (50%)] Loss: 0.732665
Train Epoch: 3 [500/800 (62%)] Loss: 0.849138
Train Epoch: 3 [600/800 (75%)] Loss: 0.728277
|
Baseline_Model.ipynb | ###Markdown
Baseline Model - Initial parameter search.
- Search parameters for the baseline model.
Author: Israel Oliveira [\[e-mail\]](mailto:'Israel%20Oliveira%20')
###Code
%load_ext watermark
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split, ParameterGrid
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, roc_auc_score
from tqdm import tqdm
from glob import glob
# import matplotlib.pyplot as plt
# %matplotlib inline
# from matplotlib import rcParams
# from cycler import cycler
# rcParams['figure.figsize'] = 12, 8 # 18, 5
# rcParams['axes.spines.top'] = False
# rcParams['axes.spines.right'] = False
# rcParams['axes.grid'] = True
# rcParams['axes.prop_cycle'] = cycler(color=['#365977'])
# rcParams['lines.linewidth'] = 2.5
# import seaborn as sns
# sns.set_theme()
# pd.set_option("max_columns", None)
# pd.set_option("max_rows", None)
# pd.set_option('display.max_colwidth', None)
from IPython.display import Markdown, display
def md(arg):
display(Markdown(arg))
# from pandas_profiling import ProfileReport
# #report = ProfileReport(#DataFrame here#, minimal=True)
# #report.to
# import pyarrow.parquet as pq
# #df = pq.ParquetDataset(path_to_folder_with_parquets, filesystem=None).read_pandas().to_pandas()
# import json
# def open_file_json(path,mode='r',var=None):
# if mode == 'w':
# with open(path,'w') as f:
# json.dump(var, f)
# if mode == 'r':
# with open(path,'r') as f:
# return json.load(f)
# import functools
# import operator
# def flat(a):
# return functools.reduce(operator.iconcat, a, [])
# import json
# from glob import glob
# from typing import NewType
# DictsPathType = NewType("DictsPath", str)
# def open_file_json(path):
# with open(path, "r") as f:
# return json.load(f)
# class LoadDicts:
# def __init__(self, dict_path: DictsPathType = "./data"):
# Dicts_glob = glob(f"{dict_path}/*.json")
# self.List = []
# self.Dict = {}
# for path_json in Dicts_glob:
# name = path_json.split("/")[-1].replace(".json", "")
# self.List.append(name)
# self.Dict[name] = open_file_json(path_json)
# setattr(self, name, self.Dict[name])
# Run this cell before close.
%watermark -d --iversion -b -r -g -m -v
!cat /proc/cpuinfo |grep 'model name'|head -n 1 |sed -e 's/model\ name/CPU/'
!free -h |cut -d'i' -f1 |grep -v total
###Output
Python implementation: CPython
Python version : 3.9.6
IPython version : 7.26.0
Compiler : GCC 8.3.0
OS : Linux
Release : 5.11.0-7620-generic
Machine : x86_64
Processor :
CPU cores : 8
Architecture: 64bit
Git hash: 38749d73a7d8f2b4d7906687d79829b7ff7b69d3
Git repo: https://github.com/ysraell/creditcardfraud.git
Git branch: main
pandas: 1.3.1
numpy : 1.19.5
CPU : Intel(R) Xeon(R) CPU E3-1241 v3 @ 3.50GHz
Mem: 31G
Swap: 4.0G
###Markdown
Initial search
###Code
#
n_jobs = 4
#
N_fraud_test = 200
N_truth_test = int(2e4)
N_truth_train = int(2e5)
#
split_seeds = [13, 17, 47, 53]
# random_state used by RandomForestClassifier
random_state = 42
# Number of trees in random forest
n_estimators = [200, 400, 800]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Minimum number of samples required to split a node
min_samples_split = [2, 8]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 4]
# Method of selecting samples for training each tree
bootstrap = [True]
# Create the random grid
search_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(search_grid)
target_col = 'Class'
ds_cols = ['V1', 'V2', 'V3', 'V4', 'V5', 'V6', 'V7', 'V8', 'V9', 'V10', 'V11', 'V12', 'V13', 'V14', 'V15', 'V16', 'V17', 'V18', 'V19', 'V20', 'V21', 'V22', 'V23', 'V24', 'V25', 'V26', 'V27', 'V28', 'Amount']
glob_paths = glob('/work/data/creditcard*.csv')
total_exps = len(glob_paths)*len(split_seeds)*len(ParameterGrid(search_grid))
print(total_exps)
with tqdm(total=total_exps) as progress_bar:
def RunGrid(df_train, df_test, random_state):
out = []
for params in ParameterGrid(search_grid):
params['random_state'] = random_state
params['n_jobs'] = n_jobs
rf = RandomForestClassifier(**params)
rf.fit(df_train[ds_cols].to_numpy(), df_train[target_col].to_numpy())
probs = rf.predict_proba(df_test[ds_cols].to_numpy())
exp = {
'probs' : probs,
'rf_classes': rf.classes_,
'params': params
}
out.append(exp)
progress_bar.update(1)
return out
Results = {}
for ds_path in glob_paths:
df = pd.read_csv(ds_path)
df = df[ds_cols+[target_col]]
df_fraud = df.query('Class == 1').reset_index(drop=True).copy()
df_truth = df.query('Class == 0').reset_index(drop=True).copy()
del df
set_exp = {}
for seed in split_seeds:
df_fraud_train, df_fraud_test = train_test_split(df_fraud, test_size=N_fraud_test, random_state=seed)
df_truth_train, df_truth_test = train_test_split(df_truth, train_size=N_truth_train, test_size=N_truth_test, random_state=seed)
df_train = pd.concat([df_fraud_train, df_truth_train]).reset_index(drop=True)
df_test = pd.concat([df_fraud_test, df_truth_test]).reset_index(drop=True)
out = RunGrid(df_train, df_test, random_state)
set_exp[seed] = {
'target_test': df_test[target_col].to_numpy(),
'exps': out
}
Results[ds_path] = set_exp
cols_results = ['ds_path', 'seed']
cols_param = ['bootstrap', 'max_features', 'min_samples_leaf', 'min_samples_split', 'n_estimators', 'random_state']
cols_metrics = ['Fraud_True_Sum','Truth_False_Sum', 'Fraud_False_Sum', 'F1_M', 'AUC_ROC_M', 'TP_0', 'TP_1']
cols = cols_results+cols_param+cols_metrics
', '.join(cols_metrics)
''.join([ f'param[\'{col}\'], ' for col in cols_param])
data = []
for ds_path,sets_exp in Results.items():
for seed,set_exp in sets_exp.items():
target_test = set_exp['target_test']
for exp in set_exp['exps']:
df_exp = pd.DataFrame(exp['probs'], columns=exp['rf_classes'])
df_exp['pred'] = df_exp[[0, 1]].apply(lambda x: exp['rf_classes'][np.argmax(x)], axis=1)
df_exp['target'] = target_test
Fraud_True_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)][1].sum()/sum(df_exp.target == 1)
Truth_False_Sum = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 1)][0].sum()/sum(df_exp.target == 1)
Fraud_False_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 0)][1].sum()/sum(df_exp.target == 0)
F1_M = f1_score(target_test, df_exp['pred'].to_numpy(), average='macro')
AUC_ROC_M = roc_auc_score(target_test, df_exp['pred'].to_numpy(), average='macro')
TP_0 = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 0)].shape[0]/sum(df_exp.target == 0)
TP_1 = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)].shape[0]/sum(df_exp.target == 1)
param = exp['params']
data.append([
ds_path, seed,
param['bootstrap'], param['max_features'], param['min_samples_leaf'],
param['min_samples_split'], param['n_estimators'], param['random_state'],
Fraud_True_Sum, Truth_False_Sum, Fraud_False_Sum, F1_M, AUC_ROC_M, TP_0, TP_1
])
df_Results = pd.DataFrame(data, columns=cols)
#df_Results.to_csv('/work/data/Results_creditcard_Init.csv', index=False)
df_Results.to_csv('/work/data/Results_creditcard_Init.csv', index=False)
df_Results = pd.read_csv('/work/data/Results_creditcard_Init.csv')
map_ds_path = {
'/work/data/creditcard_trans_float.csv': 'Float',
'/work/data/creditcard.csv': 'Original',
'/work/data/creditcard_trans_int.csv': 'Integer'
}
df_Results['DS'] = df_Results.ds_path.apply(lambda x: map_ds_path[x])
for metric in cols_metrics:
md(f'# {metric}')
display(df_Results.sort_values(metric, ascending=False).head(20)[['DS', 'seed']+cols_param[:-1]+cols_metrics])
for col in cols_param[:-1]:
md(f'# {col}')
display(df_Results[['DS', col]+cols_metrics].groupby(['DS', col]).mean())
###Output
_____no_output_____
###Markdown
Baseline model.
###Code
#
N_fraud_test = 200
N_truth_test = int(2e4)
N_truth_train = int(2e5)
#
split_seeds = [13, 17, 19, 41]
# random_state used by RandomForestClassifier
random_state = 42
# Number of trees in random forest
n_estimators = [100, 200, 400, 800, 100]
# Number of features to consider at every split
max_features = ['auto']
# Minimum number of samples required to split a node
min_samples_split = [2]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1]
# Method of selecting samples for training each tree
bootstrap = [True]
# Create the random grid
search_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(search_grid)
glob_paths = ['/work/data/creditcard_trans_int.csv']
total_exps = len(glob_paths)*len(split_seeds)*len(ParameterGrid(search_grid))
print(total_exps)
with tqdm(total=total_exps) as progress_bar:
def RunGrid(df_train, df_test, random_state):
out = []
for params in ParameterGrid(search_grid):
params['random_state'] = random_state
params['n_jobs'] = n_jobs
rf = RandomForestClassifier(**params)
rf.fit(df_train[ds_cols].to_numpy(), df_train[target_col].to_numpy())
probs = rf.predict_proba(df_test[ds_cols].to_numpy())
exp = {
'probs' : probs,
'rf_classes': rf.classes_,
'params': params
}
out.append(exp)
progress_bar.update(1)
return out
Results = {}
for ds_path in glob_paths:
df = pd.read_csv(ds_path)
df = df[ds_cols+[target_col]]
df_fraud = df.query('Class == 1').reset_index(drop=True).copy()
df_truth = df.query('Class == 0').reset_index(drop=True).copy()
del df
set_exp = {}
for seed in split_seeds:
df_fraud_train, df_fraud_test = train_test_split(df_fraud, test_size=N_fraud_test, random_state=seed)
df_truth_train, df_truth_test = train_test_split(df_truth, train_size=N_truth_train, test_size=N_truth_test, random_state=seed)
df_train = pd.concat([df_fraud_train, df_truth_train]).reset_index(drop=True)
df_test = pd.concat([df_fraud_test, df_truth_test]).reset_index(drop=True)
out = RunGrid(df_train, df_test, random_state)
set_exp[seed] = {
'target_test': df_test[target_col].to_numpy(),
'exps': out
}
Results[ds_path] = set_exp
data = []
for ds_path,sets_exp in Results.items():
for seed,set_exp in sets_exp.items():
target_test = set_exp['target_test']
for exp in set_exp['exps']:
df_exp = pd.DataFrame(exp['probs'], columns=exp['rf_classes'])
df_exp['pred'] = df_exp[[0, 1]].apply(lambda x: exp['rf_classes'][np.argmax(x)], axis=1)
df_exp['target'] = target_test
Fraud_True_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)][1].sum()/sum(df_exp.target == 1)
Truth_False_Sum = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 1)][0].sum()/sum(df_exp.target == 1)
Fraud_False_Sum = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 0)][1].sum()/sum(df_exp.target == 0)
F1_M = f1_score(target_test, df_exp['pred'].to_numpy(), average='macro')
AUC_ROC_M = roc_auc_score(target_test, df_exp['pred'].to_numpy(), average='macro')
TP_0 = df_exp.loc[(df_exp.pred == 0) & (df_exp.target == 0)].shape[0]/sum(df_exp.target == 0)
TP_1 = df_exp.loc[(df_exp.pred == 1) & (df_exp.target == 1)].shape[0]/sum(df_exp.target == 1)
param = exp['params']
data.append([
ds_path, seed,
param['bootstrap'], param['max_features'], param['min_samples_leaf'],
param['min_samples_split'], param['n_estimators'], param['random_state'],
Fraud_True_Sum, Truth_False_Sum, Fraud_False_Sum, F1_M, AUC_ROC_M, TP_0, TP_1
])
df_Results = pd.DataFrame(data, columns=cols)
df_Results.to_csv('/work/data/Results_creditcard_Baseline.csv', index=False)
df_Results
for metric in cols_metrics:
md(f'# {metric}')
display(df_Results.sort_values(metric, ascending=False).head(20)[cols_param[:-1]+cols_metrics])
###Output
_____no_output_____
###Markdown
Normalize text without Language Model
We are using string similarity (Normalized Levenshtein score) to replace out-of-vocabulary words.
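For intuition (illustrative values, not part of the original notebook): the normalized score is one minus the edit distance divided by the length of the longer string, so near-misses score close to 1 and unrelated words close to 0.

```python
from strsimpy.normalized_levenshtein import NormalizedLevenshtein

nl = NormalizedLevenshtein()
print(nl.similarity("helo", "hello"))  # 0.8  -> one edit over a length-5 string
print(nl.similarity("cat", "dog"))     # 0.0  -> three edits over a length-3 string
```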
###Code
from strsimpy.normalized_levenshtein import NormalizedLevenshtein
from nltk.corpus import words
import string
import nltk
nltk.download('punkt')
nltk.download('words')
normalized_levenshtein = NormalizedLevenshtein()
def levensthein_score(word1, word2):
return normalized_levenshtein.similarity(word1, word2)
vocab = [x.lower() for x in words.words()]
len(vocab )
from nltk.tokenize import word_tokenize
def normalize_data(text):
words = word_tokenize(text)
normalized_text=''
for word in words:
if word in string.punctuation:
normalized_text = normalized_text+word
elif word not in vocab:
word=word.lower()
max_score=0
replace_word=word
for v in vocab:
score = levensthein_score(word, v)
if score>max_score:
max_score=score
replace_word=v
normalized_text = normalized_text + ' ' + replace_word
else:
normalized_text = normalized_text + ' ' + word
return normalized_text.strip()
###Output
_____no_output_____
###Markdown
Evaluation on Test Data
###Code
import pandas as pd
import os
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
pd.set_option('display.max_colwidth', None)
path = 'drive/MyDrive/CS685'
input_df = pd.read_csv(os.path.join(path, "xsum_test_baseline.csv"),sep='\t')
input_df.head(2)
test_df = input_df
test_df['normalized_text'] = test_df['text'].apply(normalize_data)
test_df.to_csv(os.path.join(path, "xsum_test_pred_baseline.csv"), index=False, sep='\t')
test_df.head(5)
test_df = pd.read_csv(os.path.join(path, "xsum_test_pred_baseline.csv"),sep='\t')
import difflib
def get_dissimilar_spans(orig_words, gt_words, pred_words):
gt_matcher = difflib.SequenceMatcher(a=orig_words, b=gt_words)
pred_matcher = difflib.SequenceMatcher(a=gt_words, b=pred_words)
orig_spans = []
gt_spans = []
pred_spans = []
mismatch_spans = []
for codes in gt_matcher.get_opcodes():
op,a_start,a_end,b_start,b_end = codes
if op == 'replace':
orig_spans.append(" ".join(orig_words[a_start:a_end]))
gt_spans.append(" ".join(gt_words[b_start:b_end]))
for codes in pred_matcher.get_opcodes():
op,a_start,a_end,b_start,b_end = codes
if op == 'replace':
pred_spans.append(" ".join(pred_words[b_start:b_end]))
mismatch_spans.append(" ".join(gt_words[a_start:a_end]))
return orig_spans, gt_spans, pred_spans, mismatch_spans
def get_stats_for_predictions(orig_text, gt_text, pred_text):
orig_words = nltk.word_tokenize(orig_text)
gt_words = nltk.word_tokenize(gt_text)
pred_words = nltk.word_tokenize(pred_text)
orig_words = [word.lower().strip() for word in orig_words]
gt_words = [word.lower().strip() for word in gt_words]
pred_words = [word.lower().strip() for word in pred_words]
correct_preds = []
wrong_preds = []
changed_orig_words = []
changed_gt_words = []
replaced_word_cnt = 0
correct_pred_cnt = 0
if len(orig_words)!= len(gt_words):
print(orig_text)
print(gt_text)
elif len(gt_words)!=len(pred_words):
orig_spans, gt_spans, pred_spans, mismatch_spans = get_dissimilar_spans(orig_words, gt_words, pred_words)
wrong_preds = pred_spans
changed_orig_words = orig_spans
changed_gt_words = gt_spans
replaced_word_cnt = len(gt_spans)
correct_pred_cnt = len(gt_spans) - len(mismatch_spans)
correct_preds = list(set(gt_spans)-set(mismatch_spans))
else:
for i in range(len(orig_words)):
orig_word = orig_words[i]
gt_word = gt_words[i]
pred_word = pred_words[i]
if orig_word != gt_word:
changed_orig_words.append(orig_word)
changed_gt_words.append(gt_word)
replaced_word_cnt = replaced_word_cnt+1
if pred_word == gt_word:
correct_preds.append(pred_word)
correct_pred_cnt = correct_pred_cnt+1
else:
wrong_preds.append(pred_word)
return {"replaced_gt_words":changed_gt_words,
"replaced_original_words": changed_orig_words,
"replaced_word_count": replaced_word_cnt,
"correct_predictions": correct_preds,
"correct_prediction_count": correct_pred_cnt,
"wrong_predictions": wrong_preds}
def get_accuracy_df(input_df, pred_df):
pred_df = pred_df.drop(columns=['gt_text', 'text'])
df = pd.concat([input_df, pred_df], axis=1)
df["Stats"] = df.apply(lambda x: get_stats_for_predictions(x["text"], x["gt_text"],x["normalized_text"]), axis = 1)
df = pd.concat([df.drop(['Stats'], axis=1), df['Stats'].apply(pd.Series)], axis=1)
return df
stat_df = get_accuracy_df(input_df,test_df)
stat_df.head(5)
print(f"Total incorrect tokens: {stat_df['replaced_word_count'].sum()}\n Total correct predictions: {stat_df['correct_prediction_count'].sum()} \nTest accuracy: {stat_df['correct_prediction_count'].sum()/stat_df['replaced_word_count'].sum()}")
###Output
Total incorrect tokens: 232
Total correct predictions: 72
Test accuracy: 0.3103448275862069
###Markdown
**Description:** This notebook is used to show how different methods score on the raw data set. We just use the basic, necessary preprocessing (scaler and encoder) and try a few models. Later on we add a baseline artificial neural network on a slightly different data set. This notebook is just to get an impression of the results and see the performances.
**Project Name:** Churn Prediction - Die Zeit
**Team:** Carlotta Ulm, Silas Mederer, Jonas Bechthold
**Date:** 2020-10-26 to 2020-11-27
Base model supervised learning
###Code
# data analysis and wrangling
import pandas as pd
import numpy as np
import math
import IPython
# own modules
import eda_methods as eda
# visualization
import seaborn as sns
sns.set(style="white")
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
from pandas.plotting import scatter_matrix
# warnings handler
import warnings
warnings.filterwarnings("ignore")
# Machine Learning Libraries
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.metrics import fbeta_score, accuracy_score, f1_score, recall_score, precision_score
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import make_scorer
from sklearn.model_selection import KFold
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
# Pipeline
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder, MinMaxScaler
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv('data/f_chtr_churn_traintable_nf_2.csv')
df.drop("Unnamed: 0", axis=1, inplace=True)
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 209043 entries, 0 to 209042
Columns: 170 entries, auftrag_new_id to date_x
dtypes: float64(32), int64(120), object(18)
memory usage: 271.1+ MB
###Markdown
Check
###Code
null_rel = round(df.isin([0]).sum() / df.shape[0]*100,2)
null_rel = null_rel.to_frame()
null_rel.rename(columns={0: "zeros %"}, inplace=True)
eda.meta(df).T.join(null_rel).head()
print(f"shape {df.shape}")
continues = df.select_dtypes(include=['float64','int64'])
print(f"numeric features {len(continues.columns)}")
categorial = df.select_dtypes(include="object")
print(f"object features {len(categorial.columns)}")
###Output
shape (209043, 170)
numeric features 152
object features 18
###Markdown
For further information about the data, check the [EDA Notebook](https://github.com/jb-ds2020/nf-ds3-capstone-churn-prevention/blob/main/Capstone_Zeit_EDA.ipynb).
Check for correlations
###Code
df.drop('churn', axis=1).corrwith(df.churn).sort_values().plot(kind='barh',figsize=(10, 50));
###Output
_____no_output_____
###Markdown
Drop features
###Code
print(df.shape)
df.drop(["auftrag_new_id", "avg_churn", "ort", "date_x", "kuendigungs_eingangs_datum", "abo_registrierung_min", "training_set"], axis=1, inplace=True)
print(df.shape)
###Output
(209043, 170)
(209043, 163)
###Markdown
Sample & split
###Code
# draw a 20% sample of the data
twenty_percent = df.sample(int(round(len(df) / 5)))
print(twenty_percent.shape)
twenty_percent.head()
y = df['churn']
X = df.drop('churn', axis = 1)
X.shape
###Output
_____no_output_____
###Markdown
Pipeline
###Code
def pipeline(X,y):
# devide features
categoric_features = list(X.columns[X.dtypes==object])
categoric_features
numeric_features = list(X.columns[X.dtypes != object])
numeric_features
# split train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# defining the models
models = [
LogisticRegression(n_jobs=-1),
KNeighborsClassifier(),
SVC(),
DecisionTreeClassifier(),
RandomForestClassifier(n_jobs=-1),
XGBClassifier(n_jobs=-1),
AdaBoostClassifier()
]
# create preprocessors
numeric_transformer = Pipeline(steps=[
('imputer_num', SimpleImputer(missing_values=np.nan, strategy='mean')),
('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
('imputer_cat', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categoric_features)
])
# process pipeline for every model
for model in models:
print(f"\n " + str(model))
pipe = Pipeline(steps=[('preprocessor', preprocessor),
('classifier', model)
])
# fit model
pipe.fit(X_train, y_train)
#predict results
y_train_pred = cross_val_predict(pipe, X_train, y_train, cv=5)
y_test_pred = pipe.predict(X_test)
# print results
# print("\nResults on training data: ")
# print(classification_report(y_train, y_train_pred))
print("\nResults on test data:")
print(classification_report(y_test, y_test_pred))
# print("\nConfusion matrix on test")
# print(confusion_matrix(y_test, y_test_pred))
# print("\n")
# plot heatmap
conf_mat = pd.crosstab(np.ravel(y_train), np.ravel(y_train_pred),
colnames=["Predicted"], rownames=["Actual"])
sns.heatmap(conf_mat/np.sum(conf_mat), annot=True, cmap="Blues", fmt=".2%")
plt.show()
plt.close()
# print balance and sizes
print ("Testing set has {} features.".format(X_test.shape[1]))
eda.plot_train_test_split(y,y_train,y_test)
###Output
_____no_output_____
###Markdown
We used a list of the most common classifiers, just to get an impression of their performance on the dataset. We will use Logistic Regression as the base model; Die Zeit Verlag Hamburg uses a similar method for their churn prediction. We also need this overview to find the models that can handle our dataset, which we will then improve for our best solution. These are the models:
- LogisticRegression
- KNeighborsClassifier
- SVC
- DecisionTreeClassifier
- RandomForestClassifier
- XGBClassifier
- AdaBoostClassifier
Results
###Code
pipeline(X,y)
###Output
LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
intercept_scaling=1, max_iter=100, multi_class='warn', n_jobs=-1,
penalty='l2', random_state=None, solver='warn', tol=0.0001,
verbose=0, warm_start=False)
Results on test data:
precision recall f1-score support
0 0.79 0.92 0.85 35702
1 0.74 0.48 0.58 16559
micro avg 0.78 0.78 0.78 52261
macro avg 0.77 0.70 0.72 52261
weighted avg 0.78 0.78 0.77 52261
###Markdown
Base model deep learning: Artificial Neural Network
###Code
# ANN
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, Flatten
from keras.metrics import Recall, Precision
from keras.callbacks import TensorBoard
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow_docs as tfdocs
import tensorflow_docs.modeling
import tensorflow_docs.plots
###Output
Using TensorFlow backend.
###Markdown
Data preparation
###Code
df = pd.read_csv('data/df_clean_engineered_all.csv')
y = df['churn']
df = df.drop(['churn','plz_3','abo_registrierung_min','nl_registrierung_min','ort'], axis = 1)
X = pd.get_dummies(df, columns = ['kanal', 'objekt_name', 'aboform_name', 'zahlung_rhythmus_name',
'zahlung_weg_name', 'plz_1', 'plz_2', 'land_iso_code',
'anrede','titel'], drop_first = True)
# devide features
categoric_features = list(X.columns[X.dtypes==object])
numeric_features = list(X.columns[X.dtypes != object])
# split train and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1,stratify=y)
#create preprocessors
numeric_transformer = Pipeline(steps=[
('imputer_num', SimpleImputer(strategy='median')),
('scaler', MinMaxScaler())
#('scaler', StandardScaler())
])
categorical_transformer = Pipeline(steps=[
('imputer_cat', SimpleImputer(strategy='constant', fill_value='missing')),
('onehot', OneHotEncoder(handle_unknown='ignore'))
])
preprocessor = ColumnTransformer(
transformers=[
('num', numeric_transformer, numeric_features),
('cat', categorical_transformer, categoric_features)
])
preprocessor.fit(X_train)
X_train = preprocessor.transform(X_train)
X_test = preprocessor.transform(X_test)
###Output
_____no_output_____
###Markdown
Setup: ANN
###Code
units = int((X.shape[1] + 1) / 2)
input_dim = X.shape[1]
print(units)
print(input_dim)
cb_epoch_dots = tfdocs.modeling.EpochDots(report_every=100)
Recall = tf.keras.metrics.Recall(name='recall')
Precision = tf.keras.metrics.Precision(name='precision')
Loss = tf.keras.losses.BinaryCrossentropy(name='binary_crossentropy')
nn = Sequential()
nn.add(Dense(units, input_dim=input_dim, activation='relu'))
nn.add(Dense(units,activation='relu'))
nn.add(Dense(1, activation='sigmoid'))
nn.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 154) 47432
_________________________________________________________________
dense_2 (Dense) (None, 154) 23870
_________________________________________________________________
dense_3 (Dense) (None, 1) 155
=================================================================
Total params: 71,457
Trainable params: 71,457
Non-trainable params: 0
_________________________________________________________________
###Markdown
It is very common to use the formula (features + 1) / 2 to calculate the number of neurons. So we will use an ANN with two hidden layers of 154 neurons each and a single output neuron. Adam is the standard optimizer, and the combination relu, relu, sigmoid is the most commonly recommended set of activation functions we could find for binary classification.
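As a quick numeric check of that rule of thumb (the 307 is read off the model summary above, where the first Dense layer has 47,432 = 154 x (307 + 1) parameters):

```python
n_features = 307                   # one-hot-encoded feature columns in this run
units = int((n_features + 1) / 2)  # rule of thumb: (features + 1) / 2
print(units)                       # -> 154
```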
###Code
adam = tf.keras.optimizers.Adam()
nn.compile(loss="binary_crossentropy",
optimizer=adam,
metrics=[Loss, Recall, Precision, 'accuracy'])
nn_history = nn.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=1000,
batch_size=1024, callbacks=[cb_epoch_dots],verbose=0)
###Output
Epoch: 0, accuracy:0.7275, binary_crossentropy:0.5474, loss:0.5473, precision:0.5537, recall:0.2030, val_accuracy:0.7350, val_binary_crossentropy:0.5368, val_loss:0.5369, val_precision:0.6048, val_recall:0.3016,
...................................................................................................
Epoch: 100, accuracy:0.8683, binary_crossentropy:0.3110, loss:0.3112, precision:0.7397, recall:0.5835, val_accuracy:0.7655, val_binary_crossentropy:0.5875, val_loss:0.5866, val_precision:0.7399, val_recall:0.5839,
....................................................................................................
Epoch: 200, accuracy:0.9028, binary_crossentropy:0.2419, loss:0.2415, precision:0.7681, recall:0.6474, val_accuracy:0.7695, val_binary_crossentropy:0.7064, val_loss:0.7043, val_precision:0.7683, val_recall:0.6476,
....................................................................................................
Epoch: 300, accuracy:0.9235, binary_crossentropy:0.1969, loss:0.1968, precision:0.7860, recall:0.6855, val_accuracy:0.7607, val_binary_crossentropy:0.8344, val_loss:0.8296, val_precision:0.7861, val_recall:0.6857,
....................................................................................................
Epoch: 400, accuracy:0.9397, binary_crossentropy:0.1628, loss:0.1629, precision:0.7991, recall:0.7128, val_accuracy:0.7700, val_binary_crossentropy:0.9613, val_loss:0.9557, val_precision:0.7992, val_recall:0.7129,
....................................................................................................
Epoch: 500, accuracy:0.9516, binary_crossentropy:0.1356, loss:0.1356, precision:0.8095, recall:0.7340, val_accuracy:0.7697, val_binary_crossentropy:1.1091, val_loss:1.0999, val_precision:0.8095, val_recall:0.7340,
....................................................................................................
Epoch: 600, accuracy:0.9604, binary_crossentropy:0.1141, loss:0.1141, precision:0.8180, recall:0.7512, val_accuracy:0.7719, val_binary_crossentropy:1.2657, val_loss:1.2567, val_precision:0.8181, val_recall:0.7512,
....................................................................................................
Epoch: 700, accuracy:0.9685, binary_crossentropy:0.0956, loss:0.0955, precision:0.8254, recall:0.7655, val_accuracy:0.7686, val_binary_crossentropy:1.4304, val_loss:1.4201, val_precision:0.8254, val_recall:0.7656,
....................................................................................................
Epoch: 800, accuracy:0.9740, binary_crossentropy:0.0815, loss:0.0814, precision:0.8317, recall:0.7778, val_accuracy:0.7703, val_binary_crossentropy:1.6112, val_loss:1.5984, val_precision:0.8317, val_recall:0.7779,
....................................................................................................
Epoch: 900, accuracy:0.9806, binary_crossentropy:0.0662, loss:0.0659, precision:0.8374, recall:0.7884, val_accuracy:0.7655, val_binary_crossentropy:1.8080, val_loss:1.7941, val_precision:0.8374, val_recall:0.7884,
....................................................................................................
###Markdown
Evaluation
###Code
y_pred_proba = nn.predict(X_test)
y_pred = (y_pred_proba > 0.5)
conf_matrix = confusion_matrix(y_test, y_pred)
print(classification_report(y_test,y_pred))
fig = plt.figure(figsize=(7,5))
ax = fig.add_axes([0,0,1,1])
metrics = ['Accuracy', 'Recall', 'Precision', 'f1']
scores = [0.76, 0.65, 0.60, 0.63]
ax.bar(metrics, scores)
ax.set_ylabel('scores',fontsize= 16)
ax.set_xlabel('Metric',fontsize= 16)
ax.set_title('Baseline ANN')
for i, v in enumerate(scores):
ax.text( i ,v, str(v), color='black', fontweight='bold', fontsize=12)
plt.show()
group_names = ['True Neg','False Pos','False Neg','True Pos']
group_counts = ["{0:0.0f}".format(value) for value in
conf_matrix.flatten()]
group_percentages = ["{0:.2%}".format(value) for value in
conf_matrix.flatten()/np.sum(conf_matrix)]
labels = [f"{v1}\n{v2}\n{v3}" for v1, v2, v3 in
zip(group_names,group_counts,group_percentages)]
labels = np.asarray(labels).reshape(2,2)
sns.set(font_scale=1.6)
fig, ax = plt.subplots(figsize=(8,7))
sns.heatmap(conf_matrix, annot=labels, fmt='', cmap='Blues',annot_kws={"size": 16})
plt.title('Confusion Matrix: NN', fontsize = 16); # title with fontsize 20
plt.xlabel('Predicted', fontsize = 16);
plt.ylabel('Actual', fontsize = 16);
conf_matrix
###Output
_____no_output_____
###Markdown
Plots over epochs
###Code
plt.plot(nn_history.history['loss'])
plt.plot(nn_history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
plt.plot(nn_history.history['accuracy'])
plt.plot(nn_history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Here we can see that our model tends to overfit: the discrepancy between train (red) and test (blue) is around 25%, and it needs to be reduced. One possible remedy is sketched below.
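A hedged sketch of one countermeasure (not part of the original run; it reuses the `nn` model and data defined above): early stopping on the validation loss halts training before the network memorizes the training set; stronger dropout or L2 regularization would be further options.

```python
from keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
nn_history = nn.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=1000, batch_size=1024,
                    callbacks=[early_stop], verbose=0)
```

Plot ANN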
###Code
from keras.utils.vis_utils import plot_model
plot_model(nn, show_shapes=True, show_layer_names=True)
###Output
_____no_output_____ |
01.Getting-Started.ipynb | ###Markdown
Deep Learning on JuiceFS Tutorial - 01. Getting Started
JuiceFS is a shared POSIX file system for the cloud. You may replace existing solutions with JuiceFS at zero cost, turning any object store into a shared POSIX file system. Sign up for a 1T free quota now at https://juicefs.com
Source code of this tutorial can be found at https://github.com/juicedata/juicefs-dl-tutorial
0. Requirements
It's very easy to set up JuiceFS on your remote HPC machine, Google Colab, or CoCalc by inserting just one line of command into your Jupyter Notebook:
###Code
!curl -sL https://juicefs.com/static/juicefs -o juicefs && chmod +x juicefs
###Output
_____no_output_____
###Markdown
Here we go, let's try the magic of JuiceFS!
1. Mounting your JuiceFS
After creating your JuiceFS volume by following the [documentation here](https://juicefs.com/docs/en/getting_started.html), you have two ways to mount your JuiceFS:
1.1 The secure way
Just run the mount command and enter your access key and secret key from the public cloud or storage provider. This approach is for people who want to collaborate with others while protecting credentials. It also lets your teammates use their own JuiceFS volume, and lets you share the notebook publicly.
###Code
!./juicefs mount {JFS_VOLUMN_NAME} /jfs
###Output
_____no_output_____
###Markdown
1.2 The convenient way
Maybe, however, you are working alone, have no worries about leaking credentials, and don't want the annoyance of entering credentials every time the kernel restarts. In that case, you can save your token and access secrets in your notebook: just change the corresponding fields in the following command to your own.
###Code
!./juicefs auth --token {JUICEFS_TOKEN} --accesskey {ACCESSKEY} --secretkey {SECRETKEY} JuiceFS
!./juicefs mount -h
###Output
_____no_output_____
###Markdown
2. Preparing dataset
Okay, let's assume you have already mounted your JuiceFS volume. You can test it by listing your files here.
###Code
!ls /jfs
###Output
mnist.npz
###Markdown
You have many ways to get data into your JuiceFS volume: mounting it on your local machine and simply dragging and dropping files, mounting it on cloud servers and writing data, or crawling data and saving it directly. Here we take the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) (with a training set of 60,000 images and a test set of 10,000 images) as an example. If you do not have the MNIST dataset ready, you can execute the following block:
###Code
!curl -sL https://s3.amazonaws.com/img-datasets/mnist.npz -o /jfs/mnist.npz
###Output
_____no_output_____
###Markdown
3. Training model Once we have got our dataset ready in JuiceFS, we can begin the training process.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import warnings
warnings.simplefilter(action='ignore')
###Output
Using TensorFlow backend.
###Markdown
First, load the MNIST dataset from the JuiceFS volume.
###Code
with np.load('/jfs/mnist.npz') as f:
X_train, y_train = f['x_train'], f['y_train']
X_test, y_test = f['x_test'], f['y_test']
###Output
_____no_output_____
###Markdown
Visualize some data to ensure we have successfully loaded data from JuiceFS.
###Code
sns.countplot(y_train)
fig, ax = plt.subplots(6, 6, figsize = (12, 12))
fig.suptitle('First 36 images in MNIST')
fig.tight_layout(pad = 0.3, rect = [0, 0, 0.9, 0.9])
for x, y in [(i, j) for i in range(6) for j in range(6)]:
ax[x, y].imshow(X_train[x + y * 6].reshape((28, 28)), cmap = 'gray')
ax[x, y].set_title(y_train[x + y * 6])
###Output
_____no_output_____
###Markdown
Cool! We have successfully loaded the MNIST dataset from JuiceFS! Let's train a CNN model.
###Code
batch_size = 128
num_classes = 10
epochs = 12
img_rows, img_cols = 28, 28
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
model.fit(X_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(X_test, y_test))
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Test loss: 0.029089462164806172
Test accuracy: 0.9914
###Markdown
4. Saving the model Awesome! We have trained a simple CNN model; now let's write the model back to JuiceFS. Thanks to JuiceFS's POSIX compatibility, we can simply save the model as usual. No additional effort is needed.
###Code
model.save('/jfs/mnist_model.h5')
###Output
_____no_output_____
###Markdown
5. Loading the model Suppose you want to debug the model on your local machine or keep it in sync with a production environment. You can load your model from JuiceFS on any machine in real time: JuiceFS's strong consistency ensures that all confirmed changes to your data are reflected on other machines immediately.
###Code
from keras.models import load_model
model_from_jfs = load_model('/jfs/mnist_model.h5')
###Output
_____no_output_____
###Markdown
We have successfully loaded our previous model from JuiceFS. Now let's randomly pick an image from the test dataset and use the loaded model to make a prediction.
###Code
import random
pick_idx = random.randint(0, X_test.shape[0])
###Output
_____no_output_____
###Markdown
What image have we picked?
###Code
plt.imshow(X_test[pick_idx].reshape((28, 28)), cmap = 'gray')
###Output
_____no_output_____
###Markdown
Let's make a prediction using the model loaded from JuiceFS.
###Code
y_pred = np.argmax(model_from_jfs.predict(np.expand_dims(X_test[pick_idx], axis=0)))
print(f'Prediction: {y_pred}')
###Output
Prediction: 1
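###Markdown
As a quick sanity check, we can compare the prediction with the true label (`y_test` was one-hot encoded above, so `argmax` recovers the digit):
###Code
print(f'Actual: {np.argmax(y_test[pick_idx])}')
###Output
_____no_output_____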
|
contributors/Giulia_Mazzotti/spatiotemporal_variability.ipynb | ###Markdown
Thermal data exploration notebook *Author: Giulia Mazzotti - heavily relying on Steven Pestana's amazing Thermal IR tutorials!* Main goal - scientific On February 8th, airborne thermal IR data was acquired over the Grand Mesa domain during multiple overpasses. The goal of this notebook is to kick off working with this dataset. Some initial exploratory analysis is done, and ideas for further data mining are provided. We consider two aspects: 1. How do surface temperatures compare to ground measurements over time? 2. How do surface temperatures vary in space and time in forested and open areas? Personal goal First steps with Python and Jupyter, and getting familiar with SnowEx thermal data! 0. Preparatory steps So... we need some data! We're going to work with two datasets: 1. A NetCDF file including all airborne acquisitions of that day 2. Temperature data from a snow pit that had stationary sensors --- **>>** Download a sample airborne IR NetCDF file that contains 17 image mosaics from the morning of Feb. 8th, 2020. (Start by downloading [driveanon](https://github.com/friedrichknuth/driveanon) to fetch the sample file from Google Drive using "pip install")
###Code
%%capture
!pip install git+https://github.com/friedrichknuth/driveanon.git
# import driveanon
import driveanon as da
# download and save the file
folder_blob_id = '1BYz63HsSilPcQpCWPNZOp62ZZU2OdeWO'
file_names, file_blob_ids = da.list_blobs(folder_blob_id,'.nc')
print(file_names, file_blob_ids)
da.save(file_blob_ids[0])
###Output
['SNOWEX2020_IR_PLANE_2020Feb08_mosaicked_APLUW_v2.nc'] ['1Rgw7y7hmnefZyMQXosQvF0g_rJRkoMPx']
###Markdown
Download the pit data
###Code
# import the temp data (for lack of a better approach)
!aws s3 sync --quiet s3://snowex-data/tutorial-data/thermal-ir/ /tmp/thermal-ir/
###Output
_____no_output_____
###Markdown
**>>**----- Import packages we're going to need
###Code
# import xarray and rioxarray packages to work with the airborne raster data
import xarray as xr
import rioxarray
import matplotlib.pyplot as plt
# Import some general-purpose packages for handling different data structures
import numpy as np # for working with n-D arrays
import pandas as pd # for reading our csv data file and working with tabular data
# Import some packages for working with the SnowEx SQL database
from snowexsql.db import get_db # Import the connection function from the snowexsql library
from snowexsql.data import SiteData # Import the table classes from our data module which is where our ORM classes are defined
from datetime import date # Import some tools to build dates
from snowexsql.conversions import query_to_geopandas # Import a useful function for plotting and saving queries! See https://snowexsql.readthedocs.io/en/latest/snowexsql.html#module-snowexsql.conversions
###Output
_____no_output_____
###Markdown
1. Comparing time series of airborne images to snowpit data Open and inspect the NetCDF file containing airborne IR time series
###Code
# open the NetCDF file as dataset
ds = xr.open_dataset('SNOWEX2020_IR_PLANE_2020Feb08_mosaicked_APLUW_v2.nc')
ds
###Output
_____no_output_____
###Markdown
This command opens the NetCDF file and creates a dataset that contains all 17 airborne thermal IR acquisitions throughout the day on February 8th, as can be seen by printing the timestamps. Some data conversion tasks are now necessary:
###Code
# To make rioxarray happy, we should rename our spatial coorinates "x" and "y" (it automatically looks for coordinates with these names)
# We want to look at the variable "STBmosaic" (temperatures in degrees C), so we can drop everything else.
da = ds.STBmosaic.rename({'easting':'x', 'northing':'y'}) # create a new data array of "STBmosaic" with the renamed coordinates
# We also need to perform a coordinate transformation to ensure compatibility with the pit dataset
da = da.rio.write_crs('EPSG:32613') # assign current crs
da = da.rio.reproject('EPSG:26912') # overwrite with new reprojected data array
# Create a pandas timestamp array, subtract 7 hours from UTC time to get local time (MST, UTC-7)
# Ideally this would be done programmatically by reading out the entries in da.time (see the sketch after this cell)
air_timestamps = [pd.Timestamp(2020,2,8,8,7,17), pd.Timestamp(2020,2,8,8,16,44), pd.Timestamp(2020,2,8,8,28,32), pd.Timestamp(2020,2,8,8,43,2),\
pd.Timestamp(2020,2,8,8,55,59), pd.Timestamp(2020,2,8,9,7,54), pd.Timestamp(2020,2,8,11,7,37), pd.Timestamp(2020,2,8,11,19,15),\
pd.Timestamp(2020,2,8,11,29,16), pd.Timestamp(2020,2,8,11,40,56), pd.Timestamp(2020,2,8,11,50,20), pd.Timestamp(2020,2,8,12,1,9),\
pd.Timestamp(2020,2,8,12,6,22), pd.Timestamp(2020,2,8,12,18,49), pd.Timestamp(2020,2,8,12,31,35), pd.Timestamp(2020,2,8,12,44,28),\
pd.Timestamp(2020,2,8,12,56,16)]
###Output
_____no_output_____
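###Markdown
A possible programmatic alternative to the hard-coded list above is sketched below. It is only a sketch: it assumes `da.time` holds the acquisition times as UTC `datetime64` values, which should be checked against the dataset first.
###Code
# hypothetical programmatic version of air_timestamps (assumes da.time holds UTC datetime64 values):
# convert each time coordinate entry to a pandas Timestamp and shift from UTC to local time (MST, UTC-7)
air_timestamps_auto = [pd.Timestamp(t) - pd.Timedelta(hours=7) for t in da.time.values]
###Output
_____no_output_____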
###Markdown
*Additional plotting ideas for later: create interactive plot of the 17 acquisitions with time slider to get an idea of data coverage* Get the location for snow pit 2S10 from the SnowEx SQL database (query [SiteData](https://snowexsql.readthedocs.io/en/latest/database_structure.htmlsites-table) using [filter_by](https://docs.sqlalchemy.org/en/14/orm/query.htmlsqlalchemy.orm.Query.filter_by) to find the entry with the site ID that we want). Then preview the resulting geodataframe, and perform some necessary data wrangling steps as demonstrated by Steven in his tutorial
###Code
# Standard commands to access the database according to tutorials
db_name = 'snow:[email protected]/snowex'
engine, session = get_db(db_name)
# Form the query to receive site_id='2S10' from the sites table
qry = session.query(SiteData).filter_by(site_id='2S10')
# Convert the record received into a geopandas dataframe
siteData_df = query_to_geopandas(qry, engine)
# Preview the resulting geopandas dataframe; this is just the site info, not the data!
# siteData_df
# Check that coordinate systems match indeed
# siteData_df.crs
# need to create column headers
column_headers = ['table', 'year', 'doy', 'time', # year, day of year, time of day (local time, UTC-7)
'rad_avg', 'rad_max', 'rad_min', 'rad_std', # radiometer surface temperature
'sb_avg', 'sb_max', 'sb_min', 'sb_std', # radiometer sensor body temperature (for calibration)
'temp1_avg', 'temp1_max', 'temp1_min', 'temp1_std', # temperature at 5 cm below snow surface
'temp2_avg', 'temp2_max', 'temp2_min', 'temp2_std', # 10 cm
'temp3_avg', 'temp3_max', 'temp3_min', 'temp3_std', # 15 cm
'temp4_avg', 'temp4_max', 'temp4_min', 'temp4_std', # 20 cm
'temp5_avg', 'temp5_max', 'temp5_min', 'temp5_std', # 30 cm
'batt_a','batt_b', # battery voltage data
]
# read the actual data and do the necessary conversion step to include column headers
df = pd.read_csv('/tmp/thermal-ir/SNEX20_VPTS_Raw/Level-0/snow-temperature-timeseries/CR10X_GM1_final_storage_1.dat',
header = None, names = column_headers)
# After the filepath we specify header=None because the file doesn't contain column headers,
# then we specify names=column_headers to give our own names for each column.
# Create a zero-padded time string (e.g. for 9:30 AM we are changing '930' into '0930')
df['time_str'] = [('0' * (4 - len(str(df.time[i])))) + str(df.time[i]) for i in range(df.shape[0])]
# change midnight from '2400' to '0000' ... note: this keeps the same day-of-year, so midnight records end up at the start of that day rather than the start of the next day
df.time_str.replace('2400', '0000', inplace=True)
def compose_date(years, months=1, days=1, weeks=None, hours=None, minutes=None,
seconds=None, milliseconds=None, microseconds=None, nanoseconds=None):
'''Compose a datetime object from various datetime components. This clever solution is from:
https://stackoverflow.com/questions/34258892/converting-year-and-day-of-year-into-datetime-index-in-pandas'''
years = np.asarray(years) - 1970
months = np.asarray(months) - 1
days = np.asarray(days) - 1
types = ('<M8[Y]', '<m8[M]', '<m8[D]', '<m8[W]', '<m8[h]',
'<m8[m]', '<m8[s]', '<m8[ms]', '<m8[us]', '<m8[ns]')
vals = (years, months, days, weeks, hours, minutes, seconds,
milliseconds, microseconds, nanoseconds)
return sum(np.asarray(v, dtype=t) for t, v in zip(types, vals)
if v is not None)
# Create a datetime value from the date field and zero-padded time_str field, set this as our dataframe's index
df.index = compose_date(df['year'],
days=df['doy'],
hours=df['time_str'].str[:2],
minutes=df['time_str'].str[2:])
# Remove entries that are from table "102" (this contains datalogger battery information we're not interested in at the moment)
df = df[df.table != 102]
# drop the columns we no longer need
df.drop(columns=['table','year','doy','time','time_str','batt_a','batt_b'], inplace=True)
###Output
_____no_output_____
###Markdown
Clip the airborne IR data around the pit and plot it over time. Make a simple plot of the data. We are interested in the variable `rad_avg`, which is the average temperature measured by the radiometer over each 5-minute period.
###Code
# reminder of where our snow pit is at to create bounding box around it
siteData_df.geometry.bounds
# Let's first look at a 100m grid cell around the pit
minx = 743026
miny = 4322639
maxx = 743126
maxy = 4322739
# clip
da_clipped = da.rio.clip_box(minx,miny,maxx,maxy)
# quick check to see where the bounding box and the pit data overlap by at least 80% (approx)
# for da_clipped_step in da_clipped:
# fig, ax = plt.subplots()
# da_clipped_step.plot(ax=ax,cmap='magma')
# save the corresponding indices for use in the next plot; skip index 1 because it's going to be plotted differently
ints = [3, 5, 7, 9, 11, 12, 14, 16]
plt.figure(figsize=(10,4))
# plot radiometer average temperature
df.rad_avg.plot(linestyle='-', marker='', markersize=1, c='k', label='Ground-based $T_s$')
# plot the mean airborne IR temperature from the area around the snow pit:
plt.plot(air_timestamps[1], da_clipped.isel(time = 1).mean(),
marker='o', c='r', linestyle='none',
label='Airborne IR mean $T_s$ for 100 m bounding box area')
# plot an error bar showing the maximum and minimum airborne IR temperature around the snow pit
plt.errorbar(air_timestamps[1], da_clipped.isel(time = 1).mean(),
yerr=[[da_clipped.isel(time = 1).mean()-da_clipped.isel(time = 1).min()],
[da_clipped.isel(time = 1).max()-da_clipped.isel(time = 1).mean()]],
capsize=3, fmt='none', ecolor='b',
             label='Airborne IR $T_s$ range for 100 m bounding box area')
for i in ints:
# plot the mean airborne IR temperature from the area around the snow pit:
plt.plot(air_timestamps[i], da_clipped.isel(time = i).mean(),
marker='o', c='r', linestyle='none') #,
# label='Airborne IR mean $T_s$ for 100 m radius area')
# plot an error bar showing the maximum and minimum airborne IR temperature around the snow pit
plt.errorbar(air_timestamps[i], da_clipped.isel(time = i).mean(),
yerr=[[da_clipped.isel(time = i).mean()-da_clipped.isel(time = i).min()],
[da_clipped.isel(time = i).max()-da_clipped.isel(time = i).mean()]],
capsize=3, fmt='none', ecolor='b')#,
# label='Airborne IR $T_s$ range for 100 m radius area')
# set axes limits
plt.ylim((-15,0))
plt.xlim((pd.Timestamp(2020,2,8,6,0),pd.Timestamp(2020,2,8,20,0))) # zoom in to daytime hours on Feb. 8, 2020
# add a legend to the plot
plt.legend()
# set axes labels
plt.ylabel('Temperature [$C\degree$]')
plt.xlabel('Time')
# add grid lines to the plot
plt.grid('on')
# set the plot title
plt.title('Snow Surface Temperature at Snow Pit 2S10 compared to Airborne IR imagery (100 m box)');
plt.savefig('timeseries.jpg')
###Output
_____no_output_____
###Markdown
Compare spatial patterns in open and forested areas
###Code
# create new bounding boxes and quickly verify where they have overlapping data. now let's look at 500 m pixels just to make sure we see something
# set up our box bounding coordinates - pit site (verified that it doesnt contain trees)
minx = 742826
miny = 4322439
maxx = 743326
maxy = 4322939
# bounding box in forest for comparison
minx_for = 743926
miny_for = 4322839
maxx_for = 744426
maxy_for = 4323339
# clip
da_clipped = da.rio.clip_box(minx,miny,maxx,maxy)
da_clipped_for = da.rio.clip_box(minx_for,miny_for,maxx_for,maxy_for)
# both cases have cool datasets for i = 1 and i = 16
# for da_clipped_step in da_clipped_for:
# fig, ax = plt.subplots()
# da_clipped_step.plot(ax=ax,cmap='magma')
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10,4), tight_layout=True)
airborne_ir_area_temperature = da_clipped.isel(time = 1)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[0],
cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[0].set_title('Airborne TIR image within\n500 m pixel - open area- 8AM\n')
ax[0].set_aspect('equal')
ax[0].set_xlabel('Eastings UTM 12N (m)')
ax[0].set_ylabel('Northings UTM 12N (m)')
# ax[0].set_xlim((xmin-150, xmax+150)) # x axis limits to +/- 150 m from our point's "total bounds"
# ax[0].set_ylim((ymin-150, ymax+150)) # y axis limits to +/- 150 m from our point's "total bounds"
airborne_ir_area_temperature = da_clipped_for.isel(time = 1)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[1],
cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[1].set_title('Airborne TIR image within\n500 m pixel - forested area - 8AM\n')
ax[1].set_aspect('equal')
ax[1].set_xlabel('Eastings UTM 12N (m)')
ax[1].set_ylabel('Northings UTM 12N (m)')
plt.savefig('spatial_8am.jpg')
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(10,4), tight_layout=True)
airborne_ir_area_temperature = da_clipped.isel(time = 16)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[0],
cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[0].set_title('Airborne TIR image within\n500 m pixel - open area - 12PM\n')
ax[0].set_aspect('equal')
ax[0].set_xlabel('Eastings UTM 12N (m)')
ax[0].set_ylabel('Northings UTM 12N (m)')
# ax[0].set_xlim((xmin-150, xmax+150)) # x axis limits to +/- 150 m from our point's "total bounds"
# ax[0].set_ylim((ymin-150, ymax+150)) # y axis limits to +/- 150 m from our point's "total bounds"
airborne_ir_area_temperature = da_clipped_for.isel(time = 16)
# plot the portion of the airborne TIR image we selected within the buffer area geometry
airborne_ir_area_temperature.plot(cmap='magma', vmin=-20, vmax=5, ax=ax[1],
cbar_kwargs={'label': 'Temperature $\degree C$'})
ax[1].set_title('Airborne TIR image within\n500 m pixel - forested area - 12PM\n')
ax[1].set_aspect('equal')
ax[1].set_xlabel('Eastings UTM 12N (m)')
ax[1].set_ylabel('Northings UTM 12N (m)')
plt.savefig('spatial_12pm.jpg')
fig, ax = plt.subplots(nrows=1, ncols=4, figsize=(16,4), tight_layout=True)
airborne_ir_area_temperature = da_clipped.isel(time = 1)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[0],
color='grey',
zorder=1, # use zorder to make sure this plots below the point
label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[0].axvline(airborne_ir_area_temperature.mean(),
color='b',linestyle='--', # set color and style
zorder=2, # use zorder to make sure this plots on top of the histogram
label='zonal mean $T_S$')
ax[0].legend(loc='upper left') # add a legend
ax[0].set_xlim((-22,-2)) # set xlim to same values as colorbar in image plot
ax[0].set_title('Open area - 8AM')
ax[0].set_ylabel('Number of pixels');
airborne_ir_area_temperature = da_clipped_for.isel(time = 1)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[1],
color='grey',
zorder=1, # use zorder to make sure this plots below the point
label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[1].axvline(airborne_ir_area_temperature.mean(),
color='g',linestyle='--', # set color and style
zorder=2, # use zorder to make sure this plots on top of the histogram
label='zonal mean $T_S$')
ax[1].legend(loc='upper left') # add a legend
ax[1].set_xlim((-22,-2)) # set xlim to same values as colorbar in image plot
ax[1].set_title('Forest area - 8AM')
ax[1].set_ylabel('Number of pixels');
airborne_ir_area_temperature = da_clipped.isel(time = 16)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[2],
color='grey',
zorder=1, # use zorder to make sure this plots below the point
label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[2].axvline(airborne_ir_area_temperature.mean(),
color='b',linestyle='--', # set color and style
zorder=2, # use zorder to make sure this plots on top of the histogram
label='zonal mean $T_S$')
ax[2].legend(loc='upper left') # add a legend
ax[2].set_xlim((-15,5)) # set xlim to same values as colorbar in image plot
# ax[1].set_ylim((0,400)) # set ylim
ax[2].set_title('Open area - 12PM')
ax[2].set_ylabel('Number of pixels');
airborne_ir_area_temperature = da_clipped_for.isel(time = 16)
# plot a histogram of image temperature data within the buffer area geometry
airborne_ir_area_temperature.plot.hist(ax=ax[3],
color='grey',
zorder=1, # use zorder to make sure this plots below the point
label='zonal $T_S$ histogram')
# plot a vertical line for the mean temperature within the buffer area geometry
ax[3].axvline(airborne_ir_area_temperature.mean(),
color='g',linestyle='--', # set color and style
zorder=2, # use zorder to make sure this plots on top of the histogram
label='zonal mean $T_S$')
ax[3].legend(loc='upper left') # add a legend
ax[3].set_xlim((-15,5)) # set xlim to same values as colorbar in image plot
# ax[1].set_ylim((0,400)) # set ylim
ax[3].set_title('Forest area - 12PM')
ax[3].set_ylabel('Number of pixels');
plt.savefig('histograms.jpg')
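# A quick numeric complement to the histograms above: summary statistics for the four
# zone/time combinations, using the DataArrays already defined in this cell
for name, arr in [('Open 8AM', da_clipped.isel(time=1)), ('Forest 8AM', da_clipped_for.isel(time=1)),
                  ('Open 12PM', da_clipped.isel(time=16)), ('Forest 12PM', da_clipped_for.isel(time=16))]:
    print(f"{name}: mean = {float(arr.mean()):.2f} degC, std = {float(arr.std()):.2f} degC")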
###Output
_____no_output_____ |
tsa/src/jupyter/python/foundations/probability-theory.ipynb | ###Markdown
Probability theory Random experiment When we toss an unbiased coin, we say that it lands heads up with probability $\frac{1}{2}$ and tails up with probability $\frac{1}{2}$. Such a coin toss is an example of a **random experiment** and the set of **outcomes** of this random experiment is the **sample space** $\Omega = \{h, t\}$, where $h$ stands for "heads" and $t$ stands for "tails". What if we toss a coin twice? We could view the two coin tosses as a single random experiment with the sample space $\Omega = \{hh, ht, th, tt\}$, where $ht$ (for example) denotes "heads on the first toss", "tails on the second toss". What if, instead of tossing a coin, we roll a die? The sample space for this random experiment is $\Omega = \{1, 2, 3, 4, 5, 6\}$. Events An **event**, then, is a subset of the sample space. In our example of the two consecutive coin tosses, getting heads on all coin tosses is an event: $$A = \text{"getting heads on all coin tosses"} = \{hh\} \subseteq \{hh, ht, th, tt\} = \Omega.$$ Getting distinct results on the two coin tosses is also an event: $$D = \{ht, th\} \subseteq \{hh, ht, th, tt\} = \Omega.$$ We can simulate a coin toss in Python as follows:
###Code
import numpy as np
np.random.seed(42)
np.random.randint(0, 2)
###Output
_____no_output_____
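###Markdown
We can also estimate the probability of the event $D$ (distinct results on two coin tosses) by simulation; for a fair coin it should come out close to $\frac{1}{2}$:
###Code
# simulate 100,000 experiments of two tosses each and compute the fraction where the results differ
tosses = np.random.randint(0, 2, size=(100000, 2))
(tosses[:, 0] != tosses[:, 1]).mean()
###Output
_____no_output_____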
###Markdown
(Let's say 0 is heads and 1 is tails.) Similarly, in our roll-of-a-die example, the following are all events: $$S = \text{"six shows up"} = \{6\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega,$$ $$E = \text{"even number shows up"} = \{2, 4, 6\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega,$$ $$O = \text{"odd number shows up"} = \{1, 3, 5\} \subseteq \{1, 2, 3, 4, 5, 6\} = \Omega.$$ The empty set, $\emptyset = \{\}$, represents the **impossible event**, whereas the sample space $\Omega$ itself represents the **certain event**: one of the numbers $1, 2, 3, 4, 5, 6$ always occurs when a die is rolled, so $\Omega$ always occurs. We can simulate the roll of a die in Python as follows:
###Code
np.random.randint(1, 7)
###Output
_____no_output_____
###Markdown
If we get 4, say, $S$ has not occurred, since $4 \notin S$; $E$ has occurred, since $4 \in E$; $O$ has not occurred, since $4 \notin O$. When all outcomes are equally likely, and the sample space is finite, the probability of an event $A$ is given by $$\mathbb{P}(A) = \frac{|A|}{|\Omega|},$$ where $|\cdot|$ denotes the number of elements in a given set. Thus, the probability of the event $E$, "even number shows up", is equal to $$\mathbb{P}(E) = \frac{|E|}{|\Omega|} = \frac{3}{6} = \frac{1}{2}.$$ If Python's random number generator is decent enough, we should get pretty close to this number by simulating die rolls:
###Code
outcomes = np.random.randint(1, 7, 100)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
###Output
_____no_output_____
###Markdown
Here we have used 100 simulated "rolls". If we used 1,000,000, say, we would get even closer to $\frac{1}{2}$:
###Code
outcomes = np.random.randint(1, 7, 1000000)
len([x for x in outcomes if x % 2 == 0]) / len(outcomes)
###Output
_____no_output_____ |
venv/Lib/site-packages/nbdime/tests/files/multi_cell_nb--cellchange.ipynb | ###Markdown
from Inserted Markdown cell!
###Code
f(6, -2)
###Output
_____no_output_____ |
Section 6/using_mc_application.ipynb | ###Markdown
Using the Application
###Code
from montecarlo import MonteCarlo as mc
help(mc)
sim = mc.monte_carlo_sim(10000,.095,.185,30,5000)
sim[list(range(5))]
###Output
_____no_output_____ |
lecture_01.ipynb | ###Markdown
Guest Lecture COMP7230 Using Python packages for Linked Data & spatial data by Dr Nicholas Car. This Notebook is the resource used to deliver a guest lecture for the [Australian National University](https://www.anu.edu.au)'s course [COMP7230](https://programsandcourses.anu.edu.au/2020/course/COMP7230): *Introduction to Programming for Data Scientists*. Click here to run this lecture in your web browser: [](https://mybinder.org/v2/gh/nicholascar/comp7230-training/HEAD?filepath=lecture_01.ipynb) About the lecturer **Nicholas Car**: * PhD in informatics for irrigation * A former CSIRO informatics researcher * worked on integrating environmental data across government / industry * developed data standards * Has worked in operational IT in government * Now in a private IT consulting company, [SURROUND Australia Pty Ltd](https://surroundaustralia.com) supplying Data Science solutions. Relevant current work: * building data processing systems for government & industry * mainly using Python * due to its large number of web and data science packages * maintains the [RDFlib](https://rdflib.net) Python toolkit * for processing [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) * co-chairs the [Australian Government Linked Data Working Group](https://www.linked.data.gov.au) with Armin Haller * plans for multi-agency data integration * still developing data standards * in particular GeoSPARQL 1.1 (https://opengeospatial.github.io/ogc-geosparql/geosparql11/spec.html) * for graph representations of spatial information 0. Lecture Outline 1. Notes about this training material 2. Accessing RDF data 3. Parsing RDF data 4. Data 'mash up' 5. Data Conversions & Display 1. Notes about this training material This tool * This is a Jupyter Notebook - interactive Python scripting * You will cover Jupyter Notebooks more, later in this course * Access this material online at: * GitHub: [](https://mybinder.org/v2/gh/nicholascar/comp7230-training/?filepath=lecture_01.ipynb) Background data concepts - RDF _Nick will talk RDF using these web pages:_ * [Semantic Web](https://www.w3.org/standards/semanticweb/) - the concept * [RDF](https://en.wikipedia.org/wiki/Resource_Description_Framework) - the data model * refer to the RDF image below * [RDFlib](https://rdflib.net) - the (Python) toolkit * [RDFlib training Notebooks are available](https://github.com/surroundaustralia/rdflib-training) The LocI project: * The Location Index project: RDF image, from [the RDF Primer](https://www.w3.org/TR/rdf11-primer/), for discussion: Note that: * _everything_ is "strongly" identified * including all relationships * unlike lots of related data * many of the identifiers resolve * to more info (on the web) 2. Accessing RDF data * Here we use an online structured dataset, the Geocoded National Address File for Australia * Dataset Persistent Identifier: * The above link redirects to the API at * GNAF-LD Data is presented according to *Linked Data* principles * online * in HTML & machine-readable form, RDF * RDF is a Knowledge Graph: a graph containing data + model * each resource is available via a URI * e.g. 2.1. Get the Address GAACT714845933 using the *requests* package
###Code
import requests # NOTE: you must have installed requests first, it's not a standard package
r = requests.get(
"https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933"
)
print(r.text)
###Output
<!DOCTYPE html>
<html>
<head lang="en">
<meta charset="UTF-8">
<title>Address API</title>
<link rel="stylesheet" href="/static/css/psma_theme.css" />
</head>
<body>
<div id="widther">
<div id="header">
<div style="float:left;">
<a href="https://www.psma.com.au/">PSMA Australia Ltd.</a>
</div>
<div style="float:right;">
<a href="/">Home</a>
<a href="/?_view=reg">Registers</a>
<a href="/sparql">SPARQL endpoint</a>
<a href="http://linked.data.gov.au/def/gnaf">GNAF ontology</a>
<a href="http://linked.data.gov.au/def/gnaf/code/">GNAF codes</a>
<a href="/about">About</a>
</div>
<div style="clear:both;"></div>
</div>
<div id="container-content">
<h1>Address GAACT714845933</h1>
<script type="application/ld+json">
{"@type": "Place", "name": "Geocoded Address GAACT714845933", "geo": {"latitude": -35.20113263, "@type": "GeoCoordinates", "longitude": 149.03865604}, "@context": "http://schema.org", "address": {"addressRegion": "Australian Capital Territory", "postalCode": "2615", "streetAddress": "6 Packham Place", "addressCountry": "AU", "@type": "PostalAddress", "addressLocality": "Charnwood"}}
</script>
<h2>G-NAF View</h2>
<table class="content">
<tr><th>Property</th><th>Value</th></tr>
<tr><td>Address Line</td><td><code>6 Packham Place, Charnwood, ACT 2615</code></td></tr>
<tr>
<td><a href="http://linked.data.gov.au/def/gnaf#FirstStreetNumber">First Street Number</a></td>
<td><code>6</code></td>
</tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasStreetLocality">Street Locality</a></td><td><code><a href="/streetLocality/ACT3857">Packham Place</a></code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasLocality">Locality</a></td><td><code><a href="/locality/ACT570">Charnwood</a></code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasState">State/Territory</a></td><td><code>ACT</code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasPostcode">Postcode</a></td><td><code>2615</code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasLegalParcelId">Legal Parcel ID</a></td><td><code>BELC/CHAR/15/16/</code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasAddressSite">Address Site PID</a></td><td><code><a href="/addressSite/710446419">710446419</a></code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#">Level Geocoded Code</a></td><td><code>7</code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasGnafConfidence">GNAF Confidence</a></td><td><code><a href="http://gnafld.net/def/gnaf/GnafConfidence_2">Confidence level 2</a></code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasCreatedDate">Date Created</a></td><td><code>2004-04-29</code></td></tr>
<tr><td><a href="http://linked.data.gov.au/def/gnaf#hasLastModifiedDate">Date Last Modified</a></td><td><code>2018-02-01</code></td></tr>
<tr>
<td><a href="http://www.opengis.net/ont/geosparql#hasGeometry">Geometry</a></td>
<td><code><a href="http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback">Frontage Centre Setback</a> →<br /><http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03865604 -35.20113263)</code></td>
</tr>
<tr>
<td><a href="http://linked.data.gov.au/def/gnaf#hasMeshBlockMatch">Mesh Blocks 2011</a></td><td>
<code><a href="http://gnafld.net/def/gnaf/code/MeshBlockMatchTypes#ParcelLevel">Parcel Level Match</a> → <a href="http://linked.data.gov.au/dataset/asgs/MB2011/80006300000">80006300000</a></code><br />
</td>
</tr>
<tr>
<td><a href="http://linked.data.gov.au/def/gnaf#hasMeshBlockMatch">Mesh Blocks 2016</a></td><td>
<code><a href="http://gnafld.net/def/gnaf/code/MeshBlockMatchTypes#ParcelLevel">Parcel Level Match</a> → <a href="http://linked.data.gov.au/dataset/asgs/MB2016/80006300000">80006300000</a></code><br />
</td>
</tr>
</table>
<h2>Other views</h2>
<p>Other model views of a Address are listed in the <a href="/address/GAACT714845933?_view=alternates">Alternates View</a>.</p>
<h2>Citation</h2>
<p>If you wish to cite this Address as you would a publication, please use the following format:</p>
<code style="display:block; margin: 0 5em 0 5em;">
PSMA Australia Limited (2017). Address GAACT714845933. Address object from the Geocoded National Address File (G-NAF). http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933
</code>
</div>
<div id="footer"></div>
</div>
</body>
</html>
###Markdown
2.2 Get machine-readable data, RDF triples Use HTTP Content Negotiation: same URI, different *format* of data
###Code
r = requests.get(
"https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933",
headers={"Accept": "application/n-triples"}
)
print(r.text)
###Output
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://purl.org/dc/terms/identifier> "GAACT714845933"^^<http://www.w3.org/2001/XMLSchema#string> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://www.w3.org/2000/01/rdf-schema#comment> "6 Packham Place, Charnwood, ACT 2615"^^<http://www.w3.org/2001/XMLSchema#string> .
_:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://www.opengis.net/ont/sf#Point> .
_:N3677002343da47bbb35c569fc67f349f <http://www.w3.org/ns/prov#value> "6"^^<http://www.w3.org/2001/XMLSchema#integer> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasStreetLocality> <http://linked.data.gov.au/dataset/gnaf/streetLocality/ACT3857> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://purl.org/dc/terms/modified> "2018-02-01"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://www.w3.org/2000/01/rdf-schema#label> "Address GAACT714845933 of Unknown type"^^<http://www.w3.org/2001/XMLSchema#string> .
_:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 <http://linked.data.gov.au/def/gnaf#gnafType> <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasDateCreated> "2004-04-29"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://www.opengis.net/ont/geosparql#hasGeometry> _:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasNumber> _:N3677002343da47bbb35c569fc67f349f .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://purl.org/dc/terms/type> <http://gnafld.net/def/gnaf/code/AddressTypes#Unknown> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/loci#isMemberOf> <http://linked.data.gov.au/dataset/gnaf/address/> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasDateLastModified> "2018-02-01"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasGnafConfidence> <http://gnafld.net/def/gnaf/GnafConfidence_2> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://linked.data.gov.au/def/gnaf#Address> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://purl.org/dc/terms/created> "2004-04-29"^^<http://www.w3.org/2001/XMLSchema#date> .
<http://gnafld.net/def/gnaf/GnafConfidence_2> <http://www.w3.org/2000/01/rdf-schema#label> "Confidence level 2"^^<http://www.w3.org/2001/XMLSchema#string> .
_:N3677002343da47bbb35c569fc67f349f <http://linked.data.gov.au/def/gnaf#gnafType> <http://linked.data.gov.au/def/gnaf/code/NumberTypes#FirstStreet> .
_:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 <http://www.w3.org/2000/01/rdf-schema#label> "Frontage Centre Setback"^^<http://www.w3.org/2001/XMLSchema#string> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasAddressSite> <http://linked.data.gov.au/dataset/gnaf/addressSite/710446419> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasState> <http://www.geonames.org/2177478> .
_:N3677002343da47bbb35c569fc67f349f <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://linked.data.gov.au/def/gnaf#Number> .
<http://www.geonames.org/2177478> <http://www.w3.org/2000/01/rdf-schema#label> "Australian Capital Territory"^^<http://www.w3.org/2001/XMLSchema#string> .
_:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 <http://purl.org/dc/terms/type> <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> .
_:Nfbdb238ffe9d4fa4bd6dd6f8cced7318 <http://www.opengis.net/ont/geosparql#asWKT> "<http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03865604 -35.20113263)"^^<http://www.opengis.net/ont/geosparql#wktLiteral> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasLocality> <http://linked.data.gov.au/dataset/gnaf/locality/ACT570> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> <http://linked.data.gov.au/def/gnaf#hasPostcode> "2615"^^<http://www.w3.org/2001/XMLSchema#integer> .
###Markdown
2.3 Get machine-readable data, Turtle Easier to read
###Code
r = requests.get(
"https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933",
headers={"Accept": "text/turtle"}
)
print(r.text)
###Output
@prefix dct: <http://purl.org/dc/terms/> .
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix gnaf: <http://linked.data.gov.au/def/gnaf#> .
@prefix loci: <http://linked.data.gov.au/def/loci#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sf: <http://www.opengis.net/ont/sf#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
<http://linked.data.gov.au/dataset/gnaf/address/GAACT714845933> a gnaf:Address ;
rdfs:label "Address GAACT714845933 of Unknown type"^^xsd:string ;
gnaf:hasAddressSite <http://linked.data.gov.au/dataset/gnaf/addressSite/710446419> ;
gnaf:hasDateCreated "2004-04-29"^^xsd:date ;
gnaf:hasDateLastModified "2018-02-01"^^xsd:date ;
gnaf:hasGnafConfidence <http://gnafld.net/def/gnaf/GnafConfidence_2> ;
gnaf:hasLocality <http://linked.data.gov.au/dataset/gnaf/locality/ACT570> ;
gnaf:hasNumber [ a gnaf:Number ;
gnaf:gnafType <http://linked.data.gov.au/def/gnaf/code/NumberTypes#FirstStreet> ;
prov:value 6 ] ;
gnaf:hasPostcode 2615 ;
gnaf:hasState <http://www.geonames.org/2177478> ;
gnaf:hasStreetLocality <http://linked.data.gov.au/dataset/gnaf/streetLocality/ACT3857> ;
loci:isMemberOf <http://linked.data.gov.au/dataset/gnaf/address/> ;
dct:created "2004-04-29"^^xsd:date ;
dct:identifier "GAACT714845933"^^xsd:string ;
dct:modified "2018-02-01"^^xsd:date ;
dct:type <http://gnafld.net/def/gnaf/code/AddressTypes#Unknown> ;
geo:hasGeometry [ a sf:Point ;
rdfs:label "Frontage Centre Setback"^^xsd:string ;
gnaf:gnafType <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;
dct:type <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;
geo:asWKT "<http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03865604 -35.20113263)"^^geo:wktLiteral ] ;
rdfs:comment "6 Packham Place, Charnwood, ACT 2615"^^xsd:string .
<http://gnafld.net/def/gnaf/GnafConfidence_2> rdfs:label "Confidence level 2"^^xsd:string .
<http://www.geonames.org/2177478> rdfs:label "Australian Capital Territory"^^xsd:string .
###Markdown
3. Parsing RDF data Import the RDFlib library for manipulating RDF data. Add some namespaces to shorten URIs
###Code
import rdflib
from rdflib.namespace import RDF, RDFS
GNAF = rdflib.Namespace("http://linked.data.gov.au/def/gnaf#")
ADDR = rdflib.Namespace("http://linked.data.gov.au/dataset/gnaf/address/")
GEO = rdflib.Namespace("http://www.opengis.net/ont/geosparql#")
print(GEO)
###Output
http://www.opengis.net/ont/geosparql#
###Markdown
Create a graph and add the namespaces to it
###Code
g = rdflib.Graph()
g.bind("gnaf", GNAF)
g.bind("addr", ADDR)
g.bind("geo", GEO)
###Output
_____no_output_____
###Markdown
Parse in the machine-readable data from the GNAF-LD
###Code
r = requests.get(
"https://linked.data.gov.au/dataset/gnaf/address/GAACT714845933",
headers={"Accept": "text/turtle"}
)
g.parse(data=r.text, format="text/turtle")
###Output
_____no_output_____
###Markdown
Print graph length (no. of triples) to check
###Code
print(len(g))
###Output
28
###Markdown
Print graph content, in Turtle
###Code
print(g.serialize(format="text/turtle").decode())
###Output
@prefix addr: <http://linked.data.gov.au/dataset/gnaf/address/> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix geo: <http://www.opengis.net/ont/geosparql#> .
@prefix gnaf: <http://linked.data.gov.au/def/gnaf#> .
@prefix loci: <http://linked.data.gov.au/def/loci#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sf: <http://www.opengis.net/ont/sf#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
addr:GAACT714845933 a gnaf:Address ;
rdfs:label "Address GAACT714845933 of Unknown type"^^xsd:string ;
gnaf:hasAddressSite <http://linked.data.gov.au/dataset/gnaf/addressSite/710446419> ;
gnaf:hasDateCreated "2004-04-29"^^xsd:date ;
gnaf:hasDateLastModified "2018-02-01"^^xsd:date ;
gnaf:hasGnafConfidence <http://gnafld.net/def/gnaf/GnafConfidence_2> ;
gnaf:hasLocality <http://linked.data.gov.au/dataset/gnaf/locality/ACT570> ;
gnaf:hasNumber [ a gnaf:Number ;
gnaf:gnafType <http://linked.data.gov.au/def/gnaf/code/NumberTypes#FirstStreet> ;
prov:value 6 ] ;
gnaf:hasPostcode 2615 ;
gnaf:hasState <http://www.geonames.org/2177478> ;
gnaf:hasStreetLocality <http://linked.data.gov.au/dataset/gnaf/streetLocality/ACT3857> ;
loci:isMemberOf addr: ;
dct:created "2004-04-29"^^xsd:date ;
dct:identifier "GAACT714845933"^^xsd:string ;
dct:modified "2018-02-01"^^xsd:date ;
dct:type <http://gnafld.net/def/gnaf/code/AddressTypes#Unknown> ;
geo:hasGeometry [ a sf:Point ;
rdfs:label "Frontage Centre Setback"^^xsd:string ;
gnaf:gnafType <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;
dct:type <http://gnafld.net/def/gnaf/code/GeocodeTypes#FrontageCentreSetback> ;
geo:asWKT "<http://www.opengis.net/def/crs/EPSG/0/4283> POINT(149.03865604 -35.20113263)"^^geo:wktLiteral ] ;
rdfs:comment "6 Packham Place, Charnwood, ACT 2615"^^xsd:string .
<http://gnafld.net/def/gnaf/GnafConfidence_2> rdfs:label "Confidence level 2"^^xsd:string .
<http://www.geonames.org/2177478> rdfs:label "Australian Capital Territory"^^xsd:string .
###Markdown
3.1 Getting multi-address data: 3.1.1. Retrieve an index of addresses (one page), in RDF 3.1.2. For each address in the index, get each Address' data * use the paging URI: 3.1.3. Get only the street address and map coordinates 3.1.1. Retrieve index
###Code
# clear the graph
g = rdflib.Graph()
r = requests.get(
"https://linked.data.gov.au/dataset/gnaf/address/?page=1",
headers={"Accept": "text/turtle"}
)
g.parse(data=r.text, format="text/turtle")
print(len(g))
###Output
70
###Markdown
3.1.2. Parse in each address' data
###Code
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
print(s.split("/")[-1])
r = requests.get(
str(s),
headers={"Accept": "text/turtle"}
)
g.parse(data=r.text, format="turtle")
print(len(g))
###Output
GAACT714845953
97
GAACT714845955
122
GAACT714845945
147
GAACT714845941
172
GAACT714845935
197
GAACT714845951
222
GAACT714845949
247
GAACT714845954
272
GAACT714845950
297
GAACT714845942
322
GAACT714845943
347
GAACT714845946
372
GAACT714845947
397
GAACT714845938
422
GAACT714845944
447
GAACT714845933
472
GAACT714845936
497
GAACT714845934
522
GAACT714845952
547
GAACT714845939
572
###Markdown
The graph model used by the GNAF-LD is based on [GeoSPARQL 1.1](https://opengeospatial.github.io/ogc-geosparql/geosparql11/spec.html) and looks like this: 3.1.3. Extract (& print) street address text & coordinates(CSV)
###Code
addresses_tsv = "GNAF ID\tAddress\tCoordinates\n"
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
for s2, p2, o2 in g.triples((s, RDFS.comment, None)):
txt = str(o2)
for s2, p2, o2 in g.triples((s, GEO.hasGeometry, None)):
for s3, p3, o3 in g.triples((o2, GEO.asWKT, None)):
coords = str(o3).replace("<http://www.opengis.net/def/crs/EPSG/0/4283> ", "")
addresses_tsv += "{}\t{}\t{}\n".format(str(s).split("/")[-1], txt, coords)
print(addresses_tsv)
###Output
GNAF ID Address Coordinates
GAACT714845953 5 Jamieson Crescent, Kambah, ACT 2902 POINT(149.06864966 -35.37733591)
GAACT714845955 3 Baylis Place, Charnwood, ACT 2615 POINT(149.03046282 -35.20202762)
GAACT714845945 9 Baylis Place, Charnwood, ACT 2615 POINT(149.03047333 -35.20156767)
GAACT714845941 7 Mcdowall Place, Kambah, ACT 2902 POINT(149.06860919 -35.37833726)
GAACT714845935 26 Jauncey Court, Charnwood, ACT 2615 POINT(149.03640841 -35.19777173)
GAACT714845951 15 Mcdowall Place, Kambah, ACT 2902 POINT(149.06946494 -35.37908886)
GAACT714845949 13 Mcdowall Place, Kambah, ACT 2902 POINT(149.06908395 -35.37882495)
GAACT714845954 5 Baylis Place, Charnwood, ACT 2615 POINT(149.03048051 -35.20185603)
GAACT714845950 7 Baylis Place, Charnwood, ACT 2615 POINT(149.03049843 -35.20169346)
GAACT714845942 5 Bunker Place, Charnwood, ACT 2615 POINT(149.04029706 -35.19999611)
GAACT714845943 22 Jauncey Court, Charnwood, ACT 2615 POINT(149.03688520 -35.19795303)
GAACT714845946 11 Mcdowall Place, Kambah, ACT 2902 POINT(149.06895786 -35.37862878)
GAACT714845947 20 Jauncey Court, Charnwood, ACT 2615 POINT(149.03705032 -35.19796828)
GAACT714845938 5 Mcdowall Place, Kambah, ACT 2902 POINT(149.06851657 -35.37815855)
GAACT714845944 9 Mcdowall Place, Kambah, ACT 2902 POINT(149.06872290 -35.37847955)
GAACT714845933 6 Packham Place, Charnwood, ACT 2615 POINT(149.03865604 -35.20113263)
GAACT714845936 17 Geeves Court, Charnwood, ACT 2615 POINT(149.03687042 -35.20395740)
GAACT714845934 3 Bunker Place, Charnwood, ACT 2615 POINT(149.04011870 -35.19989093)
GAACT714845952 18 Jauncey Court, Charnwood, ACT 2615 POINT(149.03721725 -35.19805563)
GAACT714845939 24 Jauncey Court, Charnwood, ACT 2615 POINT(149.03661902 -35.19784933)
###Markdown
3.1.4. Convert the tab-separated data to a pandas DataFrame
###Code
import pandas
from io import StringIO
s = StringIO(addresses_tsv)
df1 = pandas.read_csv(s, sep="\t")
print(df1)
###Output
GNAF ID Address \
0 GAACT714845953 5 Jamieson Crescent, Kambah, ACT 2902
1 GAACT714845955 3 Baylis Place, Charnwood, ACT 2615
2 GAACT714845945 9 Baylis Place, Charnwood, ACT 2615
3 GAACT714845941 7 Mcdowall Place, Kambah, ACT 2902
4 GAACT714845935 26 Jauncey Court, Charnwood, ACT 2615
5 GAACT714845951 15 Mcdowall Place, Kambah, ACT 2902
6 GAACT714845949 13 Mcdowall Place, Kambah, ACT 2902
7 GAACT714845954 5 Baylis Place, Charnwood, ACT 2615
8 GAACT714845950 7 Baylis Place, Charnwood, ACT 2615
9 GAACT714845942 5 Bunker Place, Charnwood, ACT 2615
10 GAACT714845943 22 Jauncey Court, Charnwood, ACT 2615
11 GAACT714845946 11 Mcdowall Place, Kambah, ACT 2902
12 GAACT714845947 20 Jauncey Court, Charnwood, ACT 2615
13 GAACT714845938 5 Mcdowall Place, Kambah, ACT 2902
14 GAACT714845944 9 Mcdowall Place, Kambah, ACT 2902
15 GAACT714845933 6 Packham Place, Charnwood, ACT 2615
16 GAACT714845936 17 Geeves Court, Charnwood, ACT 2615
17 GAACT714845934 3 Bunker Place, Charnwood, ACT 2615
18 GAACT714845952 18 Jauncey Court, Charnwood, ACT 2615
19 GAACT714845939 24 Jauncey Court, Charnwood, ACT 2615
Coordinates
0 POINT(149.06864966 -35.37733591)
1 POINT(149.03046282 -35.20202762)
2 POINT(149.03047333 -35.20156767)
3 POINT(149.06860919 -35.37833726)
4 POINT(149.03640841 -35.19777173)
5 POINT(149.06946494 -35.37908886)
6 POINT(149.06908395 -35.37882495)
7 POINT(149.03048051 -35.20185603)
8 POINT(149.03049843 -35.20169346)
9 POINT(149.04029706 -35.19999611)
10 POINT(149.03688520 -35.19795303)
11 POINT(149.06895786 -35.37862878)
12 POINT(149.03705032 -35.19796828)
13 POINT(149.06851657 -35.37815855)
14 POINT(149.06872290 -35.37847955)
15 POINT(149.03865604 -35.20113263)
16 POINT(149.03687042 -35.20395740)
17 POINT(149.04011870 -35.19989093)
18 POINT(149.03721725 -35.19805563)
19 POINT(149.03661902 -35.19784933)
###Markdown
3.1.5. SPARQL querying RDF data A graph query, similar to a database SQL query, can traverse the graph and retrieve the same details as the multiple loops and Python code above in 3.1.3.
###Code
q = """
SELECT ?id ?addr ?coords
WHERE {
?uri a gnaf:Address ;
rdfs:comment ?addr .
?uri geo:hasGeometry/geo:asWKT ?coords_dirty .
BIND (STRAFTER(STR(?uri), "address/") AS ?id)
BIND (STRAFTER(STR(?coords_dirty), "4283> ") AS ?coords)
}
ORDER BY ?id
"""
for r in g.query(q):
print("{}, {}, {}".format(r["id"], r["addr"], r["coords"]))
###Output
GAACT714845933, 6 Packham Place, Charnwood, ACT 2615, POINT(149.03865604 -35.20113263)
GAACT714845934, 3 Bunker Place, Charnwood, ACT 2615, POINT(149.04011870 -35.19989093)
GAACT714845935, 26 Jauncey Court, Charnwood, ACT 2615, POINT(149.03640841 -35.19777173)
GAACT714845936, 17 Geeves Court, Charnwood, ACT 2615, POINT(149.03687042 -35.20395740)
GAACT714845938, 5 Mcdowall Place, Kambah, ACT 2902, POINT(149.06851657 -35.37815855)
GAACT714845939, 24 Jauncey Court, Charnwood, ACT 2615, POINT(149.03661902 -35.19784933)
GAACT714845941, 7 Mcdowall Place, Kambah, ACT 2902, POINT(149.06860919 -35.37833726)
GAACT714845942, 5 Bunker Place, Charnwood, ACT 2615, POINT(149.04029706 -35.19999611)
GAACT714845943, 22 Jauncey Court, Charnwood, ACT 2615, POINT(149.03688520 -35.19795303)
GAACT714845944, 9 Mcdowall Place, Kambah, ACT 2902, POINT(149.06872290 -35.37847955)
GAACT714845945, 9 Baylis Place, Charnwood, ACT 2615, POINT(149.03047333 -35.20156767)
GAACT714845946, 11 Mcdowall Place, Kambah, ACT 2902, POINT(149.06895786 -35.37862878)
GAACT714845947, 20 Jauncey Court, Charnwood, ACT 2615, POINT(149.03705032 -35.19796828)
GAACT714845949, 13 Mcdowall Place, Kambah, ACT 2902, POINT(149.06908395 -35.37882495)
GAACT714845950, 7 Baylis Place, Charnwood, ACT 2615, POINT(149.03049843 -35.20169346)
GAACT714845951, 15 Mcdowall Place, Kambah, ACT 2902, POINT(149.06946494 -35.37908886)
GAACT714845952, 18 Jauncey Court, Charnwood, ACT 2615, POINT(149.03721725 -35.19805563)
GAACT714845953, 5 Jamieson Crescent, Kambah, ACT 2902, POINT(149.06864966 -35.37733591)
GAACT714845954, 5 Baylis Place, Charnwood, ACT 2615, POINT(149.03048051 -35.20185603)
GAACT714845955, 3 Baylis Place, Charnwood, ACT 2615, POINT(149.03046282 -35.20202762)
###Markdown
4. Data 'mash up' Add some fake data to the GNAF data - people count per address. The GeoSPARQL model extension used is: Note that for real Semantic Web work, the `xxx:` properties and classes would be "properly defined", removing any ambiguity of use.
###Code
import pandas
df2 = pandas.read_csv('fake_data.csv')
print(df2)
###Output
GNAF ID Persons
0 GAACT714845944 3
1 GAACT714845934 5
2 GAACT714845943 10
3 GAACT714845949 1
4 GAACT714845955 2
5 GAACT714845935 1
6 GAACT714845947 4
7 GAACT714845950 3
8 GAACT714845933 4
9 GAACT714845953 2
10 GAACT714845945 3
11 GAACT714845946 3
12 GAACT714845939 4
13 GAACT714845941 2
14 GAACT714845942 1
15 GAACT714845954 0
16 GAACT714845952 5
17 GAACT714845938 3
18 GAACT714845936 4
19 GAACT714845951 3
###Markdown
Merge DataFrames
###Code
df3 = pandas.merge(df1, df2)
print(df3.head())
###Output
GNAF ID Address \
0 GAACT714845953 5 Jamieson Crescent, Kambah, ACT 2902
1 GAACT714845955 3 Baylis Place, Charnwood, ACT 2615
2 GAACT714845945 9 Baylis Place, Charnwood, ACT 2615
3 GAACT714845941 7 Mcdowall Place, Kambah, ACT 2902
4 GAACT714845935 26 Jauncey Court, Charnwood, ACT 2615
Coordinates Persons
0 POINT(149.06864966 -35.37733591) 2
1 POINT(149.03046282 -35.20202762) 2
2 POINT(149.03047333 -35.20156767) 3
3 POINT(149.06860919 -35.37833726) 2
4 POINT(149.03640841 -35.19777173) 1
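###Markdown
The merged person counts can also be written back into the RDF graph. The following is only a sketch: the namespace URI and the `hasPersonCount` property are placeholders invented here for illustration - as noted above, real Semantic Web work would define such `xxx:` terms properly.
###Code
# placeholder namespace standing in for the un-modelled "xxx:" terms mentioned above
XXX = rdflib.Namespace("http://example.org/xxx#")
g.bind("xxx", XXX)
# add one triple per address, linking the Address URI to its (fake) person count
for _, row in df3.iterrows():
    g.add((ADDR[row["GNAF ID"]], XXX.hasPersonCount, rdflib.Literal(int(row["Persons"]))))
print(len(g))
###Output
_____no_output_____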
###Markdown
5. Spatial Data Conversions & Display Often you will want to display or export data. 5.1 Display directly in Jupyter Using standard Python plotting (matplotlib). First, extract addresses, longitudes & latitudes into a dataframe using a SPARQL query to build a CSV string.
###Code
import re
addresses_csv = "Address,Longitude,Latitude\n"
q = """
SELECT ?addr ?coords
WHERE {
?uri a gnaf:Address ;
rdfs:comment ?addr .
?uri geo:hasGeometry/geo:asWKT ?coords .
BIND (STRAFTER(STR(?uri), "address/") AS ?id)
BIND (STRAFTER(STR(?coords_dirty), "4283> ") AS ?coords)
}
ORDER BY ?id
"""
for r in g.query(q):
match = re.search("POINT\((\d+\.\d+)\s(\-\d+\.\d+)\)", r["coords"])
long = float(match.group(1))
lat = float(match.group(2))
addresses_csv += f'\"{r["addr"]}\",{long},{lat}\n'
print(addresses_csv)
###Output
Address,Longitude,Latitude
"6 Packham Place, Charnwood, ACT 2615",149.03865604,-35.20113263
"3 Bunker Place, Charnwood, ACT 2615",149.0401187,-35.19989093
"26 Jauncey Court, Charnwood, ACT 2615",149.03640841,-35.19777173
"17 Geeves Court, Charnwood, ACT 2615",149.03687042,-35.2039574
"5 Mcdowall Place, Kambah, ACT 2902",149.06851657,-35.37815855
"24 Jauncey Court, Charnwood, ACT 2615",149.03661902,-35.19784933
"7 Mcdowall Place, Kambah, ACT 2902",149.06860919,-35.37833726
"5 Bunker Place, Charnwood, ACT 2615",149.04029706,-35.19999611
"22 Jauncey Court, Charnwood, ACT 2615",149.0368852,-35.19795303
"9 Mcdowall Place, Kambah, ACT 2902",149.0687229,-35.37847955
"9 Baylis Place, Charnwood, ACT 2615",149.03047333,-35.20156767
"11 Mcdowall Place, Kambah, ACT 2902",149.06895786,-35.37862878
"20 Jauncey Court, Charnwood, ACT 2615",149.03705032,-35.19796828
"13 Mcdowall Place, Kambah, ACT 2902",149.06908395,-35.37882495
"7 Baylis Place, Charnwood, ACT 2615",149.03049843,-35.20169346
"15 Mcdowall Place, Kambah, ACT 2902",149.06946494,-35.37908886
"18 Jauncey Court, Charnwood, ACT 2615",149.03721725,-35.19805563
"5 Jamieson Crescent, Kambah, ACT 2902",149.06864966,-35.37733591
"5 Baylis Place, Charnwood, ACT 2615",149.03048051,-35.20185603
"3 Baylis Place, Charnwood, ACT 2615",149.03046282,-35.20202762
###Markdown
Read the CSV into a DataFrame.
###Code
import pandas as pd
from io import StringIO
addresses_df = pd.read_csv(StringIO(addresses_csv))
print(addresses_df["Longitude"])
###Output
0 149.038656
1 149.040119
2 149.036408
3 149.036870
4 149.068517
5 149.036619
6 149.068609
7 149.040297
8 149.036885
9 149.068723
10 149.030473
11 149.068958
12 149.037050
13 149.069084
14 149.030498
15 149.069465
16 149.037217
17 149.068650
18 149.030481
19 149.030463
Name: Longitude, dtype: float64
###Markdown
Display the first 5 rows of the DataFrame directly using matplotlib.
###Code
from matplotlib import pyplot as plt
addresses_df[:5].plot(kind="scatter", x="Longitude", y="Latitude", s=50, figsize=(10,10))
for i, label in enumerate(addresses_df[:5]):
plt.annotate(addresses_df["Address"][i], (addresses_df["Longitude"][i], addresses_df["Latitude"][i]))
plt.show()
###Output
_____no_output_____
###Markdown
5.2 Convert to common format - GeoJSON Import Python conversion tools (shapely).
###Code
import shapely.wkt
from shapely.geometry import MultiPoint
import json
###Output
_____no_output_____
###Markdown
Loop through the graph using ordinary Python loops, not a query.
###Code
points_list = []
for s, p, o in g.triples((None, RDF.type, GNAF.Address)):
for s2, p2, o2 in g.triples((s, GEO.hasGeometry, None)):
for s3, p3, o3 in g.triples((o2, GEO.asWKT, None)):
points_list.append(
shapely.wkt.loads(str(o3).replace("<http://www.opengis.net/def/crs/EPSG/0/4283> ", ""))
)
mp = MultiPoint(points=points_list)
geojson = shapely.geometry.mapping(mp)
print(json.dumps(geojson, indent=4))
###Output
{
"type": "MultiPoint",
"coordinates": [
[
149.06864966,
-35.37733591
],
[
149.03046282,
-35.20202762
],
[
149.03047333,
-35.20156767
],
[
149.06860919,
-35.37833726
],
[
149.03640841,
-35.19777173
],
[
149.06946494,
-35.37908886
],
[
149.06908395,
-35.37882495
],
[
149.03048051,
-35.20185603
],
[
149.03049843,
-35.20169346
],
[
149.04029706,
-35.19999611
],
[
149.0368852,
-35.19795303
],
[
149.06895786,
-35.37862878
],
[
149.03705032,
-35.19796828
],
[
149.06851657,
-35.37815855
],
[
149.0687229,
-35.37847955
],
[
149.03865604,
-35.20113263
],
[
149.03687042,
-35.2039574
],
[
149.0401187,
-35.19989093
],
[
149.03721725,
-35.19805563
],
[
149.03661902,
-35.19784933
]
]
}
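###Markdown
If this geometry-only export is all that is needed, it can be written straight to disk with the standard library (the file name here is just an example):
###Code
with open("addresses_multipoint.geojson", "w") as f:
    json.dump(geojson, f, indent=4)
###Output
_____no_output_____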
###Markdown
Another, better, GeoJSON export - including Feature information. First, build a Python dictionary matching the GeoJSON specification, then export it to JSON.
###Code
geo_json_features = []
# same query as above
for r in g.query(q):
match = re.search("POINT\((\d+\.\d+)\s(\-\d+\.\d+)\)", r["coords"])
long = float(match.group(1))
lat = float(match.group(2))
geo_json_features.append({
"type": "Feature",
"properties": { "name": r["addr"] },
"geometry": {
"type": "Point",
"coordinates": [ long, lat ]
}
})
geo_json_data = {
"type": "FeatureCollection",
"name": "test-points-short-named",
"crs": { "type": "name", "properties": { "name": "urn:ogc:def:crs:OGC:1.3:CRS84" } },
"features": geo_json_features
}
import json
geo_json = json.dumps(geo_json_data, indent=4)
print(geo_json)
###Output
{
"type": "FeatureCollection",
"name": "test-points-short-named",
"crs": {
"type": "name",
"properties": {
"name": "urn:ogc:def:crs:OGC:1.3:CRS84"
}
},
"features": [
{
"type": "Feature",
"properties": {
"name": "6 Packham Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03865604,
-35.20113263
]
}
},
{
"type": "Feature",
"properties": {
"name": "3 Bunker Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.0401187,
-35.19989093
]
}
},
{
"type": "Feature",
"properties": {
"name": "26 Jauncey Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03640841,
-35.19777173
]
}
},
{
"type": "Feature",
"properties": {
"name": "17 Geeves Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03687042,
-35.2039574
]
}
},
{
"type": "Feature",
"properties": {
"name": "5 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06851657,
-35.37815855
]
}
},
{
"type": "Feature",
"properties": {
"name": "24 Jauncey Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03661902,
-35.19784933
]
}
},
{
"type": "Feature",
"properties": {
"name": "7 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06860919,
-35.37833726
]
}
},
{
"type": "Feature",
"properties": {
"name": "5 Bunker Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.04029706,
-35.19999611
]
}
},
{
"type": "Feature",
"properties": {
"name": "22 Jauncey Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.0368852,
-35.19795303
]
}
},
{
"type": "Feature",
"properties": {
"name": "9 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.0687229,
-35.37847955
]
}
},
{
"type": "Feature",
"properties": {
"name": "9 Baylis Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03047333,
-35.20156767
]
}
},
{
"type": "Feature",
"properties": {
"name": "11 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06895786,
-35.37862878
]
}
},
{
"type": "Feature",
"properties": {
"name": "20 Jauncey Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03705032,
-35.19796828
]
}
},
{
"type": "Feature",
"properties": {
"name": "13 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06908395,
-35.37882495
]
}
},
{
"type": "Feature",
"properties": {
"name": "7 Baylis Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03049843,
-35.20169346
]
}
},
{
"type": "Feature",
"properties": {
"name": "15 Mcdowall Place, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06946494,
-35.37908886
]
}
},
{
"type": "Feature",
"properties": {
"name": "18 Jauncey Court, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03721725,
-35.19805563
]
}
},
{
"type": "Feature",
"properties": {
"name": "5 Jamieson Crescent, Kambah, ACT 2902"
},
"geometry": {
"type": "Point",
"coordinates": [
149.06864966,
-35.37733591
]
}
},
{
"type": "Feature",
"properties": {
"name": "5 Baylis Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03048051,
-35.20185603
]
}
},
{
"type": "Feature",
"properties": {
"name": "3 Baylis Place, Charnwood, ACT 2615"
},
"geometry": {
"type": "Point",
"coordinates": [
149.03046282,
-35.20202762
]
}
}
]
}
|
Notes/Probability.ipynb | ###Markdown
Probability Notationally, we write $P(E)$ to mean "The probability of event $E$". Dependence and Independence Mathematically, we say that two events E and F are independent if the probability that they both happen is the product of the probabilities that each one happens:$P(E,F) = P(E)P(F)$ Conditional Probability When two events $E$ and $F$ are independent, then by definition we have:$P(E,F) = P(E)P(F)$If they are not necessarily independent (and if the probability of $F$ is not zero), then we define the probability of $E$ "conditional on $F$" as:$P(E|F) = P(E,F)/P(F)$ This is the probability that $E$ happens, given that we know that $F$ happens. This is often rewritten as:$P(E,F) = P(E|F)P(F)$ Example code
###Code
import enum, random
# An Enum is a typed set of enumerated values. Used to make code more descriptive and readable.
class Kid(enum.Enum):
    BOY = 0
    GIRL = 1
def random_kid() -> Kid:
return random.choice([Kid.BOY, Kid.GIRL])
both_girls = 0
older_girl = 0
either_girl = 0
random.seed(0)
for _ in range(10000):
younger = random_kid()
older = random_kid()
if older == Kid.GIRL:
older_girl += 1
    if older == Kid.GIRL and younger == Kid.GIRL:
both_girls += 1
    if older == Kid.GIRL or younger == Kid.GIRL:
either_girl += 1
print("P(both | older):", both_girls / older_girl) # 0.514 ~ 1/2
print("P(both | either): ", both_girls / either_girl) # 0.342 ~ 1/3”
###Output
_____no_output_____
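###Markdown
The simulation above can be checked against the exact values implied by the definition $P(E|F) = P(E,F)/P(F)$, assuming boys and girls are equally likely and independent:
###Code
p_both = 1/4    # P(both children are girls)
p_older = 1/2   # P(the older child is a girl)
p_either = 3/4  # P(at least one child is a girl)
print("P(both | older) =", p_both / p_older)    # 0.5
print("P(both | either) =", p_both / p_either)  # 0.333...
###Output
_____no_output_____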
###Markdown
Bayes's Theorem One of the data scientist's best friends is Bayes's theorem, which is a way of "reversing" conditional probabilities. Let's say we need to know the probability of some event $E$ conditional on some other event $F$ occurring. But we only have information about the probability of $F$ conditional on $E$ occurring. $P(E \mid F) = \dfrac{P(F \mid E)\,P(E)}{P(F \mid E)\,P(E) + P(F \mid \neg E)\,P(\neg E)}$ Random Variables A **random variable** is a variable whose possible values have an associated probability distribution. A very simple random variable equals 1 if a coin flip turns up heads and 0 if the flip turns up tails. Continuous Distributions
###Code
def uniform_pdf(x: float) -> float:
    return 1 if 0 <= x < 1 else 0
def uniform_cdf(x: float) -> float:
"""Returns the probability that a uniform random variable is <= x"""
if x < 0: return 0 # uniform random is never less than 0
elif x < 1: return x # e.g. P(X <= 0.4) = 0.4
else: return 1 # uniform random is always less than 1
###Output
_____no_output_____
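###Markdown
A couple of quick calls to check that the uniform CDF behaves as described (these values follow directly from the definition):
###Code
print(uniform_cdf(-0.5))  # 0, below the support
print(uniform_cdf(0.4))   # 0.4
print(uniform_cdf(2.0))   # 1, above the support
###Output
_____no_output_____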
###Markdown
The Normal Distribution $\frac{1}{\sqrt{2\pi\sigma^2}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}$
###Code
import math
SQRT_TWO_PI = math.sqrt(2 * math.pi)
def normal_pdf(x: float, mu: float=0, sigma: float = 1) -> float:
return(math.exp(-(x-mu) ** 2 / 2 / sigma ** 2) / (SQRT_TWO_PI * sigma))
import matplotlib.pyplot as plt
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs,[normal_pdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs,[normal_pdf(x,sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs,[normal_pdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs,[normal_pdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend()
plt.title("Various Normal pdfs")
def normal_cdf(x: float, mu: float = 0, sigma: float = 1) -> float:
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs,[normal_cdf(x,sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs,[normal_cdf(x,sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs,[normal_cdf(x,sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs,[normal_cdf(x,mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend(loc=4) # bottom right
plt.title("Various Normal cdfs")
def inverse_normal_cdf(p: float,
mu: float = 0,
sigma: float = 1,
tolerance: float = 0.00001) -> float:
"""Find approximate inverse using binary search"""
# if not standard, compute standard and rescale
if mu != 0 or sigma != 1:
return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z = -10.0 # normal_cdf(-10) is (very close to) 0
hi_z = 10.0 # normal_cdf(10) is (very close to) 1
while hi_z - low_z > tolerance:
mid_z = (low_z + hi_z) / 2 # Consider the midpoint
mid_p = normal_cdf(mid_z) # and the cdf's value there
if mid_p < p:
low_z = mid_z # Midpoint too low, search above it
else:
hi_z = mid_z # Midpoint too high, search below it
return mid_z
###Output
_____no_output_____
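###Markdown
A quick sanity check of the binary-search inverse CDF (the exact digits depend on the tolerance, but the results should be close to the familiar standard normal quantiles):
###Code
print(inverse_normal_cdf(0.975))  # roughly  1.96
print(inverse_normal_cdf(0.025))  # roughly -1.96
print(inverse_normal_cdf(0.5))    # roughly  0.0
###Output
_____no_output_____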
###Markdown
The Central Limit Theorem A random variable defined as the average of a large number of independent and identically distributed random variables is itself approximately normally distributed.
###Code
import random
def bernoulli_trial(p: float) -> int:
"""Returns 1 with probability p and 0 with probability 1-p"""
return 1 if random.random() < p else 0
def binomial(n: int, p: float) -> int:
"""Returns the sum of n bernoulli(p) trials"""
return sum(bernoulli_trial(p) for _ in range(n))
###Output
_____no_output_____
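###Markdown
A single draw from the helper above, just to see it in action (illustrative parameters; any individual draw will scatter around $np$):
###Code
random.seed(0)
print(binomial(100, 0.5))  # some value near n * p = 50
###Output
_____no_output_____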
###Markdown
The mean of a $Bernoulli(p)$ is $p$, and its standard deviation is $\sqrt{p(1-p)}$. As $n$ gets large, a $Binomial(n,p)$ variable is approximately a normal random variable with mean $\mu = np$ and standard deviation $\sigma = \sqrt{np(1-p)}$
###Code
from collections import Counter
def binomial_histogram(p: float, n: int, num_points: int) -> None:
"""Picks points from a Binomial(n, p) and plots their histogram"""
data = [binomial(n, p) for _ in range(num_points)]
# use a bar chart to show the actual binomial samples
histogram = Counter(data)
plt.bar([x - 0.4 for x in histogram.keys()],
[v / num_points for v in histogram.values()],
0.8,
color='0.75')
mu = p * n
sigma = math.sqrt(n * p * (1 - p))
# use a line chart to show the normal approximation
xs = range(min(data), max(data) + 1)
ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma)
for i in xs]
plt.plot(xs,ys)
plt.title("Binomial Distribution vs. Normal Approximation")
plt.show()
###Output
_____no_output_____ |
3. Norms and Distances.ipynb | ###Markdown
Norms and Distances© 2018 Daniel Voigt Godoy 1. DefinitionFrom [Wikipedia](https://en.wikipedia.org/wiki/Norm_(mathematics)): ...a norm is a function that assigns a strictly positive length or size to each vector in a vector space — except for the zero vector, which is assigned a length of zero. 1.1 Euclidean DistanceYou probably know the most common norm of them all: $\ell_2$ norm (or distance). This is the ***Euclidean Distance*** commonly referred to as the distance between two points:$$\ell_2 = ||x||_2 = \sqrt{|x_1|^2 + \dots + |x_n|^2} = \sqrt{\sum_{i=1}^n|x_i|^2}$$Source: Wikipedia 1.2 Manhattan DistanceYou may also have heard of the $\ell_1$ norm (or distance). This is called ***Manhattan Distance***:$$\ell_1 = ||x||_1 = |x_1| + \dots + |x_n| = \sum_{i=1}^n|x_i|$$Source: Wikipedia 1.3 Minkowski Distance of order *p*There is a pattern to it... you add up all elements exponentiated to the "number" of the norm (1 or 2 in the examples above), then you take the "number"-root of the result.If we say this "number" is $p$, we can write the formula like this:$$||\boldsymbol{x}||_p = \bigg(\sum_{i=1}^{n}|x_i|^p\bigg)^{\frac{1}{p}}$$ 1.4 Infinity NormThis is a special case, which is equivalent to taking the maximum absolute value of all values:$$||\boldsymbol{x}||_{\infty} = max(|x_1|, \dots, |x_n|)$$ 2. ExperimentTime to try it yourself!The slider below allows you to change $p$ to get the contour plots for different norms.Use the slider to play with different configurations and answer the ***questions*** below.
###Code
from intuitiveml.algebra.Norm import *
from intuitiveml.utils import gen_button
norm = plotNorm()
vb = VBox(build_figure(norm), layout={'align_items': 'center'})
vb
###Output
_____no_output_____
###Markdown
Questions1. What happens to the general ***level*** of values (look at the colorscale) as $p$ increases?2. Let's compare Manhattan to Euclidean distances: - Using ***Manhattan Distance***, hover your mouse over any point along the ***x axis*** (y = 0) and note its coordinates: its Z value is the computed distance. - Using ***Euclidean Distance***, go to the same point and note its coordinates. What happens to the computed distance? Did it get bigger / smaller? - Repeat the process, but this time choose a point along the ***diagonal*** (x and y having the same value). How do the distances compare to each other? 1.) The full range of color values appears at l1; increasing the norm reduces the overall scale. 2.) The weights are on the x and y axes of the graph above, so the distance to the origin is the norm (the distance is measured according to the corresponding norm, e.g. Manhattan for l1). 3. Comparing Norms Here are plots for different $p$-norms, side by side, for easier comparison. It is also possible to have $p$ values smaller than one, which yield "pointy" figures like the first one. On the opposite end, if we use a $p$ value of 100, it is already pretty close to depicting the ***maximum*** value of the coordinates (as expected for the ***infinity norm***)
###Code
f = plot_norms()
###Output
_____no_output_____
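###Markdown
To make the formulas from section 1 concrete, here is a quick numerical check for a single illustrative vector (a classic 3-4-5 case that is easy to verify by hand):
###Code
import numpy as np
x = np.array([3.0, -4.0])
print(np.abs(x).sum())        # l1 / Manhattan: 7.0
print(np.sqrt((x**2).sum()))  # l2 / Euclidean: 5.0
print(np.abs(x).max())        # infinity norm:  4.0
# the same values via np.linalg.norm
print(np.linalg.norm(x, 1), np.linalg.norm(x, 2), np.linalg.norm(x, np.inf))
###Output
_____no_output_____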
###Markdown
4. Numpy[np.linalg.norm](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.norm.html) This material is copyright Daniel Voigt Godoy and made available under the Creative Commons Attribution (CC-BY) license ([link](https://creativecommons.org/licenses/by/4.0/)). Code is also made available under the MIT License ([link](https://opensource.org/licenses/MIT)).
###Code
from IPython.display import HTML
HTML('''<script>
function code_toggle() {
if (code_shown){
$('div.input').hide('500');
$('#toggleButton').val('Show Code')
} else {
$('div.input').show('500');
$('#toggleButton').val('Hide Code')
}
code_shown = !code_shown
}
$( document ).ready(function(){
code_shown=false;
$('div.input').hide()
});
</script>
<form action="javascript:code_toggle()"><input type="submit" id="toggleButton" value="Show Code"></form>''')
###Output
_____no_output_____
_____no_output_____ |
rating prediction models/Naive Bayes Classifier.ipynb | ###Markdown
Predicting Review rating from review text Naive Bayes Classifier Using 5 Classes (1,2,3,4 and 5 Rating)
###Code
%pylab inline
import warnings
warnings.filterwarnings('ignore')
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import nltk
from nltk.corpus import stopwords
# Importing the reviews dataset
reviews_dataset = pd.read_csv('reviews_restaurants_text.csv')
# Creating X and Y for the classifier. X is the review text and Y is the rating
x = reviews_dataset['text']
y = reviews_dataset['stars']
# Text preprocessing
import string
def text_preprocessing(text):
no_punctuation = [ch for ch in text if ch not in string.punctuation]
no_punctuation = ''.join(no_punctuation)
return [w for w in no_punctuation.split() if w.lower() not in stopwords.words('english')]
%%time
# Estimated time: 30 min
# Vectorization
# Converting each review into a vector using bag-of-words approach
from sklearn.feature_extraction.text import CountVectorizer
vector = CountVectorizer(analyzer=text_preprocessing).fit(x)
x = vector.transform(x)
# Splitting data into training and test set
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(x, y, test_size=0.20, random_state=0, shuffle =False)
# Building the Multinomial Naive Bayes model and fitting it to our training set
from sklearn.naive_bayes import MultinomialNB
classifier = MultinomialNB()
classifier.fit(X_train, Y_train)
# Using our trained classifier to predict the ratings from text
# Testing our model on the test set
preds = classifier.predict(X_test)
print("Actual Ratings(Stars): ",end = "")
display(Y_test[:15])
print("Predicted Ratings: ",end = "")
print(preds[:15])
###Output
Actual Ratings(Stars):
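###Markdown
With the fitted vectorizer and classifier, a single unseen review can be scored the same way (the review text below is made up purely for illustration):
###Code
sample_review = ["The food was amazing and the staff were friendly"]
print(classifier.predict(vector.transform(sample_review)))
###Output
_____no_output_____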
###Markdown
Evaluating the model Accuracy
###Code
# Accuracy of the model
from sklearn.metrics import accuracy_score
accuracy_score(Y_test, preds)
###Output
_____no_output_____
###Markdown
Precision and Recall of the model
###Code
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
print ('Precision: ' + str(precision_score(Y_test, preds, average='weighted')))
print ('Recall: ' + str(recall_score(Y_test,preds, average='weighted')))
###Output
Precision: 0.624972643164
Recall: 0.664210934471
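###Markdown
The scores above are weighted by class support; with five imbalanced rating classes it can also be instructive to look at the unweighted (macro) averages, using the same functions with a different `average` argument:
###Code
print('Macro precision: ' + str(precision_score(Y_test, preds, average='macro')))
print('Macro recall: ' + str(recall_score(Y_test, preds, average='macro')))
###Output
_____no_output_____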
###Markdown
Classification Report
###Code
# Evaluating the model
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(Y_test, preds))
print('\n')
print(classification_report(Y_test, preds))
###Output
[[ 2111 60 193 203 246]
[ 572 76 389 422 232]
[ 229 39 623 1237 494]
[ 116 19 168 2420 3865]
[ 151 38 70 1649 15326]]
precision recall f1-score support
1 0.66 0.75 0.70 2813
2 0.33 0.04 0.08 1691
3 0.43 0.24 0.31 2622
4 0.41 0.37 0.39 6588
5 0.76 0.89 0.82 17234
avg / total 0.62 0.66 0.63 30948
###Markdown
Confusion Matrix of the model
###Code
# citation: http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html#sphx-glr-auto-examples-model-selection-plot-confusion-matrix-py
import itertools
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
from sklearn import metrics
class_names = ['1','2','3','4','5']
# Compute confusion matrix
cnf_matrix = metrics.confusion_matrix(Y_test, preds
)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
###Output
_____no_output_____
###Markdown
Naive Bayes Classifier Using 2 Classes (1 and 5 Rating: Positive & Negative Reviews)
###Code
# Importing the datasets
reviews = pd.read_csv('reviews_restaurants_text.csv')
reviews['text'] = reviews['text'].str[2:-2]
# Reducing the dataset to 2 classes i.e 1 and 5 star rating
reviews['stars'][reviews.stars == 3] = 1
reviews['stars'][reviews.stars == 2] = 1
reviews['stars'][reviews.stars == 4] = 5
#Undersampling of the dataset to get a balanced dataset
review1 = reviews[reviews['stars'] == 1]
review5 = reviews[reviews['stars'] == 5][0:34062]
frames = [review1, review5]
reviews = pd.concat(frames)
# Creating X and Y for the classifier. X is the review text and Y is the rating
x2 = reviews['text']
y2 = reviews['stars']
# Vectorization
# Converting each review into a vector using bag-of-words approach
from sklearn.feature_extraction.text import CountVectorizer
vector2 = CountVectorizer(analyzer=text_preprocessing).fit(x2)
x2 = vector2.transform(x2)
# Splitting data into training and test set
from sklearn.model_selection import train_test_split
X2_train, X2_test, Y2_train, Y2_test = train_test_split(x2, y2, test_size=0.20, random_state=0)
# Building the Multinomial Naive Bayes model and fitting it to our training set
from sklearn.naive_bayes import MultinomialNB
classifier2 = MultinomialNB()
classifier2.fit(X2_train, Y2_train)
# Testing our model on the test set
Y2_pred = classifier2.predict(X2_test)
###Output
_____no_output_____
###Markdown
Classification Report
###Code
# Evaluating the model
from sklearn.metrics import confusion_matrix, classification_report
print(confusion_matrix(Y2_test, Y2_pred))
print('\n')
print(classification_report(Y2_test, Y2_pred))
###Output
[[6232 821]
[ 815 6112]]
precision recall f1-score support
1 0.88 0.88 0.88 7053
5 0.88 0.88 0.88 6927
avg / total 0.88 0.88 0.88 13980
###Markdown
Accuracy of the model
###Code
# Accuracy of the model
from sklearn.metrics import accuracy_score
accuracy_score(Y2_test, Y2_pred)
###Output
_____no_output_____
###Markdown
Precision and Recall of the model
###Code
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
print ('Precision: ' + str(precision_score(Y2_test, Y2_pred, average='weighted')))
print ('Recall: ' + str(recall_score(Y2_test, Y2_pred, average='weighted')))
###Output
Precision: 0.882976867141
Recall: 0.882975679542
###Markdown
Confusion Matrix of the model
###Code
class_names = ['Negative','Positive']
# Compute confusion matrix
cnf_matrix = metrics.confusion_matrix(Y2_test, Y2_pred)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized confusion matrix')
plt.show()
###Output
_____no_output_____ |
PLS_Final .ipynb | ###Markdown
Partial Least Square Function
###Code
# Imports needed by the functions and experiments below (numpy.linalg is assumed for `la`)
import numpy as np
import numpy.linalg as la
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression
def Normalize(X):
    '''Center and normalize the dataset; the dataset should be a numpy array.'''
return (X - np.mean(X, axis = 0))/(np.std(X, axis = 0))
def norm(x):
sum=0
for i in x:
sum = sum + i**2
return np.sqrt(sum)
def PLS(X,Y,ncomponents,tol=1e-6):
E,F=X,Y
T = []
W = []
Q = []
U = []
P = []
B = []
rY, cY = Y.shape
rX, cX = X.shape
for i in range(ncomponents):
index=np.random.choice(range(Y.shape[1]))
#u=Y[:,index]
u=np.random.rand(rY)
counter = 0
while(True):
w = E.T@u
w = w/norm(w)
t = E@w
t = t/norm(t)
q = F.T@t
q = q/norm(q)
u = F@q
if counter==0:
tlast=t
elif norm(tlast-t)<tol:
break
else:
tlast=t
counter=counter+1
b = t.T@u
p = E.T@t
B.append(b)
T.append(t)
P.append(p)
W.append(w)
Q.append(q)
U.append(u)
E = E-t.reshape(-1,1)@p.reshape(1,-1)
F = F-b*t.reshape(-1,1)@q.reshape(1,-1)
return (np.array(T),np.array(P),np.array(W),np.array(Q),np.array(U),np.diag(B))
###Output
_____no_output_____
###Markdown
Test Function on Wine Data
###Code
#Example1 Data : Wine
X1 = np.array([[7, 7, 13, 7],
[4, 3, 14, 7],
[10, 5, 12, 5],
[16, 7, 11, 3],
[13, 3, 10, 3]])
Y1 = np.array([[14, 7, 8],
[10, 7, 6],
[8, 5, 5],
[2, 4, 7],
[6, 2, 4]])
X1=Normalize(X1)
Y1=Normalize(Y1)
[T, P, W, Q, U, B] = PLS(X1,Y1,2)
P = P.T
Q = Q.T
P
BPLS = la.pinv(P.T)@[email protected]
BPLS
###Output
_____no_output_____
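###Markdown
With the coefficient matrix in hand, fitted values for the wine data are obtained by a plain matrix product, the same way BPLS is used in the cross-validation cells further down:
###Code
Y1_hat = X1 @ BPLS
Y1_hat
###Output
_____no_output_____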
###Markdown
Compare OLS and PLS when there is only one solution
###Code
# OLS vs PLS
X_sim = np.random.randn(5, 5)
X_sim
Y_sim = np.random.randn(5,1)
Y_sim
X_sim = Normalize(X_sim)
Y_sim = Normalize(Y_sim)
from sklearn.linear_model import LinearRegression
OLS = LinearRegression()
B_O = OLS.fit(X_sim, Y_sim).coef_.T
B_O
[T, P, W, Q, U, B] = PLS(X_sim,Y_sim,5)
P = P.T
Q = Q.T
B_P = la.pinv(P.T)@[email protected]
B_P
np.allclose(B_O, B_P)
pls = PLSRegression(n_components=5)
pls.fit(X_sim, Y_sim).coef_
###Output
/Applications/anaconda3/lib/python3.7/site-packages/sklearn/cross_decomposition/pls_.py:292: UserWarning: Y residual constant at iteration 4
warnings.warn('Y residual constant at iteration %s' % k)
###Markdown
PLS Application & Comparison
###Code
#Import cars data
df = pd.read_excel("/Users/rachelxing/Desktop/STA663/cars_pls_regression.xls")
df.head()
X = df.iloc[:,:-3].to_numpy()
Y = df.iloc[:, -3:].to_numpy()
X.shape, Y.shape
#normalize X and Y
X = Normalize(X)
Y = Normalize(Y)
#PLS + leave one out (20 fold)
kf = KFold(n_splits=20, random_state=None, shuffle=False)
kf.get_n_splits(X)
y_predict_pls = []
y_test_pls = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
[T, P, W, Q, U, B] = PLS(X_train,y_train,7)
P = P.T
Q = Q.T
BPLS = la.pinv(P.T)@[email protected]
y_test_pls.append(y_test)
y_predict_pls.append(X_test@BPLS)
y_predict_pls = np.array(y_predict_pls).reshape(20,3)
y_test_pls = np.array(y_test_pls).reshape(20,3)
mean_squared_error(y_test_pls, y_predict_pls)
#OLS on cars data + leave one out
y_predict_ols = []
y_test_ols = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg1 = LinearRegression().fit(X_train, y_train[:,0])
reg2 = LinearRegression().fit(X_train, y_train[:,1])
reg3 = LinearRegression().fit(X_train, y_train[:,2])
p1 = reg1.predict(X_test)
p2 = reg2.predict(X_test)
p3 = reg3.predict(X_test)
y_test_ols.append(y_test)
y_predict_ols.append([p1 ,p2, p3])
y_predict_ols = np.array(y_predict_ols).reshape(20,3)
y_test_ols = np.array(y_test_ols).reshape(20,3)
mean_squared_error(y_test_ols, y_predict_ols)
#Ridge Regression
#Select best parameter alpha
ridge = Ridge()
parameters = {'alpha' : [1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]}
ridge_reg = GridSearchCV(ridge, parameters, scoring = 'neg_mean_squared_error', cv = 20)
ridge_reg.fit(X, Y)
print(ridge_reg.best_params_)
print(ridge_reg.best_score_)
#Ridge Regression
y_predict_ridge = []
y_test_ridge = []
for train_index, test_index in kf.split(X):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg = Ridge(alpha=5)
reg.fit(X_train, y_train)
y_test_ridge.append(y_test)
y_predict_ridge.append(reg.predict(X_test))
y_predict_ridge = np.array(y_predict_ridge).reshape(20,3)
y_test_ridge = np.array(y_test_ridge).reshape(20,3)
mean_squared_error(y_test_ridge, y_predict_ridge)
#Principal Component Regression
pca = PCA(n_components=7)
pca.fit(X.T)
print(pca.explained_variance_ratio_)
Z = pca.components_.T
X.shape, pca.components_.T.shape
#Regress on Principal components
y_predict_pcr = []
y_test_pcr = []
for train_index, test_index in kf.split(Z):
X_train, X_test = Z[train_index], Z[test_index]
y_train, y_test = Y[train_index], Y[test_index]
reg1 = LinearRegression().fit(X_train, y_train[:,0])
reg2 = LinearRegression().fit(X_train, y_train[:,1])
reg3 = LinearRegression().fit(X_train, y_train[:,2])
p1 = reg1.predict(X_test)
p2 = reg2.predict(X_test)
p3 = reg3.predict(X_test)
y_test_pcr.append(y_test)
y_predict_pcr.append([p1 ,p2, p3])
y_predict_pcr = np.array(y_predict_pcr).reshape(20,3)
y_test_pcr = np.array(y_test_pcr).reshape(20,3)
mean_squared_error(y_test_pcr, y_predict_pcr)
###Output
_____no_output_____
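###Markdown
For easier comparison, the four cross-validated errors computed above can be printed side by side (this simply reuses the arrays already in memory):
###Code
for name, y_t, y_p in [("PLS", y_test_pls, y_predict_pls),
                       ("OLS", y_test_ols, y_predict_ols),
                       ("Ridge", y_test_ridge, y_predict_ridge),
                       ("PCR", y_test_pcr, y_predict_pcr)]:
    print(name, mean_squared_error(y_t, y_p))
###Output
_____no_output_____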
###Markdown
Visualization
###Code
df_test =pd.DataFrame(Y, columns=['N_conscity', 'N_price', 'N_symboling'] )
df_test.head()
df_test[['PLS_conscity', 'PLS_price', 'PLS_symboling']] = pd.DataFrame(y_predict_pls)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('PLS Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["PLS_conscity"] , c = 'black')
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'black', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["PLS_price"] , c = 'black')
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'black', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["PLS_symboling"] , c = 'black')
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'black', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['OLS_conscity', 'OLS_price', 'OLS_symboling']] = pd.DataFrame(y_predict_ols)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('OLS Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["OLS_conscity"])
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["OLS_price"] )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["OLS_symboling"] )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['Ridge_conscity', 'Ridge_price', 'Ridge_symboling']] = pd.DataFrame(y_predict_ridge)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('Ridge Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["Ridge_conscity"], c = 'orange' )
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'orange', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["Ridge_price"], c = 'orange' )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'orange', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["Ridge_symboling"], c = 'orange' )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'orange', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
df_test[['PCR_conscity', 'PCR_price', 'PCR_symboling']] = pd.DataFrame(y_predict_pcr)
df_test.head()
fig, axs = plt.subplots(1,3, figsize = (15, 5))
fig.suptitle('PCR Performance', fontsize=20)
axs[0].scatter(df_test["N_conscity"], df_test["PCR_conscity"], c = 'navy' )
axs[0].plot([0, 1], [0, 1], transform=axs[0].transAxes, c = 'navy', linestyle='dashed')
axs[0].set_xlabel('Conscity (test)')
axs[0].set_ylabel('Conscity (predict)')
axs[1].scatter(df_test["N_price"], df_test["PCR_price"], c = 'navy' )
axs[1].plot([0, 1], [0, 1], transform=axs[1].transAxes, c = 'navy', linestyle='dashed')
axs[1].set_xlabel('Price (test)')
axs[1].set_ylabel('Price (predict)')
axs[2].scatter(df_test["N_symboling"], df_test["PCR_symboling"], c = 'navy' )
axs[2].plot([0, 1], [0, 1], transform=axs[2].transAxes, c = 'navy', linestyle='dashed')
axs[2].set_xlabel('Symboling (test)')
axs[2].set_ylabel('Symboling (predict)')
###Output
_____no_output_____ |
Web_Crawling_Project_presentation.ipynb | ###Markdown
Web_Crawling A crawling project to kick off the day - I thought it would be nice to have a service that automatically gathers the information I want at the start of each day and sends it to me as a message. The existing services included information I did not want, so I stopped using or paying for them, and this time I decided to build one myself, which is how this project started. Crawled sites Daum News 1. [media.daum.net](https://media.daum.net/) K-Weather 2. [www.kweather.co.kr](http://www.kweather.co.kr/main/main.html) Daum Dictionary 3. [dic.daum.net/word](https://dic.daum.net/word/view.do?wordid=ekw000132285&q=project) GitHub[Web-Crawling-repo](https://github.com/LeeJaeKeun14/Web_Crawlingneed-install) Web crawling - Daum News- scrapy - K-Weather- selenium - Daum Dictionary- selenium Package layout Web_Crawling > Make_Module.ipynb : Jupyter notebook that builds the module files> > \_\_init__.py : >>>```python>>__all__ = ["weather", "slack_file", "slack_msg_1", "diction", "mongodb", "make_msg"]>>```> > > **\_\_pycache__** : cache data saved when the package is run> > diction.csv : CSV file of the crawled English words >> diction.py : >>>```python>> import os>> import pandas as pd>> from selenium import webdriver>> def open_driver(driver, name): move the driver to the URL for the given word>> def find_dic(driver, dic): store each piece of information about the English word from the driver into dic>>```>> eng.csv : CSV file of the English words to crawl>> make_msg.py :>>```python>> def make_msg(df, col): convert the crawled word information into a single str for output and return it>>```>> mongodb.py : >>```python>> import pymongo>>>> client = pymongo.MongoClient("mongodb:// server : ip")>> db = client.diction>> collection = db.english>>```>> **news** : scrapy startproject>> items.py : news category, news title, link>>>> settings.py :>>> since the Daum News robots.txt is Disallow,>>> changed ROBOTSTXT_OBEY = False>>>> spider.py. : collect the top 5 news titles and links for each category>>>> slack_file.py :>>```python>> import os>> import slack>>>> def send_file(): send the crawled weather image data to Slack>>```>> slack_msg.py :>>```python>> import requests>> import json>>>> def send_msg(msg): send the crawled str information to Slack>>```>> weather.png : PNG file of the crawled weather image>> weather.py :>>```python>> from selenium import webdriver>> import time>> import os>> >> def weather(): fetch the weather information as an image and save it>>```>>> Problems encountered during the project A path problem when running from crontab run.sh- run.sh\> rm -rf ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/news.csv\> cd ~/python3/notebook/Web_Crawling_Project/Web_Crawling/news/\> scrapy crawl News -o news.csv\/home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: 3: /home/ubuntu/python3/notebook/Web_Crawling_Project/run.sh: scrapy: not found Takeaways I carried out a project that fetches data from the internet, processes it, stores it in a database, and automates a package that delivers the data directly to me. Finishing such a project from start to end gave me a real sense of completion and accomplishment, and since the data is genuinely useful day to day, I think it is the most enjoyable project I have done so far. Next steps 1. Use the AWS Lambda service- use boto3 so that the server is only started and crawling runs at specific times instead of keeping a server on all the time- set a time trigger in Lambda and call the boto3 function- finally, complete a service that automatically sends the mail every morning 2. Use a word cloud or an analysis model- develop the service further so that the crawled data is processed once more into just the data that is needed before it is sent
###Code
import WC
WC.show()
###Output
_____no_output_____ |
modeling Corona.ipynb | ###Markdown
The country set in `country` is selected, and a `days` column is created indicating the number of days elapsed since January 1. Then `x` and `y` are created as lists of the `days` column and the country's case counts, respectively.
###Code
df = df_original
df = df[['date', country]]
df = df[True != df[country].isna()]
df = df.assign(days = df['date'].map(lambda x : (datetime.strptime(x, '%Y-%m-%d') - first_day).days))
x = list(df.iloc[:, 2])
y = list(df.iloc[:, 1])
x
y
###Output
_____no_output_____
###Markdown
Then the `logistic_curve` function created earlier is used with `curve_fit`, but what I do not really understand is what it does. I gather that in the `p0` parameter, the content of `p0_log` is equivalent to:$$\frac{40000}{1 + e^{-(x - 20)/5}}$$but I do not quite understand why those parameters.
###Code
fit = curve_fit(logistic_model, xdata=x, ydata=y, p0=p0_log, maxfev=2000)
a, b, c = fit[0]
errors = np.sqrt(np.diag(fit[1]))
a
b
c
###Output
_____no_output_____
###Markdown
Then with the `fsolve` function I do not really know what it does, because it is told to solve `logistic_model` with `b` as the main argument, so that it becomes$$\frac{17189.69}{1 + e^{-(75.60 - 75.60)/2.37}} - 17189.69 = \frac{17189.69}{1 + e^{0}} - 17189.69$$but I do not understand why it has to be solved through `b`.
###Code
sol = int(fsolve(lambda z : logistic_model(z, a, b, c) - int(c), b))
last_day = datetime.strftime(first_day + timedelta(days=sol), '%Y-%m-%d')
###Output
_____no_output_____
###Markdown
In the end, `sol` determines the number of days to predict. I suppose the key is in fact that `b` corresponds to the days, but I am not sure.
###Code
print("Last day of infections : ", last_day , " (approximately)")
exp_fit = curve_fit(exponential_model, x, y, p0=p0_exp)
pred_x = list(range(max(x), sol))
fig = plt.figure(figsize = (10, 10))
plt.scatter(df.iloc[:, 2], df.iloc[:, 1], label='Actual data')
plt.plot(x+pred_x, [logistic_model(i,fit[0][0],fit[0][1],fit[0][2]) for i in x+pred_x], label="Logistic curve", alpha=0.7, color="green")
plt.plot(x+pred_x, [exponential_model(i,exp_fit[0][0],exp_fit[0][1],exp_fit[0][2]) for i in x+pred_x], label="Exponential curve",alpha=0.6, color = "red")
plt.legend()
plt.xlabel("Days from 1 January 2020")
plt.ylabel("Amount of infected people")
plt.ylim((min(y)*0.9,c*1.1))
plt.show()
###Output
_____no_output_____ |
notebooks/forecast-like-observations.ipynb | ###Markdown
Forecast like observations Use observation files to produce new files that fit the shape of a forecast file. That makes them easier to use for ML purposes. At the core of this task is the forecast_like_observations function provided by the organizers. This notebook loads the appropriate forecasts and calls this function to generate corresponding obs, from our own set of obs files. The obs files were modified to make them more consistent w/r to nans, see *land-mask-investigate.ipynb*.
###Code
import climetlab as cml
import climetlab_s2s_ai_challenge
import dask
import dask.array as da
import dask.distributed
import dask_jobqueue
import pathlib
import xarray as xr
from crims2s.util import fix_dataset_dims
DATA_PATH = '***BASEDIR***'
data_path = pathlib.Path(DATA_PATH)
###Output
_____no_output_____
###Markdown
Boot dask cluster
###Code
cluster = dask_jobqueue.SLURMCluster(env_extra=['source ***HOME***.bash_profile','conda activate s2s'])
cluster.scale(jobs=4)
client = dask.distributed.Client(cluster)
client
###Output
_____no_output_____
###Markdown
Temperature
###Code
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 't2m' in f.stem]
forecast_files[:10]
forecast = xr.open_mfdataset(forecast_files, preprocess=fix_dataset_dims)
obs = xr.open_dataset(data_path / 'obs_t2m_interp_remask.nc')
forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
forecast_shaped_t2m
sample = forecast_shaped_t2m.isel(forecast_dayofyear=0, forecast_year=10, lead_time=40)
sample.valid_time.item()
(sample == obs.sel(time=sample.valid_time)).t2m.plot()
###Output
_____no_output_____
###Markdown
Seems legit!
###Code
forecast_shaped_t2m.isel(forecast_year=0).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_2000.nc')
forecast_shaped_t2m.isel(forecast_year=[0])
forecast_files[:10]
for f in forecast_files:
print(f)
forecast = fix_dataset_dims(xr.open_dataset(f))
forecast_shaped_t2m = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
day_of_year = forecast_shaped_t2m.forecast_time.dt.dayofyear[0].item()
forecast_shaped_t2m = forecast_shaped_t2m.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
forecast_shaped_t2m.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{day_of_year:03}.nc')
for y in forecast_shaped_t2m.forecast_year:
print(y.item())
for y in forecast_shaped_t2m.forecast_year:
print(y.item())
forecast_shaped_t2m.sel(forecast_year=[y]).to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_t2m_forecast_shape_{y.item()}.nc')
forecast_shaped_t2m.to_netcdf(data_path / 'obs_t2m_forecast_shape.nc')
forecast_shaped_t2m.to_netcdf('***BASEDIR***obs_t2m_forecast_shape.nc')
del obs
del forecast
del forecast_shaped_t2m
###Output
_____no_output_____
###Markdown
Precipitation
###Code
forecast_dir = data_path / 'training-input'
forecast_files = [f for f in forecast_dir.iterdir() if 'ecmwf' in f.stem and 'tp' in f.stem]
forecast_files[:10]
obs = xr.open_dataset(data_path / 'obs_pr_interp_remask.nc')
for f in forecast_files:
forecast = fix_dataset_dims(xr.open_dataset(f))
forecast_shaped_tp = climetlab_s2s_ai_challenge.extra.forecast_like_observations(forecast, obs)
day_of_year = forecast_shaped_tp.forecast_time.dt.dayofyear[0].item()
forecast_shaped_tp = forecast_shaped_tp.expand_dims('forecast_dayofyear').assign_coords(forecast_dayofyear=[day_of_year])
forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp.forecast_time.dt.day[0].item()
day_of_year = 289
forecast_shaped_tp.to_netcdf(data_path / 'processed' / 'training-output-reference' / f'obs_tp_forecast_shape_{day_of_year:03}.nc')
forecast_shaped_tp
sample = forecast.isel(forecast_year=10, lead_time=10)
sample
obs
forecast_shaped_tp
sample = forecast_shaped_tp.isel(forecast_year=10, lead_time=15)
sample
obs_of_sample = obs.sel(time=slice(sample.forecast_time, sample.forecast_time + sample.lead_time)).isel(time=slice(None, -1))
obs_of_sample
(obs_of_sample.sum(dim='time').pr == sample.tp).plot()
###Output
_____no_output_____ |
Seaborn/Pair plot.ipynb | ###Markdown
https://seaborn.pydata.org/examples/scatterplot_matrix.html
###Code
import seaborn as sns
%matplotlib inline
sns.set()
df = sns.load_dataset("iris")
df.head()
sns.pairplot(df, hue="species")
###Output
_____no_output_____ |
Dimensionality Reduction/Linear Discriminant Analysis/linear_discriminant_analysis.ipynb | ###Markdown
Linear Discriminant Analysis (LDA) Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Wine.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values
###Output
_____no_output_____
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Applying LDA
###Code
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components = 2)
X_train = lda.fit_transform(X_train, y_train)
X_test = lda.transform(X_test)
###Output
_____no_output_____
###Markdown
Training the Logistic Regression model on the Training set
###Code
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = classifier.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[14 0 0]
[ 0 16 0]
[ 0 0 6]]
###Markdown
Visualising the Training set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
###Markdown
Visualising the Test set results
###Code
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green', 'blue')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green', 'blue'))(i), label = j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('LD1')
plt.ylabel('LD2')
plt.legend()
plt.show()
###Output
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
|
examples_ja/tutorial001_qubo_ja.ipynb | ###Markdown
Basics of computation with QUBO Here we explain how to build a basic optimization problem, focusing mainly on the QUBO matrix. We call the Wildqat SDK and create an instance.
###Code
!pip3 install wildqat
import wildqat as wq
a = wq.opt()
###Output
_____no_output_____
###Markdown
Next we create a problem. Problems are usually described in a format called QUBO. So, first we build the QUBO matrix. Here we take an example problem and enter the matrix shown below.
###Code
a.qubo = [[4,-4,-4],[0,4,-4],[0,0,4]]
###Output
_____no_output_____
###Markdown
Now we run the computation with it. This time, using the SA (simulated annealing) algorithm:
###Code
a.sa()
###Output
1.412949800491333
###Markdown
As you can see, the problem has been solved. The number above is the execution time, shown for reference. The answer came out as all 1s. With that, the problem is solved and the solution has been extracted. Each element of the solution is always either 1 or 0. The QUBO used here is automatically converted internally into what is called an Ising matrix, and we can inspect its contents if we like.
###Code
print(a.J)
###Output
[[0, -1, -1], [0, 0, -1], [0, 0, 0]]
|
CRNN_train.ipynb | ###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2013/')
gt_util_test = GTUtility('data/ICDAR2013/', test=True)
gt_util = GTUtility.merge(gt_util_train, gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = GTUtility.split(gt_util, split=0.8)
#print(gt_util)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet), gru=False)
experiment = 'crnn_lstm_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = Adam(lr=0.02, epsilon=0.001, clipnorm=1.)
# dummy loss, loss is computed in lambda layer
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit_generator(generator=gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=100,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
#ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
ModelSnapshot(checkdir, 10000),
Logger(checkdir)
],
initial_epoch=0)
###Output
_____no_output_____
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[30,0.5])
plt.imshow(img, cmap='gray')
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201806162129_crnn_lstm_synthtext/weights.300000.h5')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.300000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recogniton_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recogniton_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance %0.3f' % (mean_ed))
print('mean normalized editdistance %0.3f' % (mean_ed_norm))
print('character recogniton rate %0.3f' % (character_recogniton_rate))
print('word recognition rate %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
57.7 ms ± 178 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Example plots
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from ssd_utils import calc_memory_usage, count_parameters
crnn_lstm = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=False)
crnn_gru = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True)
calc_memory_usage(crnn_lstm)
count_parameters(crnn_lstm)
calc_memory_usage(crnn_gru)
count_parameters(crnn_gru)
###Output
model memory usage 38.17 MB
trainable 7957847
non-trainable 2048
###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2015_FST/')
gt_util_test = GTUtility('data/ICDAR2015_FST/', test=True)
gt_util = GTUtility.merge(gt_util_train, gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.8)
#print(gt_util)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet), gru=False)
experiment = 'crnn_lstm_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = Adam(lr=0.02, epsilon=0.001, clipnorm=1.)
# dummy loss, loss is computed in lambda layer
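# (Keras requires some loss function here; the actual CTC loss is already computed inside the model by a Lambda layer and returned as y_pred, so this "loss" simply passes it through)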
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit_generator(generator=gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=100,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
#ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
ModelSnapshot(checkdir, 10000),
Logger(checkdir)
],
initial_epoch=0)
###Output
_____no_output_____
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[30,0.5])
plt.imshow(img, cmap='gray')
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201806162129_crnn_lstm_synthtext/weights.300000.h5')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.300000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recognition_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recognition_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance              %0.3f' % (mean_ed))
print('mean normalized editdistance   %0.3f' % (mean_ed_norm))
print('character recognition rate     %0.3f' % (character_recognition_rate))
print('word recognition rate          %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
57.7 ms ± 178 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Example plots
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from ssd_utils import calc_memory_usage, count_parameters
crnn_lstm = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=False)
crnn_gru = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True)
calc_memory_usage(crnn_lstm)
count_parameters(crnn_lstm)
calc_memory_usage(crnn_gru)
count_parameters(crnn_gru)
###Output
model memory usage 38.17 MB
trainable 7957847
non-trainable 2048
###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2015_FST/')
gt_util_test = GTUtility('data/ICDAR2015_FST/', test=True)
gt_util = gt_util_train.merge(gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.8)
#print(gt_util)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
#input_width = 1024
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet))
experiment = 'crnn_lstm_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), cnn=True)
#experiment = 'crnn_cnn_synthtext'
#experiment = 'crnn_cnn_synthtext_concat_continued'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len, concatenate=False)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len, concatenate=False)
#model.load_weights('./checkpoints/202001290841_crnn_cnn_synthtext_concat/weights.260000.h5')
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
optimizer = tf.optimizers.SGD(learning_rate=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = tf.optimizers.SGD(learning_rate=0.001, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = tf.optimizers.Adam(learning_rate=0.02, epsilon=0.001, clipnorm=1.)
# dummy loss, loss is computed in lambda layer
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit(gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=100,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
#ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
ModelSnapshot(checkdir, 10000),
Logger(checkdir)
],
initial_epoch=0)
###Output
_____no_output_____
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
plt.figure(figsize=[30,0.5])
img = d[0]['image_input'][i]
img = np.transpose(img, (1,0,2)) / 255
if img.shape[-1] == 1:
plt.imshow(img[:,:,0], cmap='gray')
else:
plt.imshow(img[:,:,(2,1,0)])
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201806162129_crnn_lstm_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/202001111100_crnn_cnn_synthtext/weights.600000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recognition_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recognition_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance              %0.3f' % (mean_ed))
print('mean normalized editdistance   %0.3f' % (mean_ed_norm))
print('character recognition rate     %0.3f' % (character_recognition_rate))
print('word recognition rate          %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
_____no_output_____
###Markdown
Example plots
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from utils.model import calc_memory_usage, count_parameters
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
model_names = ['lstm', 'gru', 'cnn']
models = [
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, cnn=True),
]
for n, m in zip(model_names, models):
print(n)
calc_memory_usage(m)
count_parameters(m)
print()
%%timeit
res = models[0].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[1].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[2].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
import cv2
batch = next(gen_train.generate())
img = np.concatenate(batch[0]['image_input'], axis=0)[:,:,0].T
print(img.shape)
cv2.imwrite('./data/images/text_long.jpg', img)
###Output
_____no_output_____
###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2015_FST/')
gt_util_test = GTUtility('data/ICDAR2015_FST/', test=True)
gt_util = GTUtility.merge(gt_util_train, gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = GTUtility.split(gt_util, split=0.8)
#print(gt_util)
from data_cocotext import GTUtility
file_name = 'gt_util_cocotext_val.pkl'
with open(file_name, 'rb') as f:
gt_util_val = pickle.load(f)
#gt_util_train, gt_util_val = GTUtility.split(gt_util, split=0.8)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet), gru=False)
experiment = 'crnn_lstm_cocotext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
#optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = Adam(lr=0.02, epsilon=0.001, clipnorm=1.)
optimizer = Adam(lr=0.1, epsilon=0.001, clipnorm=1.)
regularizer = keras.regularizers.l2(5e-4) # None if disabled
for l in model.layers:
if l.__class__.__name__.startswith('Conv'):
l.kernel_regularizer = regularizer
# dummy loss, loss is computed in lambda layer
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit_generator(generator=gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=30,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
#ModelSnapshot(checkdir, 1000),
Logger(checkdir)
],
initial_epoch=0)
###Output
Epoch 1/30
NEW epoch
NEW epoch
432/433 [============================>.] - ETA: 0s - loss: 21.7267NEW epoch
433/433 [==============================] - 438s 1s/step - loss: 21.7169 - val_loss: 22.2391
Epoch 2/30
432/433 [============================>.] - ETA: 0s - loss: 13.1322NEW epoch
433/433 [==============================] - 427s 986ms/step - loss: 13.1253 - val_loss: 31.1976
Epoch 3/30
134/433 [========>.....................] - ETA: 4:31 - loss: 7.1842
Saving model ./checkpoints/201904030137_crnn_lstm_cocotext/weights.001000.h5
432/433 [============================>.] - ETA: 0s - loss: 5.9171NEW epoch
433/433 [==============================] - 427s 985ms/step - loss: 5.9084 - val_loss: 41.4059
Epoch 4/30
432/433 [============================>.] - ETA: 0s - loss: 3.5704NEW epoch
433/433 [==============================] - 426s 983ms/step - loss: 3.5694 - val_loss: 46.6011
Epoch 5/30
268/433 [=================>............] - ETA: 2:29 - loss: 3.4652
Saving model ./checkpoints/201904030137_crnn_lstm_cocotext/weights.002000.h5
432/433 [============================>.] - ETA: 0s - loss: 3.4700NEW epoch
433/433 [==============================] - 427s 986ms/step - loss: 3.4751 - val_loss: 46.9480
Epoch 6/30
432/433 [============================>.] - ETA: 0s - loss: 3.3264NEW epoch
433/433 [==============================] - 427s 986ms/step - loss: 3.3234 - val_loss: 50.7635
Epoch 7/30
318/433 [=====================>........] - ETA: 1:44 - loss: 3.8653
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[30,0.5])
plt.imshow(img, cmap='gray')
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201904090148_crnn_lstm_cocotext_pretrained_v13/')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.300000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recognition_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recognition_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance              %0.3f' % (mean_ed))
print('mean normalized editdistance   %0.3f' % (mean_ed_norm))
print('character recognition rate     %0.3f' % (character_recognition_rate))
print('word recognition rate          %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
res
###Output
_____no_output_____
###Markdown
Example plots
###Code
g = gen_train.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(40):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from ssd_utils import calc_memory_usage, count_parameters
crnn_lstm = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=False)
crnn_gru = CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True)
calc_memory_usage(crnn_lstm)
count_parameters(crnn_lstm)
calc_memory_usage(crnn_gru)
count_parameters(crnn_gru)
###Output
model memory usage 38.17 MB
trainable 7957847
non-trainable 2048
###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2015_FST/')
gt_util_test = GTUtility('data/ICDAR2015_FST/', test=True)
gt_util = gt_util_train.merge(gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.8)
#print(gt_util)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet))
experiment = 'crnn_lstm_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), cnn=True)
#experiment = 'crnn_cnn_synthtext'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len)
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = Adam(lr=0.02, epsilon=0.001, clipnorm=1.)
# dummy loss, loss is computed in lambda layer
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit_generator(generator=gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=100,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
#ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
ModelSnapshot(checkdir, 10000),
Logger(checkdir)
],
initial_epoch=0)
###Output
_____no_output_____
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[30,0.5])
plt.imshow(img, cmap='gray')
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201806162129_crnn_lstm_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/202001111100_crnn_cnn_synthtext/weights.600000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recognition_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recognition_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance              %0.3f' % (mean_ed))
print('mean normalized editdistance   %0.3f' % (mean_ed_norm))
print('character recognition rate     %0.3f' % (character_recognition_rate))
print('word recognition rate          %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
57.7 ms ± 178 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Example plots
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from utils.model import calc_memory_usage, count_parameters
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
model_names = ['lstm', 'gru', 'cnn']
models = [
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, cnn=True),
]
for n, m in zip(model_names, models):
print(n)
calc_memory_usage(m)
count_parameters(m)
print()
%%timeit
res = models[0].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[1].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[2].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
3.68 ms ± 24.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Data
###Code
from data_icdar2015fst import GTUtility
gt_util_train = GTUtility('data/ICDAR2015_FST/')
gt_util_test = GTUtility('data/ICDAR2015_FST/', test=True)
gt_util = gt_util_train.merge(gt_util_test)
#print(gt_util)
from data_synthtext import GTUtility
#gt_util = GTUtility('data/SynthText/', polygon=True)
file_name = 'gt_util_synthtext_seglink.pkl'
#pickle.dump(gt_util, open(file_name,'wb'))
with open(file_name, 'rb') as f:
gt_util = pickle.load(f)
gt_util_train, gt_util_val = gt_util.split(0.8)
#print(gt_util)
###Output
_____no_output_____
###Markdown
Model
###Code
from crnn_utils import alphabet87 as alphabet
input_width = 256
#input_width = 1024
input_height = 32
batch_size = 128
input_shape = (input_width, input_height, 1)
model, model_pred = CRNN(input_shape, len(alphabet))
experiment = 'crnn_lstm_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), gru=True)
#experiment = 'crnn_gru_synthtext'
#model, model_pred = CRNN(input_shape, len(alphabet), cnn=True)
#experiment = 'crnn_cnn_synthtext'
#experiment = 'crnn_cnn_synthtext_concat_continued'
max_string_len = model_pred.output_shape[1]
gen_train = InputGenerator(gt_util_train, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len, concatenate=False)
gen_val = InputGenerator(gt_util_val, batch_size, alphabet, input_shape[:2],
grayscale=True, max_string_len=max_string_len, concatenate=False)
#model.load_weights('./checkpoints/202001290841_crnn_cnn_synthtext_concat/weights.260000.h5')
###Output
_____no_output_____
###Markdown
Training
###Code
checkdir = './checkpoints/' + time.strftime('%Y%m%d%H%M') + '_' + experiment
if not os.path.exists(checkdir):
os.makedirs(checkdir)
with open(checkdir+'/source.py','wb') as f:
source = ''.join(['# In[%i]\n%s\n\n' % (i, In[i]) for i in range(len(In))])
f.write(source.encode())
optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)
#optimizer = Adam(lr=0.02, epsilon=0.001, clipnorm=1.)
# dummy loss, loss is computed in lambda layer
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=optimizer)
#model.summary()
model.fit_generator(generator=gen_train.generate(), # batch_size here?
steps_per_epoch=gt_util_train.num_objects // batch_size,
epochs=100,
validation_data=gen_val.generate(), # batch_size here?
validation_steps=gt_util_val.num_objects // batch_size,
callbacks=[
#ModelCheckpoint(checkdir+'/weights.{epoch:03d}.h5', verbose=1, save_weights_only=True),
ModelSnapshot(checkdir, 10000),
Logger(checkdir)
],
initial_epoch=0)
###Output
_____no_output_____
###Markdown
Predict
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
plt.figure(figsize=[30,0.5])
img = d[0]['image_input'][i]
img = np.transpose(img, (1,0,2)) / 255
if img.shape[-1] == 1:
plt.imshow(img[:,:,0], cmap='gray')
else:
plt.imshow(img[:,:,(2,1,0)])
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()
print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
###Output
_____no_output_____
###Markdown
Test
###Code
model.load_weights('./checkpoints/201806162129_crnn_lstm_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/201806190711_crnn_gru_synthtext/weights.400000.h5')
#model.load_weights('./checkpoints/202001111100_crnn_cnn_synthtext/weights.600000.h5')
g = gen_val.generate()
n = 100000
#n = batch_size
mean_ed = 0
mean_ed_norm = 0
mean_character_recognition_rate = 0
sum_ed = 0
char_count = 0
correct_word_count = 0
word_recognition_rate = 0
j = 0
while j < n:
d = next(g)
res = model_pred.predict(d[0]['image_input'])
for i in range(len(res)):
if not j < n: break
j += 1
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
sum_ed += ed
char_count += len(gt_str)
if ed == 0.: correct_word_count += 1
#print('%20s %20s %f' %(gt_str, res_str, ed))
mean_ed /= j
mean_ed_norm /= j
character_recognition_rate = (char_count-sum_ed) / char_count
word_recognition_rate = correct_word_count / j
print()
print('mean editdistance              %0.3f' % (mean_ed))
print('mean normalized editdistance   %0.3f' % (mean_ed_norm))
print('character recognition rate     %0.3f' % (character_recognition_rate))
print('word recognition rate          %0.3f' % (word_recognition_rate))
%%timeit
res = model_pred.predict(d[0]['image_input'][1,None], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = model_pred.predict(d[0]['image_input'][:16], batch_size=16)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
57.7 ms ± 178 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
Example plots
###Code
g = gen_val.generate()
d = next(g)
res = model_pred.predict(d[0]['image_input'])
mean_ed = 0
mean_ed_norm = 0
font = {'family': 'monospace',
'color': 'black',
'weight': 'normal',
'size': 12,
}
plot_name = 'crnn_sythtext'
#for i in range(len(res)):
for i in range(10):
# best path, real ocr applications use beam search with dictionary and language model
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
gt_str = d[0]['source_str'][i]
res_str = decode(chars)
ed = editdistance.eval(gt_str, res_str)
#ed = levenshtein(gt_str, res_str)
ed_norm = ed / len(gt_str)
mean_ed += ed
mean_ed_norm += ed_norm
# display image
img = d[0]['image_input'][i][:,:,0].T
plt.figure(figsize=[10,1.03])
plt.imshow(img, cmap='gray', interpolation=None)
ax = plt.gca()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.text(0, 45, '%s' % (''.join(chars)), fontdict=font)
plt.text(0, 60, 'GT: %-24s RT: %-24s %0.2f' % (gt_str, res_str, ed_norm), fontdict=font)
#file_name = 'plots/%s_recogniton_%03d.pgf' % (plot_name, i)
file_name = 'plots/%s_recogniton_%03d.png' % (plot_name, i)
#plt.savefig(file_name, bbox_inches='tight', dpi=300)
#print(file_name)
plt.show()
#print('%-20s %-20s %s %0.2f' % (gt_str, res_str, ''.join(chars), ed_norm))
mean_ed /= len(res)
mean_ed_norm /= len(res)
print('\nmean editdistance: %0.3f\nmean normalized editdistance: %0.3f' % (mean_ed, mean_ed_norm))
from utils.model import calc_memory_usage, count_parameters
from crnn_utils import alphabet87 as alphabet
input_width = 256
input_height = 32
model_names = ['lstm', 'gru', 'cnn']
models = [
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, gru=True),
CRNN((input_width, input_height, 1), len(alphabet), prediction_only=True, cnn=True),
]
for n, m in zip(model_names, models):
print(n)
calc_memory_usage(m)
count_parameters(m)
print()
%%timeit
res = models[0].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[1].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
%%timeit
res = models[2].predict(d[0]['image_input'][:1], batch_size=1)
for i in range(len(res)):
chars = [alphabet[c] for c in np.argmax(res[i], axis=1)]
res_str = decode(chars)
###Output
3.68 ms ± 24.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
|
pyspark-udacity/N07_data_wrangling-sql.ipynb | ###Markdown
Spark SQL ExamplesRun the code cells below. This is the same code from the previous screencast.
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
from pyspark.sql.types import IntegerType
from pyspark.sql.functions import desc
from pyspark.sql.functions import asc
from pyspark.sql.functions import sum as Fsum
import datetime
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
spark = SparkSession \
.builder \
.appName("Data wrangling with Spark SQL") \
.getOrCreate()
path = "data/sparkify_log_small.json"
user_log = spark.read.json(path)
user_log.take(1)
user_log.printSchema()
###Output
root
|-- artist: string (nullable = true)
|-- auth: string (nullable = true)
|-- firstName: string (nullable = true)
|-- gender: string (nullable = true)
|-- itemInSession: long (nullable = true)
|-- lastName: string (nullable = true)
|-- length: double (nullable = true)
|-- level: string (nullable = true)
|-- location: string (nullable = true)
|-- method: string (nullable = true)
|-- page: string (nullable = true)
|-- registration: long (nullable = true)
|-- sessionId: long (nullable = true)
|-- song: string (nullable = true)
|-- status: long (nullable = true)
|-- ts: long (nullable = true)
|-- userAgent: string (nullable = true)
|-- userId: string (nullable = true)
###Markdown
Create a View And Run QueriesThe code below creates a temporary view against which you can run SQL queries.
###Code
user_log.createOrReplaceTempView("user_log_table")
spark.sql("SELECT * FROM user_log_table LIMIT 2").show()
spark.sql('''
SELECT *
FROM user_log_table
LIMIT 2
'''
).show()
spark.sql('''
SELECT COUNT(*)
FROM user_log_table
'''
).show()
spark.sql('''
SELECT userID, firstname, page, song
FROM user_log_table
WHERE userID == '1046'
'''
).collect()
spark.sql('''
SELECT DISTINCT page
FROM user_log_table
ORDER BY page ASC
'''
).show()
###Output
+----------------+
| page|
+----------------+
| About|
| Downgrade|
| Error|
| Help|
| Home|
| Login|
| Logout|
| NextSong|
| Save Settings|
| Settings|
|Submit Downgrade|
| Submit Upgrade|
| Upgrade|
+----------------+
###Markdown
User Defined Functions
###Code
spark.udf.register("get_hour", lambda x: int(datetime.datetime.fromtimestamp(x / 1000.0).hour))
spark.sql('''
SELECT *,
get_hour(ts) AS hour
FROM user_log_table
LIMIT 1
'''
).collect()
songs_in_hour = spark.sql('''
SELECT
get_hour(ts) AS hour,
COUNT(*) as plays_per_hour
FROM user_log_table
WHERE page = "NextSong"
GROUP BY hour
ORDER BY cast(hour as int) ASC
'''
)
songs_in_hour.show(24)
###Output
+----+--------------+
|hour|plays_per_hour|
+----+--------------+
| 0| 456|
| 1| 454|
| 2| 382|
| 3| 302|
| 4| 352|
| 5| 276|
| 6| 348|
| 7| 358|
| 8| 375|
| 9| 249|
| 10| 216|
| 11| 228|
| 12| 251|
| 13| 339|
| 14| 462|
| 15| 479|
| 16| 484|
| 17| 430|
| 18| 362|
| 19| 295|
| 20| 257|
| 21| 248|
| 22| 369|
| 23| 375|
+----+--------------+
###Markdown
Converting Results to Pandas
###Code
songs_in_hour_pd = songs_in_hour.toPandas()
print(songs_in_hour_pd)
###Output
hour plays_per_hour
0 0 456
1 1 454
2 2 382
3 3 302
4 4 352
5 5 276
6 6 348
7 7 358
8 8 375
9 9 249
10 10 216
11 11 228
12 12 251
13 13 339
14 14 462
15 15 479
16 16 484
17 17 430
18 18 362
19 19 295
20 20 257
21 21 248
22 22 369
23 23 375
|
ibl-intro-model-fitting-notebook.ipynb | ###Markdown
Tutorial on computational modeling and statistical model fitting part of the *IBL Computational Neuroscience Course* organized by the [International Brain Laboratory](https://www.internationalbrainlab.com/) (April 2020). **Lecturer:** [Luigi Acerbi](http://luigiacerbi.com/).**Instructions:** - To run the tutorial, you will need a standard scientific Python 3.x installation with Jupyter notebook (such as [Anaconda](https://www.anaconda.com/distribution/)). - You will also need the `CMA-ES` optimization algorithm (see [here](https://github.com/CMA-ES/pycma)). You can install CMA-ES from the command line with `pip install cma`.- For any question, please email the course instructor at [email protected].**Initial setup and loading the data:**
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import scipy as sp
from scipy.stats import norm
import cma
###Output
_____no_output_____
###Markdown
During this tutorial, we are going to use data from the [International Brain Laboratory](https://www.internationalbrainlab.com/) publicly released behavioral mouse dataset, from exemplar mouse `KS014`. See [this preprint](https://www.biorxiv.org/content/10.1101/2020.01.17.909838v2) for more information about the task and datasets. These data can also be inspected via the IBL DataJoint public interface [here](https://data.internationalbrainlab.org/mouse/18a54f60-534b-4ed5-8bda-b434079b8ab8).For convenience, the data of all behavioral sessions from examplar mouse `KS014` have been already downloaded in the `data` folder and slightly preprocessed into two `.csv` files, one for the training sessions (`KS014_train.csv`) and one with the *biased* sessions (`KS014_biased.csv`). We begin our tutorial by examining the training sessions.
###Code
df = pd.read_csv('./data/KS014_train.csv') # Load .csv file into a pandas DataFrame
df['signed_contrast'] = df['contrast']*df['position'] # We define a new column for "signed contrasts"
df.drop(columns='stim_probability_left', inplace=True) # Stimulus probability has no meaning for training sessions
print('Total # of trials: ' + str(len(df['trial_num'])))
print('Sessions: ' + str(np.unique(df['session_num'])))
df.head()
###Output
Total # of trials: 10310
Sessions: [ 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15]
###Markdown
**Inspecting the data:**The first thing to do with any dataset is to get familiar with it by running simple visualizations. Just plot stuff! For example, as a starter we plot data from individual sessions using a *scatterplot* format (perhaps not the best). What can we see from here?
###Code
def scatterplot_psychometric_data(df,session_num=None,ax=None):
"""Plot psychometric data (optionally, of a chosen training session) as a scatter plot."""
if session_num == None:
trial_mask = np.ones(len(df['session_num']), dtype=bool) # Select all trials
else:
trial_mask = df['session_num'] == session_num # Indexes of trials of the chosen session
Ntrials = np.sum(trial_mask) # Number of chosen trials
# Count "left" and "right" responses for each signed contrast level
left_resp = df[(df['response_choice'] == -1) & trial_mask].groupby(['signed_contrast']).count()['trial_num']
right_resp = df[(df['response_choice'] == 1) & trial_mask].groupby(['signed_contrast']).count()['trial_num']
if ax == None:
ax=fig.add_axes([0,0,1,1])
ax.scatter(left_resp.index,np.zeros(len(left_resp.index)), s=left_resp*10);
ax.scatter(right_resp.index,np.ones(len(right_resp.index)), s=right_resp*10);
ax.set_xlabel('Signed contrast (%)')
ax.set_ylabel('Rightward response')
if session_num == None:
ax.set_title('Psychometric data (# trials = ' + str(Ntrials) + ')')
else:
ax.set_title('Psychometric data (session ' + str(session_num) + ', # trials = ' + str(Ntrials) + ')')
return ax
# Plot 2nd session
fig = plt.figure(figsize=(9,4))
scatterplot_psychometric_data(df,2)
plt.show()
# Plot 15th session (last training session)
fig = plt.figure(figsize=(9,4))
scatterplot_psychometric_data(df,15)
plt.show()
###Output
_____no_output_____
###Markdown
We plot the same data again, this time with a different type of plot which may be more informative.
###Code
def plot_psychometric_data(df,session_num=None,ax=None):
"""Plot psychometric data (optionally, of a chosen training session) as a scatter plot."""
if session_num == None:
trial_mask = np.ones(len(df['session_num']), dtype=bool) # Select all trials
else:
trial_mask = df['session_num'] == session_num # Indexes of trials of the chosen session
Ntrials = np.sum(trial_mask) # Number of chosen trials
# Count "left" and "right" responses for each signed contrast level
left_resp = df[(df['response_choice'] == -1) & trial_mask].groupby(['signed_contrast']).count()['trial_num']
right_resp = df[(df['response_choice'] == 1) & trial_mask].groupby(['signed_contrast']).count()['trial_num']
frac_resp = right_resp / (left_resp + right_resp)
err_bar = np.sqrt(frac_resp*(1-frac_resp)/(left_resp + right_resp)) # Why this formula for error bars?
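    # (This is the standard error of a binomial proportion, sqrt(p*(1-p)/n), with p the observed fraction of rightward responses and n the number of trials at each contrast level)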
if ax == None:
ax=fig.add_axes([0,0,1,1])
ax.errorbar(x=left_resp.index,y=frac_resp,yerr=err_bar,label='data');
ax.set_xlabel('Signed contrast (%)')
ax.set_ylabel('Rightward response')
if session_num == None:
ax.set_title('Psychometric data (# trials = ' + str(Ntrials) + ')')
else:
ax.set_title('Psychometric data (session ' + str(session_num) + ', # trials = ' + str(Ntrials) + ')')
plt.xlim((-105,105))
plt.ylim((0,1))
return ax
fig = plt.figure(figsize=(9,4))
plot_psychometric_data(df,2)
plt.show()
fig = plt.figure(figsize=(9,4))
plot_psychometric_data(df,15)
plt.show()
###Output
_____no_output_____
###Markdown
**The psychometric function model:**We define now the `basic` psychometric function (descriptive) model and a plotting function.
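In the form implemented below, the probability of a rightward response to a signed contrast $s$ is $p_\text{right}(s) = \lambda \lambda_\text{bias} + (1-\lambda)\,\Phi\!\left(\frac{s-\mu}{\sigma}\right)$, where $\Phi$ is the standard normal CDF, $\mu$ is the bias, $\sigma$ the slope/noise parameter, $\lambda$ the lapse rate and $\lambda_\text{bias}$ the lapse bias (with symmetric lapses, $\lambda_\text{bias}=0.5$, when only three parameters are given).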
###Code
def psychofun(theta,stim):
"""Psychometric function based on normal CDF and lapses"""
mu = theta[0] # bias
sigma = theta[1] # slope/noise
lapse = theta[2] # lapse rate
if len(theta) == 4: # lapse bias
lapse_bias = theta[3];
else:
lapse_bias = 0.5 # if theta has only three elements, assume symmetric lapses
p_right = norm.cdf(stim,loc=mu,scale=sigma) # Probability of responding "rightwards", without lapses
p_right = lapse*lapse_bias + (1-lapse)*p_right # Adding lapses
return p_right
def psychofun_plot(theta,ax):
"""Plot psychometric function"""
stim = np.linspace(-100,100,201) # Create stimulus grid for plotting
p_right = psychofun(theta,stim) # Compute psychometric function values
ax.plot(stim,p_right,label='model')
ax.legend()
return
###Output
_____no_output_____
###Markdown
Now try plotting the psychometric function for different values of the parameters (use both the symmetric and asymmetric psychometric function). Try and match the data from one of the sessions.
###Code
theta0 = (0,50,0.2,0.5) # Arbitrary parameter values - try different ones
session_num = 15
fig = plt.figure(figsize=(9,4))
ax = plot_psychometric_data(df,session_num)
psychofun_plot(theta0,ax)
plt.show()
###Output
_____no_output_____
###Markdown
We now define the log likelihood function of the psychometric function model for a given dataset and model parameter vector, $\log p(\text{data}|\mathbf{\theta})$.
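Assuming conditionally independent trials, this is $\log p(\text{data}|\mathbf{\theta}) = \sum_{t:\, r_t = \text{right}} \log p_\text{right}(s_t|\mathbf{\theta}) + \sum_{t:\, r_t = \text{left}} \log \left[1 - p_\text{right}(s_t|\mathbf{\theta})\right]$, which is exactly the sum computed in the code below.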
###Code
def psychofun_loglike(theta,df):
"""Log-likelihood for psychometric function model"""
s_vec = df['signed_contrast'] # Stimulus values
r_vec = df['response_choice'] # Responses
p_right = psychofun(theta,s_vec)
# Compute summed log likelihood for all rightwards and leftwards responses
loglike = np.sum(np.log(p_right[r_vec == 1])) + np.sum(np.log(1 - p_right[r_vec == -1]))
return loglike
###Output
_____no_output_____
###Markdown
Now try to get the best fit for this session, as we did before, but by finding better and better values of the log-likelihood.
###Code
session_num = 14 # Let's use a different session
theta0 = (0,25,0.1,0.5)
ll = psychofun_loglike(theta0,df[df['session_num'] == session_num])
print('Log-likelihood value: ' + "{:.3f}".format(ll))
fig = plt.figure(figsize=(9,4))
ax = plot_psychometric_data(df,session_num)
psychofun_plot(theta0,ax)
plt.show()
###Output
Log-likelihood value: -324.400
###Markdown
**Maximum-likelihood estimation:**In this section, we are going to estimate model parameters (aka fit our models) by maximizing the log-likelihood. By convention in optimization, we are going to *minimize* the negative log-likelihood.Before running the optimization, we define the *hard* lower and upper bounds for the parameters. If the optimization algorithm supports constrained (bound) optimization, it will never go outside the hard bounds. We also define informally the *plausible* bounds as the range of parameters that we would expect to see. We are going to use the plausible range to initialize the problem later.
###Code
# Define hard parameter bounds
lb = np.array([-100,0.5,0,0])
ub = np.array([100,200,1,1])
bounds = [lb,ub]
# Define plausible range
plb = np.array([-25,5,0.05,0.2])
pub = np.array([25,25,0.40,0.8])
# Pick session data
session_num = 14
df_session = df[df['session_num'] == session_num]
# Define objective function: negative log-likelihood
opt_fun = lambda theta_: -psychofun_loglike(theta_,df_session)
###Output
_____no_output_____
###Markdown
We are now going to run a *black-box* optimization algorithm called CMA-ES. For now we are going to run the optimization only once, but in general you should *always* run the optimization from multiple distinct starting points.
###Code
# Generate random starting point for the optimization inside the plausible box
theta0 = np.random.uniform(low=plb,high=pub)
# Initialize CMA-ES algorithm
opts = cma.CMAOptions()
opts.set("bounds",bounds)
opts.set("tolfun",1e-5)
# Run optimization
res = cma.fmin(opt_fun, theta0, 0.5, opts)
print('')
print('Returned parameter vector: ' + str(res[0]))
print('Negative log-likelihood at solution: ' + str(res[1]))
fig = plt.figure(figsize=(9,4))
ax = plot_psychometric_data(df_session,session_num)
psychofun_plot(res[0],ax)
plt.show()
###Output
(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 4 (seed=495280, Mon Apr 20 19:18:25 2020)
Iterat #Fevals function value axis ratio sigma min&max std t[m:s]
1 8 3.583701362763698e+02 1.0e+00 5.06e-01 5e-01 6e-01 0:00.0
2 16 3.710842472145192e+02 1.6e+00 4.35e-01 4e-01 5e-01 0:00.0
3 24 3.857958858389333e+02 1.8e+00 4.64e-01 4e-01 5e-01 0:00.0
100 800 2.986616051848593e+02 1.0e+02 2.25e-01 2e-03 1e-01 0:00.9
149 1192 2.986582720706249e+02 6.1e+01 1.21e-03 2e-06 8e-05 0:01.4
termination on tolfun=1e-05 (Mon Apr 20 19:18:27 2020)
final/bestever f-value = 2.986583e+02 2.986583e+02
incumbent solution: [-1.975946309619393, 9.24258412819946, 0.1399429395636772, 0.6275534769527764]
std deviation: [4.985100456757879e-05, 8.061647904356876e-05, 1.665751599590455e-06, 4.865868263540596e-06]
Returned parameter vector: [-1.97594631 9.24258413 0.13994294 0.62755348]
Negative log-likelihood at solution: 298.65827206991577
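###Markdown
As mentioned above, in practice one should restart the optimization from several distinct starting points. A minimal sketch of such a multi-start loop (reusing `opt_fun`, the plausible box `plb`/`pub` and the CMA-ES options `opts` defined above; `Nstarts` and `best_res` are just illustrative names) keeps the run with the lowest negative log-likelihood:
###Code
# Multi-start optimization (sketch): restart CMA-ES from random points inside the plausible box
Nstarts = 5                                       # number of restarts; more is safer but slower
best_res = None
for k in range(Nstarts):
    theta0 = np.random.uniform(low=plb,high=pub)  # random starting point
    res_k = cma.fmin(opt_fun, theta0, 0.5, opts)  # same call as in the cell above
    if (best_res is None) or (res_k[1] < best_res[1]):
        best_res = res_k                          # keep the best solution found so far
print('Best negative log-likelihood across restarts: ' + str(best_res[1]))
print('Best parameter vector: ' + str(best_res[0]))
###Output
_____no_output_____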
###Markdown
**Model comparison:**We now consider a slightly more advanced model that includes time dependency: the response in the current trial is influenced by the response in the previous trial. We adopt a simple model, `repeatlast`, in which the observer has a fixed chance of repeating the previous choice.
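In this `repeatlast` model, from the second trial onwards the predicted probability of a rightward response is $p_\text{right}^{(t)} = p_\text{last}\,\mathbb{1}[r_{t-1} = \text{right}] + (1 - p_\text{last})\, p_\text{psy}(s_t)$, where $p_\text{psy}$ is the psychometric function defined earlier; this is the mixture implemented in the code below.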
###Code
def psychofun_repeatlast_loglike(theta,df):
"""Log-likelihood for last-choice dependent psychometric function model"""
s_vec = np.array(df['signed_contrast']) # Stimulus values
r_vec = np.array(df['response_choice']) # Responses
p_last = theta[0] # Probability of responding as last choice
theta_psy = theta[1:] # Standard psychometric function parameters
p_right = psychofun(theta_psy,s_vec)
# Starting from the 2nd trial, probability of responding equal to the last trial
p_right[1:] = p_last*(r_vec[0:-1] == 1) + (1-p_last)*p_right[1:]
# Compute summed log likelihood for all rightwards and leftwards responses
loglike = np.sum(np.log(p_right[r_vec == 1])) + np.sum(np.log(1 - p_right[r_vec == -1]))
return loglike
lb = np.array([0,-100,1,0,0])
ub = np.array([1,100,100,1,1])
bounds = [lb,ub]
plb = np.array([0.05,-25,5,0.05,0.2])
pub = np.array([0.2,25,25,0.45,0.8])
df_session = df[df['session_num'] == session_num]
# df_session = df[(df['session_num'] == session_num) & (df['trial_num'] > 300)]
opt_fun = lambda theta_: -psychofun_repeatlast_loglike(theta_,df_session)
theta0 = np.random.uniform(low=plb,high=pub)
opts = cma.CMAOptions()
opts.set("bounds",bounds)
opts.set("tolfun",1e-5)
res_repeatlast = cma.fmin(opt_fun, theta0, 0.5, opts)
print('')
print('Returned parameter vector: ' + str(res_repeatlast[0]))
print('Negative log-likelihood at solution: ' + str(res_repeatlast[1]))
fig = plt.figure(figsize=(9,4))
ax = plot_psychometric_data(df_session,session_num)
#psychofun_plot(res[0],ax)
plt.show()
###Output
(4_w,8)-aCMA-ES (mu_w=2.6,w_1=52%) in dimension 5 (seed=459872, Mon Apr 20 19:18:28 2020)
Iterat #Fevals function value axis ratio sigma min&max std t[m:s]
1 8 3.405287529436299e+02 1.0e+00 4.93e-01 5e-01 5e-01 0:00.0
2 16 3.279753519409780e+02 1.2e+00 4.84e-01 4e-01 5e-01 0:00.0
3 24 3.316640923133305e+02 1.3e+00 5.08e-01 4e-01 6e-01 0:00.0
100 800 2.812924563992388e+02 5.9e+01 4.96e-01 1e-02 7e-01 0:00.5
165 1320 2.804863985663652e+02 5.5e+01 6.63e-04 3e-06 1e-04 0:00.8
termination on tolfun=1e-05 (Mon Apr 20 19:18:29 2020)
final/bestever f-value = 2.804864e+02 2.804864e+02
incumbent solution: [0.15168427913123783, -2.109312926136682, 7.180818765150742, 0.056903528067765635, 0.6222986377090183]
std deviation: [2.8918985924487735e-06, 0.0001109077089983831, 0.00012538462848712305, 2.547479005253977e-06, 1.9214364669849703e-05]
Returned parameter vector: [ 0.15168515 -2.10938185 7.18092125 0.0569031 0.62230309]
Negative log-likelihood at solution: 280.48639855305305
###Markdown
We now calculate a few simple model comparison metrics, such as AIC and BIC, for the `basic` and `repeatlast` models.
###Code
Nmodels = 2
nll = np.zeros(Nmodels)
nparams = np.zeros(Nmodels)
results = [res,res_repeatlast] # Store all optimization output in a vector
for i in range(0,len(results)):
nll[i] = results[i][1] # The optimization algorithm received the *negative* log-likelihood
nparams[i] = len(results[i][0])
ntrials = len(df['signed_contrast'])
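# note: ntrials counts every trial in the full dataframe df, not just the session (df_session) used for the fits above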
aic = 2*nll + 2*nparams
bic = 2*nll + nparams*np.log(ntrials)
print('Model comparison results (for all metrics, lower is better)\n')
print('Negative log-likelihoods: ' + str(nll))
print('AIC: ' + str(aic))
print('BIC: ' + str(bic))
###Output
Model comparison results (for all metrics, lower is better)
Negative log-likelihoods: [298.65827207 280.48639855]
AIC: [605.31654414 570.97279711]
BIC: [634.28002245 607.17714499]
###Markdown
**[Advanced] Optional model:** We next consider a more advanced model with explicit time dependency (the trials are not all identical), also known as *non-stationarity*. Note that this function is not coded very efficiently and runs quite slowly due to the `for` loop; it could be improved with vectorization (a sketch is shown after the output below).
###Code
def psychofun_timevarying_loglike(theta,df):
"""Log-likelihood for time-varying psychometric function model"""
s_vec = np.array(df['signed_contrast']) # Stimulus values
r_vec = np.array(df['response_choice']) # Responses
Ntrials = len(s_vec)
mu_vec = np.linspace(theta[0],theta[4],Ntrials)
sigma_vec = np.linspace(theta[1],theta[5],Ntrials)
lapse_vec = np.linspace(theta[2],theta[6],Ntrials)
lapsebias_vec = np.linspace(theta[3],theta[7],Ntrials)
p_right = np.zeros(Ntrials)
for t in range(0,Ntrials):
p_right[t] = psychofun([mu_vec[t],sigma_vec[t],lapse_vec[t],lapsebias_vec[t]],s_vec[t])
# Compute summed log likelihood for all rightwards and leftwards responses
loglike = np.sum(np.log(p_right[r_vec == 1])) + np.sum(np.log(1 - p_right[r_vec == -1]))
return loglike
theta0 = (0,20,0.1,0.5,1,20,0.1,0.5)
ll = psychofun_timevarying_loglike(theta0,df[df['session_num'] == session_num])
lb = np.array([-100,1,0,0,-100,1,0,0])
ub = np.array([100,100,1,1,100,100,1,1])
bounds = [lb,ub]
plb = np.array([-25,5,0.05,0.2,-25,5,0.05,0.2])
pub = np.array([25,25,0.45,0.8,25,25,0.45,0.8])
session_num = 14
df_session = df[df['session_num'] == session_num]
# df_session = df[(df['session_num'] == session_num) & (df['trial_num'] > 300)]
opt_fun = lambda theta_: -psychofun_timevarying_loglike(theta_,df_session)
theta0 = np.random.uniform(low=plb,high=pub)
opts = cma.CMAOptions()
opts.set("bounds",bounds)
opts.set("tolfun",1e-5)
res_time = cma.fmin(opt_fun, theta0, 0.5, opts)
print('')
print('Returned parameter vector: ' + str(res_time[0]))
print('Negative log-likelihood at solution: ' + str(res_time[1]))
fig = plt.figure(figsize=(9,4))
ax = plot_psychometric_data(df_session,session_num)
#psychofun_plot(res[0],ax)
plt.show()
###Output
(5_w,10)-aCMA-ES (mu_w=3.2,w_1=45%) in dimension 8 (seed=554213, Mon Apr 20 19:18:29 2020)
Iterat #Fevals function value axis ratio sigma min&max std t[m:s]
1 10 3.349579083043661e+02 1.0e+00 5.17e-01 5e-01 5e-01 0:00.6
2 20 3.259085424092377e+02 1.4e+00 5.33e-01 5e-01 6e-01 0:01.2
3 30 3.322308738382135e+02 1.4e+00 5.12e-01 5e-01 6e-01 0:01.8
9 90 3.252502325255594e+02 2.1e+00 3.55e-01 3e-01 4e-01 0:05.0
14 140 3.287852844909411e+02 2.5e+00 2.50e-01 2e-01 4e-01 0:09.1
20 200 3.209822813055404e+02 3.0e+00 2.48e-01 2e-01 3e-01 0:14.4
30 300 3.188113981542900e+02 4.0e+00 1.58e-01 7e-02 2e-01 0:20.8
43 430 3.178582868197084e+02 5.0e+00 1.27e-01 4e-02 1e-01 0:28.5
55 550 3.159915131023549e+02 8.6e+00 1.98e-01 5e-02 3e-01 0:37.1
67 670 3.087858583174051e+02 1.6e+01 2.77e-01 4e-02 6e-01 0:47.0
82 820 3.007650052775848e+02 3.4e+01 7.76e-01 9e-02 2e+00 0:57.1
100 1000 2.937737319752238e+02 4.6e+01 3.56e-01 2e-02 9e-01 1:07.6
122 1220 2.926148940697984e+02 6.4e+01 1.34e-01 6e-03 2e-01 1:19.8
145 1450 2.924533006299938e+02 7.2e+01 1.74e-01 5e-03 2e-01 1:33.1
170 1700 2.924010314977727e+02 7.3e+01 2.41e-02 5e-04 2e-02 1:47.4
196 1960 2.924008862284228e+02 8.7e+01 2.76e-03 4e-05 2e-03 2:03.1
200 2000 2.924008855819802e+02 9.2e+01 1.93e-03 2e-05 2e-03 2:05.8
221 2210 2.924008843379380e+02 8.6e+01 3.68e-04 3e-06 2e-04 2:22.0
termination on tolfun=1e-05 (Mon Apr 20 19:20:52 2020)
final/bestever f-value = 2.924009e+02 2.924009e+02
incumbent solution: [-2.2974790450036595, 2.5889184565217986, 0.17287813876580832, 0.4157862312725872, -1.2194922287511163, 16.800218459734477, 0.08567043430743715, 0.9999999999965106]
std deviation: [0.00015187030975833562, 0.00012989299594279695, 4.061485337057617e-06, 1.101989548716165e-05, 0.0001930051308047627, 0.000207788061494618, 3.4354748219364335e-06, 2.5682657988880178e-05]
Returned parameter vector: [-2.29747905 2.58891846 0.17287814 0.41578623 -1.21949223 16.80021846
0.08567043 1. ]
Negative log-likelihood at solution: 292.40088433713703
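###Markdown
A sketch of the vectorization mentioned above (an added illustration): it assumes that `psychofun` is implemented with elementwise NumPy operations, so that it broadcasts over arrays of parameters as well as stimuli; if that assumption holds, the per-trial loop collapses into a single call.
###Code
def psychofun_timevarying_loglike_vec(theta,df):
    """Vectorized log-likelihood sketch; assumes psychofun broadcasts elementwise over its inputs."""
    s_vec = np.array(df['signed_contrast'])    # Stimulus values
    r_vec = np.array(df['response_choice'])    # Responses
    Ntrials = len(s_vec)
    mu_vec = np.linspace(theta[0],theta[4],Ntrials)
    sigma_vec = np.linspace(theta[1],theta[5],Ntrials)
    lapse_vec = np.linspace(theta[2],theta[6],Ntrials)
    lapsebias_vec = np.linspace(theta[3],theta[7],Ntrials)
    # Single vectorized evaluation instead of a Python loop over trials
    p_right = psychofun([mu_vec,sigma_vec,lapse_vec,lapsebias_vec],s_vec)
    loglike = np.sum(np.log(p_right[r_vec == 1])) + np.sum(np.log(1 - p_right[r_vec == -1]))
    return loglike
###Output
_____no_output_____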
|
datasetGrafigi.ipynb | ###Markdown
The required libraries are imported
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
fileName = "datasetDuzenlenmisHali_5000.csv"
df=pd.read_csv(fileName, sep=';')
df.head(10)
###Output
_____no_output_____
###Markdown
Number of rows and columns in the dataset
###Code
df.shape
###Output
_____no_output_____
###Markdown
Overview of the data
###Code
for column in ['cinsiyet','egitimSeviyesi', 'meslekGrubu','yasadigiOrtam', 'arabaVarMi','mulkVarMi',
'medeniHal']:
print(df[column].value_counts())
###Output
Kadin 3102
Erkek 1898
Name: cinsiyet, dtype: int64
Ortaokul 3461
Lise 1313
LiseTerk 185
Ilkokul 41
Name: egitimSeviyesi, dtype: int64
TarimIscisi 1210
TecrubeliCalisan 772
SatisPersoneli 735
Yonetici 592
Sofor 439
UzmanPersonel 274
SaglikPersoneli 220
Muhasebeci 196
Asci 165
GuvenlikPersoneli 116
TemizlikPersoneli 98
OzelHizmetPersoneli 63
Sekreter 26
Isci 24
Emlakci 24
Garson 23
InsanKaynaklariPersoneli 18
BTPersoneli 5
Name: meslekGrubu, dtype: int64
KendiEvi 4444
Kira 290
AilesiyleYasiyor 266
Name: yasadigiOrtam, dtype: int64
Hayir 2738
Evet 2262
Name: arabaVarMi, dtype: int64
Evet 3656
Hayir 1344
Name: mulkVarMi, dtype: int64
Evli 3938
Bekar 1062
Name: medeniHal, dtype: int64
###Markdown
Overview charts of the data
###Code
df_cat = df[['cinsiyet','egitimSeviyesi', 'meslekGrubu','yasadigiOrtam', 'arabaVarMi','mulkVarMi',
'medeniHal']]
for i in df_cat.columns:
cat_num = df_cat[i].value_counts()
print("%s Tablosu: Veri Sayısı = %d" % (i, len(cat_num)))
chart = sns.barplot(x=cat_num, y=cat_num.index, palette="Blues_d", orient='h')
plt.show()
###Output
cinsiyet Tablosu: Veri Sayısı = 2
###Markdown
Proportion of medeniHal (marital status) within each categorical column
###Code
f, axes = plt.subplots(1,4, figsize = (20,7))
cinsiyet = df_cat.groupby(['cinsiyet','medeniHal']).cinsiyet.count().unstack()
p1 = cinsiyet.plot(kind = 'bar', stacked = True,
title = 'Cinsiyet: Bekar Evli Oranı',
color = ['lightgreen','grey'], alpha = .70, ax = axes[0])
p1.set_xlabel('')
p1.legend(['Bekar','Evli'])
egitimSeviyesi = df_cat.groupby(['egitimSeviyesi','medeniHal']).egitimSeviyesi.count().unstack()
p2 = egitimSeviyesi.plot(kind = 'bar', stacked = True,
title = 'EgitimSeviyesi: Bekar Evli Oranı',
color = ['lightgreen','grey'], alpha = .70, ax = axes[1])
p2.set_xlabel('')
p2.legend(['Bekar','Evli'])
meslekGrubu = df_cat.groupby(['meslekGrubu','medeniHal']).meslekGrubu.count().unstack()
p3 = meslekGrubu.plot(kind = 'bar', stacked = True,
title = 'MeslekGrubu: Bekar Evli Oranı',
color = ['lightgreen','grey'], alpha = .70, ax = axes[2])
p3.set_xlabel('')
p3.legend(['Bekar','Evli'])
yasadigiOrtam = df_cat.groupby(['yasadigiOrtam','medeniHal']).yasadigiOrtam.count().unstack()
p4 = yasadigiOrtam.plot(kind = 'bar', stacked = True,
title = 'YasadigiOrtam : Bekar Evli Oranı',
color = ['lightgreen','grey'], alpha = .70, ax = axes[3])
p4.set_xlabel('')
p4.legend(['Bekar','Evli'])
plt.show()
###Output
_____no_output_____ |
astr_119_final_project.ipynb | ###Markdown
Compute a Monte Carlo integral for any specified function.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import math
###Output
_____no_output_____
###Markdown
Riofa-Gean Fernandez ID: 1396498
###Code
N = 500 # Number of points
a = 0 #x-axis min to replace
b = 1.75 #x-axis max to replace
def f(x):
return np.cos(x) #function to replace
x = np.arange(a,b,0.01) #(start, stop, step interval)
y = f(x) #function
d = max(y) #y-axis maximum
c = min(y) #y-axis minimum
#compute the number of random points
x_rand = a + (b - a)*np.random.random(N)
y_rand = np.random.random(N)*d
ind_below = np.where(y_rand < f(x_rand)) #points below the function
ind_above = np.where(y_rand >= f(x_rand)) #points above the function
#plot the function
pts_below = plt.scatter(x_rand[ind_below], y_rand[ind_below], label = "Points below function", color = "green")
pts_above = plt.scatter(x_rand[ind_above], y_rand[ind_above], label = "Points above function", color = "blue")
plt.plot(x, y, label = "Function", color = "red")
plt.legend(loc = 'lower center', ncol = 2)
int_answer_1 = len(ind_below[0])/(N)*((b-a)*d) #first integral estimate (By R. Fernandez and S. Yuen); box area is (b-a)*d since y_rand is drawn from [0, d]
#print the answer
print ("Number of points above the function:", len(ind_above[0]))
print ("Number of points below the function:", len(ind_below[0]))
print ("Fraction of points below the function:", int_answer_1) #By S. Yuen
###Output
_____no_output_____
###Markdown
Sierra Yuen ID: 1495259
###Code
N = 10000 #number of points
a2 = 0 #x-axis minimum
b2 = 1.75 #x-axis maximum
def f(x):
return np.cos(x) #function to replace
x = np.arange(a2,b2,0.01) #(start,stop,step interval)
y = f(x) #function
d2 = max(y) #y-axis maximum
c2 = min(y) #y-axis minimum
#compute the number of random points
x_rand = a2 + (b2 - a2)*np.random.random(N)
y_rand = np.random.random(N)*d2
ind_below = np.where(y_rand < f(x_rand)) #points below the function
ind_above = np.where(y_rand >= f(x_rand)) #points above the function
#plot the function
pts_below = plt.scatter(x_rand[ind_below], y_rand[ind_below], label = "Dots below function", color = "green")
pts_above = plt.scatter(x_rand[ind_above], y_rand[ind_above], label = "Dots above function", color = "blue")
plt.plot(x, y, label = "Function", color = "red")
plt.legend(loc = 'lower center', ncol = 2)
int_answer_2 = len(ind_below[0])/(N)*((b2-a2)*d2) #second integral estimate (By R. Fernandez and S. Yuen); box area is (b2-a2)*d2 since y_rand is drawn from [0, d2]
#print the answer
print ("Number of points above the function:", len(ind_above[0]))
print ("Number of points below the function:", len(ind_below[0]))
print ("Fraction of points below the function:", int_answer_2)
#specify a tolerance for the integration
tolerance = int_answer_2 - int_answer_1
#print the tolerance
print(tolerance)
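# Added illustration: compare with the exact integral of cos(x) on [a2, b2], which is sin(b2) - sin(a2).
# Note that the hit-or-miss scheme above samples y in [0, d2], so it only measures the area under the
# positive part of f; some discrepancy is expected where f(x) < 0 (here for x > pi/2).
exact = np.sin(b2) - np.sin(a2)
print("Exact integral:", exact)
print("Difference from the Monte Carlo estimate:", abs(int_answer_2 - exact))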
###Output
_____no_output_____ |
SALT2_fit/make_samples.ipynb | ###Markdown
Check how the SNe Ia in DDF compare with those in WFD
###Code
# Imports required by this notebook (assumed; the original setup cell is not included above)
import os
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
import seaborn as sns

fnames_Ia = os.listdir('Ia/results/')
fnames_Ia.remove('master_fitres.fitres')
fnames_Ia.remove('salt3')
fnames_Ia.remove('.ipynb_checkpoints')
salt2_wfd = []
for name in fnames_Ia:
fitres_temp = pd.read_csv('Ia/results/' + name, delim_whitespace=True,
comment='#')
salt2_wfd.append(fitres_temp)
salt2_Ia_wfd = pd.concat(salt2_wfd, ignore_index=True)
salt2_Ia_ddf = pd.read_csv('Ia/results/master_fitres.fitres', comment='#', delim_whitespace=True)
salt2_Ia_ddf['x1 - SIM_x1'] = salt2_Ia_ddf['x1'] - salt2_Ia_ddf['SIM_x1']
salt2_Ia_ddf['c - SIM_c'] = salt2_Ia_ddf['c'] - salt2_Ia_ddf['SIM_c']
salt2_Ia_ddf['x0 - SIM_x0'] = salt2_Ia_ddf['x0'] - salt2_Ia_ddf['SIM_x0']
salt2_Ia_ddf['mB - SIM_mB'] = salt2_Ia_ddf['mB'] - salt2_Ia_ddf['SIM_mB']
salt2_Ia_wfd['x1 - SIM_x1'] = salt2_Ia_wfd['x1'] - salt2_Ia_wfd['SIM_x1']
salt2_Ia_wfd['c - SIM_c'] = salt2_Ia_wfd['c'] - salt2_Ia_wfd['SIM_c']
salt2_Ia_wfd['mB - SIM_mB'] = salt2_Ia_wfd['mB'] - salt2_Ia_wfd['SIM_mB']
salt2_Ia_wfd['x0 - SIM_x0'] = salt2_Ia_wfd['x0'] - salt2_Ia_wfd['SIM_x0']
plt.figure(figsize=(24,10))
ax1 = plt.subplot(2,3,1)
sns.distplot(salt2_Ia_ddf['x1'], label='DDF', ax=ax1)
sns.distplot(salt2_Ia_wfd['x1'], label='WFD', ax=ax1)
plt.legend()
ax2 = plt.subplot(2,3,2)
sns.distplot(salt2_Ia_ddf['c'], label='DDF', ax=ax2)
sns.distplot(salt2_Ia_wfd['c'], label='WFD', ax=ax2)
plt.legend()
ax3 = plt.subplot(2,3,3)
sns.distplot(salt2_Ia_ddf['mB'], label='DDF', ax=ax3)
sns.distplot(salt2_Ia_wfd['mB'], label='WFD', ax=ax3)
plt.legend()
ax4 = plt.subplot(2,3,4)
sns.distplot(salt2_Ia_ddf['x1 - SIM_x1'], label='DDF', ax=ax4)
sns.distplot(salt2_Ia_wfd['x1 - SIM_x1'], label='WFD', ax=ax4)
plt.legend()
ax5 = plt.subplot(2,3,5)
sns.distplot(salt2_Ia_ddf['c - SIM_c'], label='DDF', ax=ax5)
sns.distplot(salt2_Ia_wfd['c - SIM_c'], label='WFD', ax=ax5)
plt.legend()
ax6 = plt.subplot(2,3,6)
sns.distplot(salt2_Ia_ddf['mB - SIM_mB'], label='DDF', ax=ax6)
sns.distplot(salt2_Ia_wfd['mB - SIM_mB'], label='WFD', ax=ax6)
plt.legend()
#plt.savefig('plots/SALT2_params_DDF_WFD.png')
###Output
_____no_output_____
###Markdown
Create perfect sample WFD
###Code
nobjs = 3000
for j in range(1, 6):
perfect_sample = salt2_Ia_wfd.sample(n=nobjs, replace=False)
perfect_sample['zHD'] = perfect_sample['SIM_ZCMB']
perfect_sample.fillna(value=-99, inplace=True)
perfect_sample.to_csv('WFD' + str(j) + '/perfect' + str(nobjs) + '.csv', index=False)
perfect_sample.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples' + str(j) + '/perfect' + str(nobjs) + '.csv', index=False)
del perfect_sample
###Output
_____no_output_____
###Markdown
Calculate populations - WFD
###Code
types_names = {90: 'Ia', 67: '91bg', 52:'Iax', 42:'II', 62:'Ibc',
95: 'SLSN', 15:'TDE', 64:'KN', 88:'AGN', 92:'RRL', 65:'M-dwarf',
16:'EB',53:'Mira', 6:'MicroL', 991:'MicroLB', 992:'ILOT',
993:'CART', 994:'PISN',995:'MLString'}
SNANA_names = {11: 'Ia', 3:'Ibc', 13: 'Ibc', 2:'II', 12:'II', 14:'II',
41: '91bg', 43:'Iax', 51:'KN', 60:'SLSN', 61:'PISN', 62:'ILOT',
63:'CART', 64:'TDE', 70:'AGN', 80:'RRL', 81:'M-dwarf', 83:'EB',
84:'Mira', 90:'MicroLB', 91:'MicroL', 93:'MicroL'}
groups, freq = np.unique(test_metadata[~ddf_flag]['true_target'].values, return_counts=True)
tot_wfd = sum(~ddf_flag)
print('Type \t\t Total number \t %')
for i in range(len(groups)):
if types_names[groups[i]] in ['M-dwarf', 'MicroLB']:
print(i, ' --- ', types_names[groups[i]], '\t', freq[i], '\t\t', round(100*freq[i]/tot_wfd, 3))
else:
print(i, ' -- ', types_names[groups[i]], '\t\t', freq[i], '\t\t', round(100*freq[i]/tot_wfd, 3))
###Output
Type Total number %
0 -- MicroL 1297 0.037
1 -- TDE 13487 0.39
2 -- EB 96569 2.791
3 -- II 984164 28.444
4 -- Iax 62857 1.817
5 -- Mira 1447 0.042
6 -- Ibc 172900 4.997
7 -- KN 131 0.004
8 --- M-dwarf 93433 2.7
9 -- 91bg 39831 1.151
10 -- AGN 101061 2.921
11 -- Ia 1647191 47.607
12 -- RRL 197013 5.694
13 -- SLSN 35684 1.031
14 --- MicroLB 524 0.015
15 -- ILOT 1671 0.048
16 -- CART 9538 0.276
17 -- PISN 1166 0.034
###Markdown
Populations for a sample with 3000 SNIa
###Code
data_all_wfd2 = pd.read_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples/all_WFD.csv', index_col=False)
###Output
_____no_output_____
###Markdown
Random
###Code
for j in range(1, 6):
d1 = data_all_wfd2.sample(n=3000, replace=False)
d1.to_csv('WFD' + str(j) + '/perfect' + str(nobjs) + '.csv', index=False)
print(d1.iloc[0])
d1.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples' + str(j) + '/random' + str(nobjs) + '.csv', index=False)
del d1
nIa = freq[11]
nIa_sample = 3000
fitres_types, fitres_freq = np.unique(data_all_wfd2['SIM_TYPE_INDEX'].values, return_counts=True)
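# NOTE (added): SNANA_types, used below, is not defined anywhere in this excerpt. The dictionary
# below is an assumed reconstruction obtained by inverting SNANA_names/types_names defined above;
# II and Ibc map to dicts because the loop below distinguishes them by length (3 SNANA codes for II, 2 for Ibc).
SNANA_types = {90: 11, 67: 41, 52: 43, 42: {1: 2, 2: 12, 3: 14}, 62: {1: 3, 2: 13},
               95: 60, 15: 64, 64: 51, 88: 70, 92: 80, 65: 81, 16: 83, 53: 84,
               6: 91, 991: 90, 992: 62, 993: 63, 994: 61}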
mock = []
for i in range(len(groups)):
n_objs = int(nIa_sample * freq[i]/nIa)
print(n_objs, ' --- ', types_names[groups[i]], ' --- ', SNANA_types[groups[i]])
if n_objs > 0:
if isinstance(SNANA_types[groups[i]], int) and SNANA_types[groups[i]] in fitres_types:
print('***', types_names[groups[i]], ' --- ', n_objs)
snana_type = SNANA_types[groups[i]]
flag = data_all_wfd2['SIM_TYPE_INDEX'].values == snana_type
data_partial = data_all_wfd2[flag]
data_partial2 = data_partial.sample(n=n_objs, replace=False)
mock.append(data_partial2)
elif isinstance(SNANA_types[groups[i]], dict) and len(SNANA_types[groups[i]]) == 3:
print('***', types_names[groups[i]], ' --- ', n_objs)
f1 = np.logical_or(data_all_wfd2['SIM_TYPE_INDEX'].values == 2,
data_all_wfd2['SIM_TYPE_INDEX'].values == 12)
f2 = np.logical_or(data_all_wfd2['SIM_TYPE_INDEX'].values == 14, f1)
data_partial = data_all_wfd2[f2]
data_partial2 = data_partial.sample(n=n_objs, replace=False)
mock.append(data_partial2)
elif isinstance(SNANA_types[groups[i]], dict) and len(SNANA_types[groups[i]]) == 2:
print('***', types_names[groups[i]], ' --- ', n_objs)
flag = np.logical_or(data_all_wfd2['SIM_TYPE_INDEX'].values == 3,
data_all_wfd2['SIM_TYPE_INDEX'].values == 13)
data_partial = data_all_wfd2[flag]
data_partial2 = data_partial.sample(n=n_objs, replace=False)
mock.append(data_partial2)
mock2 = pd.concat(mock, ignore_index=True)
mock2.fillna(value=-99, inplace=True)
mock2['zHD'] = mock2['SIM_ZCMB']
def classification_metrics(cont):
"""Classification metrics for a sample of 3k SNIa.
Parameters
----------
cont: float \in [0, 1]
Percentage of contamination.
Returns
-------
accuracy: float
efficiency: float
purity: float
figure of merit (W=1): float
figure of merit (W=3): float
"""
totIa = 3000
ntotal = 5588
acc = (ntotal - (2* totIa * cont))/5588
eff = (totIa - totIa * cont)/3000
f1 = ((totIa - 3000 * cont)/3000) * (1 - cont)
f3 = ((1 - cont) * totIa)/(((1-cont) * totIa) + 3 * ((cont) * totIa))
return acc, eff, 1 - cont, f1, f3
classification_metrics(0.02)
###Output
_____no_output_____
###Markdown
Single contaminant
###Code
c1 = [[72, 'II'], [75, 'Iax'], [75, 'II'], [90, 'Iax'], [90, 'Ibc'], [90, 'II'], [95, 'AGN'], [95, '91bg'],
[95, 'Iax'], [95, 'Ibc'], [95, 'II'], [98, 'AGN'], [98, '91bg'], [98, 'Iax'], [98, 'Ibc'], [98, 'II'],
[99.6, 'TDE'], [99.7, 'CART'], [99, 'AGN'], [99, 'SLSN'], [99, '91bg'], [99, 'Iax'], [99, 'Ibc'], [99, 'II']]
k = 5
for i in range(len(c1)):
fname_salt2 = os.listdir(c1[i][1] + '/results/')
if '.ipynb_checkpoints' in fname_salt2:
fname_salt2.remove('.ipynb_checkpoints')
fname_salt2.remove('salt3')
fname_salt2.remove('master_fitres.fitres')
nobjs = round(0.01* (100 - c1[i][0]) * 3000)
print('nobjs = ', nobjs)
salt2_wfd = []
for name in fname_salt2:
try:
fitres_temp = pd.read_csv(c1[i][1] + '/results/' + name, delim_whitespace=True,
comment='#')
salt2_wfd.append(fitres_temp)
except:
pass
salt2_wfd = pd.concat(salt2_wfd, ignore_index=True)
types, counts = np.unique(salt2_wfd['SIM_TYPE_INDEX'].values, return_counts=True)
print('salt2_wfd.shape = ', salt2_wfd.shape)
print('types = ', types)
print('counts = ', counts)
salt2_sample = salt2_wfd.sample(n=nobjs, replace=False)
fnames_Ia = os.listdir('Ia/results/')
fnames_Ia.remove('master_fitres.fitres')
fnames_Ia.remove('salt3')
fnames_Ia.remove('.ipynb_checkpoints')
salt2_wfd = []
for name in fnames_Ia:
fitres_temp = pd.read_csv('Ia/results/' + name, delim_whitespace=True,
comment='#')
salt2_wfd.append(fitres_temp)
salt2_Ia_wfd = pd.concat(salt2_wfd, ignore_index=True)
types, counts = np.unique(salt2_Ia_wfd['SIM_TYPE_INDEX'].values, return_counts=True)
print('types = ', types)
print('counts = ', counts)
salt2_Ia_sample = salt2_Ia_wfd.sample(n=3000-nobjs, replace=False)
final_sample = pd.concat([salt2_Ia_sample, salt2_sample], ignore_index=True)
final_sample['zHD'] = final_sample['SIM_ZCMB']
final_sample.fillna(value=-99, inplace=True)
print('final_sample.shape = ', final_sample.shape)
if c1[i][1] in ['AGN', 'TDE', 'SLSN', 'CART', '91bg']:
cont = c1[i][1]
elif c1[i][1] == '91bg':
cont = 'SNIa-91bg'
else:
cont = 'SN' + c1[i][1]
fname = 'WFD' + str(k) + '/' + str(c1[i][0]) + 'SNIa' + str(round(100 - c1[i][0], 1)) + cont + '.csv'
fname2 = '/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples' + str(k) + '/' + str(c1[i][0]) + \
'SNIa' + str(round(100 - c1[i][0], 1)) + cont + '.csv'
print('fname = ', fname)
print('fname2= ', fname2)
final_sample.to_csv(fname, index=False)
final_sample.to_csv(fname2, index=False)
del final_sample
del cont
###Output
_____no_output_____
###Markdown
DDF Perfect
###Code
k = 5
np.random.seed(k)
salt2_Ia_ddf = pd.read_csv('Ia/results/master_fitres.fitres', comment='#', delim_whitespace=True)
types, counts = np.unique(salt2_Ia_ddf['SIM_TYPE_INDEX'].values, return_counts=True)
zflag = salt2_Ia_ddf['SIM_ZCMB'].values <= 1
data = salt2_Ia_ddf[zflag]
print(types)
print(counts)
nobjs = 3000
final_sample = data.sample(n=nobjs, replace=False)
final_sample['zHD'] = final_sample['SIM_ZCMB']
final_sample.fillna(value=-99, inplace=True)
final_sample.to_csv('DDF' + str(k)+ '/perfect' + str(nobjs) + '.csv')
final_sample.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples' + str(k) + \
'/perfect' + str(nobjs) + '.csv')
c2 = [[72, 'II'], [75, 'II'],[86, 'Iax'], [90, 'Iax'], [90, 'II'], [91, 'Iax'], [92,'Ibc'], [95, 'Iax'],
[95, 'Ibc'], [95,'II'], [98, 'Iax'], [98, 'Ibc'], [98, 'II'], [99.1,'CART'], [99.8, '91bg'],
[99.9, 'AGN'], [99.9, 'SLSN'], [99, 'Iax'], [99, 'Ibc'], [99, 'II']]
k = 1
np.random.seed(k)
for i in range(len(c2)):
if c2[i][1] not in ['AGN', 'SLSN', 'CART', '91bg']:
cont = 'SN' + c2[i][1]
elif c2[i][1] == '91bg':
cont = 'SNIa-91bg'
else:
cont = c2[i][1]
salt2_ddf = pd.read_csv(c2[i][1] + '/results/master_fitres.fitres', comment='#', delim_whitespace=True)
types, counts = np.unique(salt2_ddf['SIM_TYPE_INDEX'].values, return_counts=True)
print(types)
print(counts)
nobjs = round(0.01* (100 - c2[i][0]) * 3000)
print(nobjs)
salt2_ddf_sample = salt2_ddf.sample(n=nobjs, replace=False)
salt2_Ia_ddf = pd.read_csv('Ia/results/master_fitres.fitres', comment='#', delim_whitespace=True)
types, counts = np.unique(salt2_Ia_ddf['SIM_TYPE_INDEX'].values, return_counts=True)
salt2_Ia_sample = salt2_Ia_ddf.sample(n=3000-nobjs, replace=False)
final_sample = pd.concat([salt2_Ia_sample, salt2_ddf_sample], ignore_index=True)
final_sample['zHD'] = final_sample['SIM_ZCMB']
final_sample.fillna(value=-99, inplace=True)
fname2 = 'DDF' + str(k) + '/' + str(c2[i][0]) + 'SNIa' + str(round(100 - c2[i][0], 1)) + cont + '.csv'
print(fname2)
final_sample.to_csv(fname2, index=False)
    fname3 = '/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples' + str(k) + '/' + \
             str(c2[i][0]) + 'SNIa' + str(round(100 - c2[i][0], 1)) + cont + '.csv'
    final_sample.to_csv(fname3, index=False)
ddd = pd.read_csv('DDF1/86SNIa14SNIax.csv', index_col=False)
sum(ddd['SIM_TYPE_INDEX'].values == 11)/3000
###Output
_____no_output_____
###Markdown
Make a list of all DDF objects surviving the SALT2 fit
###Code
import os
import pandas as pd
import numpy as np
fnames = os.listdir('.')
fnames.remove('make_samples.ipynb')
fnames.remove('summary.ipynb')
fnames.remove('.ipynb_checkpoints')
fnames.remove('WFD')
fnames.remove('DDF')
fnames.remove('DDF_Alex')
fnames.remove('plots')
fnames.remove('WFD_Alex')
all_fitres = []
for name in fnames:
try:
data = pd.read_csv(name + '/results/master_fitres.fitres', comment='#', delim_whitespace=True)
data.fillna(value=-99, inplace=True)
data['zHD'] = data['SIM_ZCMB']
all_fitres.append(data)
except:
pass
all_fitres = pd.concat(all_fitres, ignore_index=True)
all_fitres.fillna(value=-99, inplace=True)
types = np.array([SNANA_names[item] for item in all_fitres['SIM_TYPE_INDEX'].values])
all_fitres['types_names'] = types
all_fitres2 = {}
all_fitres2['id'] = all_fitres['CID'].values
all_fitres2['redshift'] = all_fitres['SIM_ZCMB'].values
all_fitres2['type'] = [SNANA_names[item] for item in all_fitres['SIM_TYPE_INDEX'].values]
all_fitres2['code'] = all_fitres['SIM_TYPE_INDEX'].values
all_fitres2['orig_sample'] = ['test' for i in range(all_fitres.shape[0])]
all_fitres2['queryable'] = [True for i in range(all_fitres.shape[0])]
all_fitres3 = pd.DataFrame(all_fitres2)
all_fitres.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples/all_DDF.csv', index=False)
all_fitres
all_fitres = pd.read_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples/all_DDF.csv', index_col=False)
for i in range(1,6):
np.random.seed(i)
d1 = all_fitres.sample(n=3000, replace=False)
d1.to_csv('DDF' + str(i) + '/random3000.csv', index=False)
d1.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples' + str(i) + '/random3000.csv',
index=False)
del d1
###Output
_____no_output_____
###Markdown
Make a list of all WFD objects surviving the SALT2 fit
###Code
fnames = os.listdir('.')
fnames.remove('make_samples.ipynb')
fnames.remove('summary.ipynb')
fnames.remove('.ipynb_checkpoints')
fnames.remove('WFD')
fnames.remove('DDF')
fnames.remove('DDF_Alex')
fnames.remove('WFD_Alex')
fnames.remove('plots')
all_wfd = []
data_all_wfd = []
for name in fnames:
flist = os.listdir(name + '/results/')
flist.remove('master_fitres.fitres')
flist.remove('salt3')
for elem in flist:
try:
data = pd.read_csv(name + '/results/' + elem, comment='#', delim_whitespace=True)
data['zHD'] = data['SIM_ZCMB']
data.fillna(value=-99, inplace=True)
data_all_wfd.append(data)
dtemp = {}
dtemp['id'] = data['CID'].values
dtemp['redshift'] = data['SIM_ZCMB'].values
dtemp['type'] = [SNANA_names[i] for i in data['SIM_TYPE_INDEX'].values]
dtemp['code'] = data['SIM_TYPE_INDEX'].values
dtemp['orig_sample'] = ['test' for i in range(data.shape[0])]
dtemp['queryable'] = [True for i in range(data.shape[0])]
dtemp = pd.DataFrame(dtemp)
all_wfd.append(dtemp)
except:
pass
all_fitres_wfd = pd.concat(all_wfd, ignore_index=True)
data_all_wfd2 = pd.concat(data_all_wfd, ignore_index=True)
data_all_wfd2.fillna(value=-99, inplace=True)
data_all_wfd2.append(data)
types_wfd = np.array([SNANA_names[item] for item in data_all_wfd2['SIM_TYPE_INDEX'].values])
data_all_wfd2['types_names'] = types_wfd
data_all_wfd2.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples/all_WFD.csv', index=False)
all_fitres_wfd.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/samples/all_objs_survived_SALT2_WFD.csv',
index=False)
all_fitres_wfd = pd.read_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples/all_WFD.csv',
index_col=False)
for i in range(1,6):
np.random.seed(i)
d1 = all_fitres_wfd.sample(n=3000, replace=False)
d1.to_csv('WFD' + str(i) + '/random3000.csv', index=False)
d1.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples' + str(i) + '/random.csv',
index=False)
del d1
types = [SNANA_names[item] for item in all_fitres_wfd['SIM_TYPE_INDEX'].values]
all_fitres_wfd['types_names'] = types
all_fitres_wfd.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples/all_WFD.csv',
                      index=False)
###Output
_____no_output_____
###Markdown
plots
###Code
import matplotlib.pylab as plt
import seaborn as sns
types = np.array([SNANA_names[item] for item in all_fitres['SIM_TYPE_INDEX'].values])
sntype, freq = np.unique(types, return_counts=True)
plt.pie(freq, labels=sntype,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
types_wfd = np.array([SNANA_names[item] for item in data_all_wfd2['SIM_TYPE_INDEX'].values])
sntype, freq = np.unique(types_wfd, return_counts=True)
plt.pie(freq, labels=sntype,
autopct='%1.1f%%', shadow=True, startangle=140)
plt.axis('equal')
plt.show()
###Output
_____no_output_____
###Markdown
Kyle results - DDF
###Code
fname_ddf = '/media/kara/resspect_metric/workspace/kyle_boone_ddf.csv'
kyle_ddf = pd.read_csv(fname_ddf, names=['object_id','6','15','16','42','52','53','62','64','65','67','88',
'90','92','95'], skiprows=1)
class_final = []
for i in range(kyle_ddf.shape[0]):
indx = np.argsort(kyle_ddf.iloc[i].values[1:])[-1]
code = int(kyle_ddf.keys()[indx + 1])
class_final.append(types_names[code])
class_final = np.array(class_final)
flag_class_Ia = class_final == 'Ia'
kyle_ddf_Ia = kyle_ddf[flag_class_Ia]
k = 5
np.random.seed(k)
kyle_ddf_sample = kyle_ddf_Ia.sample(n=3000, replace=False)
fitres_ddf_flag = np.array([item in kyle_ddf_sample['object_id'].values
for item in all_fitres['CID'].values])
sum(fitres_ddf_flag)
kyle_fitres_ddf = all_fitres[fitres_ddf_flag]
ids, freq = np.unique(kyle_fitres_ddf['CID'].values, return_counts=True)
sum(freq > 1 )
kyle_fitres_ddf.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/ddf/emille_samples' + str(k) + '/fiducial3000.csv', index=False)
kyle_fitres_ddf.to_csv('/media/emille/git/COIN/RESSPECT_work/PLAsTiCC/metrics_paper/resspect_metric/SALT2_fit/DDF' + str(k) + '/fiducial3000.csv', index=False)
sum(kyle_fitres_ddf['SIM_TYPE_INDEX'].values == 11)
###Output
_____no_output_____
###Markdown
Kyle results - WFD
###Code
np.random.seed(750)
fname_wfd = '/media/kara/resspect_metric/workspace/kyle_boone_wfd.csv'
kyle_wfd = pd.read_csv(fname_wfd, names=['object_id','6','15','16','42','52','53','62','64','65','67','88',
'90','92','95'], skiprows=1)
class_final = []
for i in range(kyle_wfd.shape[0]):
indx = np.argsort(kyle_wfd.iloc[i].values[1:])[-1]
code = int(kyle_wfd.keys()[indx + 1])
class_final.append(types_names[code])
class_final = np.array(class_final)
flag_class_Ia = class_final == 'Ia'
kyle_wfd_Ia = kyle_wfd[flag_class_Ia]
kyle_wfd_sample = kyle_wfd_Ia.sample(n=3000, replace=False)
fitres_wfd_flag = np.array([item in kyle_wfd_sample['object_id'].values for item in data_all_wfd2['CID'].values])
sum(fitres_wfd_flag)
kyle_fitres_wfd = data_all_wfd2[fitres_wfd_flag]
kyle_fitres_wfd2 = kyle_fitres_wfd.drop_duplicates(subset=['CID'], keep='first')
ids, freq = np.unique(kyle_fitres_wfd2['CID'].values, return_counts=True)
sum(freq > 1 )
k = 5
kyle_fitres_wfd2.to_csv('/media/RESSPECT/data/PLAsTiCC/for_metrics/wfd/emille_samples' + str(k) + '/fiducial3000.csv',
index=False)
kyle_fitres_wfd2.to_csv('/media/emille/git/COIN/RESSPECT_work/PLAsTiCC/metrics_paper/resspect_metric/SALT2_fit/WFD' + str(k) + '/fiducial3000.csv', index=False)
sum(kyle_fitres_wfd2['SIM_TYPE_INDEX'].values == 11)/3000
###Output
_____no_output_____ |
IIMB-Assignments/Assgn-2/M3_Assignment Cases and Data Files -20180831/.ipynb_checkpoints/Module3_Assignment2_Sayantan_Raha-Copy1-checkpoint.ipynb | ###Markdown
Q 1.1 We will use the following formula to calculate the coefficient of CRIM.\begin{equation*} \beta = r * \frac{SD_Y} {SD_X}\end{equation*}\begin{equation*}\text {where r = Correlation of X (CRIM) and Y (PRICE),} \end{equation*}\begin{equation*}SD_X \text{= Standard deviation of X &}\end{equation*}\begin{equation*}SD_Y \text{= Standard deviation of Y}\end{equation*}From table 1.1 we can find SDx = 8.60154511 & SDy = 9.197. From table 1.2 we can find r = -.388. Using the above we can find:
###Code
# Imports assumed from the notebook's setup (not included in this excerpt)
import numpy as np
import pandas as pd
import scipy as sp
from scipy import stats

sd_crim = 8.60154511
sd_price = 9.197
r = -.388
B1 = r * sd_price / sd_crim
print("B1 {}, implies as crime rate increases by 1 unit, unit price reduces by {} units".format(B1, abs(B1)))
###Output
B1 -0.414859883238, implies as crime rate increases by 1 unit, unit price reduces by 0.414859883238 units
###Markdown
Q 1.2 The range of the coefficient is given by:\begin{equation*} \beta \pm \text{t-crit} * SE_{\beta}\end{equation*}where t-crit is the critical value of t for significance level alpha
###Code
n = 506
seb1 = 0.044
tcrit = abs(stats.t.ppf(0.025, df = 505))
print("T-critical at alpha {} and df {} is {}".format(0.5, 505, tcrit))
print("Min B1 {}".format(B1 + tcrit * seb1))
print("Max B1 {}".format(B1 - tcrit * seb1))
print("Price will reduce between 32K to 50K with 95% CI, hence his assumption that it reduces by at least 30K is correct")
###Output
T-critical at alpha 0.5 and df 505 is 1.96467263874
Min B1 -0.328414287133
Max B1 -0.501305479342
Price will reduce between 32K to 50K with 95% CI, hence his assumption that it reduces by at least 30K is correct
###Markdown
Q 1.3 Regression is valid only over the observed range of the predictor. The minimum observed value of crime rate = .0068 > 0. Hence it is incorrect to draw any conclusion about the predicted value of Y for CRIM == 0, as that value is unobserved. We cannot claim the value will be 24.03. Q 1.4 Here the predicted Y can be calculated from the regression equation: 24.033 - 0.414 * 1 (value of CRIM). For large values of n, the range of the predicted Y is given by:\begin{equation*} \hat Y \pm \text{t-crit *} SE_{Y}\end{equation*}where t-crit is the critical value of t for significance level alpha.
###Code
se = 8.484 #seb1 * sd_crim * (n - 1) ** 0.5
#print(se)
yhat = 24.033 - 0.414 * 1
yhat_max = (yhat + tcrit * se)
print("Max Value of Price for CRIM ==1 is {}".format(yhat_max))
###Output
Max Value of Price for CRIM ==1 is 40.2872826671
###Markdown
Q 1.5 Here the predicted Y (mean value of the regression) can be calculated from the regression equation: 22.094 + 6.346 * 1 (value of SEZ). The test statistic is computed as:\begin{equation*} t = \frac {(Y_o - \hat{Y})} {SE_{estimate}} \end{equation*}
###Code
yhat = 22.094 + 6.346
print("Mean Regression value {}".format(yhat))
t = (40 - yhat) / 9.064
print("t-crit at alpha 0.05 is {}".format(t))
print("Y-pred follows a normal distribution. Probability of Price being at least 40 lac is {} percent".format(round((1 - sp.stats.norm.cdf(t))* 100, 2)))
###Output
Mean Regression value 28.44
t-crit at alpha 0.05 is 1.27537511033
Y-pred follows a normal distribution. Probability of Price being at least 40 lac is 10.11 percent
###Markdown
Q 1.6 - a From the residual plot we can see that the spread of the standardised errors is higher for lower values of the standardised prediction than for higher values. Hence the variance of the residuals is not equal, which demonstrates heteroscedasticity. Q 1.6 - b (1) It is a right-skewed distribution. (2) The left tail has a smaller proportion of data than that of a normal distribution. (3) In the 40-80% range the distribution has much less data than a normal distribution. From the P-P plot we conclude there is considerable difference between this distribution and a normal distribution. Q 1.6 - c Based on the above we conclude that this regression equation may not be functionally correct. Q 1.7 The increase in R-squared when a new variable is added to a model is given by the **square of the semi-partial (PART) correlation**. - From Table 1.7: R-squared @ Step 2 = 0.542 - From Table 1.8: PART correlation for adding RES = -.153
###Code
print("R-squared in Step 3 is {}".format(0.542 + (-.153) ** 2))
###Output
R-squared in Step 3 is 0.565409
###Markdown
Q 1.8 It reduces because there is correlation between RM and CRIM. Part of what was explained by RM in model 1 is now being explained by CRIM in model 2, hence the coefficient value of RM reduces. Q 1.9 We will use the model in step 6 for answering this question. - Since the variables are not standardised we cannot use the magnitude of the coefficients as a measure of impact on the dependent variable (Price). - We will use the notion of standardised coefficients to measure how much 1 SD change in a predictor X changes Y (the dependent variable). - From Tables 1.1 and 1.8 we can easily obtain the standardised coefficients for all variables except RM, as the SD of RM is not provided in Table 1.1 and the standardised coefficient of RM is not provided in Table 1.8. A standardised coefficient is calculated using: \begin{equation*} \beta_{STANDARDISED} = \hat\beta * \frac {S_X} {S_Y} \end{equation*}where \begin{equation*} \text{Standard Deviation of X} = S_X \end{equation*}& \begin{equation*} \text{Standard Deviation of Y} = S_Y \end{equation*}- To calculate the variance of RM we will use Model 1 and Model 2 from Table 1.8. In Model 1 the coefficient of RM is 9.102. - In Model 2 the coefficient reduces to 8.391 on adding CRIM. This shows there is correlation between CRIM and RM, which reduces the coefficient of RM in model 2. We can use the following equation to calculate the SD of RM:\begin{equation*} \alpha_{RM_{Model1}} = \beta_{RM_{Model2}} + \frac{\beta_{CRIM_{Model2}} * Cor(RM, CRIM)} {Var(RM)} \end{equation*}- SD is the square root of variance. - From Table 1.2, Cor(RM, CRIM) = -.219; hence SD of RM = 2.13. - We can now use the SD of RM to calculate the standardised coefficient for RM. - From the table below we can see that **RM** has the highest impact on PRICE.
###Code
#print(((8.391 * .388) / (9.102 - 8.391))**0.5)
data = pd.DataFrame({"_": ["INTERCEPT","RM","CRIM","RES","SEZ","Highway", "AGE"]})
data["Coefficients"] = [-8.993, 7.182, -.194, -.318, 4.499, -1.154, -.077]
data["Standardized Coefficients"] = ['', 7.182 * 2.13 / 9.197, -.194 * 8.60154511 / 9.197,
-.238, .0124, .208,
-.077 * 28.1489 / 9.197]
data
###Output
_____no_output_____
###Markdown
Q 2.1 1. The model explains 42.25% of the variation in box office collection. 2. There are outliers in the model. 3. The residuals do not follow a normal distribution. 4. The model cannot be used since R-square is low. 5. Box office collection increases as the budget increases. Statements 1, 2 and 3 are correct. Q 2.2 Here Budget (X) can never be 0, as it may not be possible to produce a movie without money, and X = 0 is unobserved, i.e. X = 0 falls outside the domain of the observed values of the variable X. The relationship between the variables can change as we move outside the observed region. We cannot predict for a point that is outside the range of observed values using the regression model; the model explains the relationship between Y and X within the range of observed values only. Hence Mr Chellapa's observation is incorrect. Q 2.3 Since the variable is insignificant at alpha = 0.05, the coefficient may not be different from zero. There is no statistical evidence that the collection of a movie released in Releasing_Time Normal_Season is different from Releasing_Time Holiday_Season (which is factored into the intercept / constant). Since we do not have the data we cannot rerun the model. We will assume that the coefficient is 0 and its removal does not have any effect on the overall equation (other significant variables). Hence the difference is **Zero**.
###Code
y = 2.685 + .147
#print("With beta = .147 y = {}".format(y))
#print("With beta = 0 y = {}".format(2.685))
###Output
_____no_output_____
###Markdown
Q 2.4 The beta for Releasing_Time Normal_Season is considered to be 0 as it is statistically insignificant at alpha = 0.05; hence it is factored into the intercept term. Releasing_Time Long_Weekend is statistically significant and its coefficient = 1.247.
###Code
Bmax = 1.247 + 1.964 *.588
print("Max B can be {}".format(Bmax))
Bmin = 1.247 - 1.964 *.588
print("Min B can be {}".format(Bmin))
print("Movies released in Long Wekends may earn upto 2.4 lac more than movies released in normal season.")
print("Mr. Chellapa's statement is statistically incorrect.")
###Output
Max B can be 2.4018319999999997
Min B can be 0.09216800000000025
Movies released in Long Wekends may earn upto 2.4 lac more than movies released in normal season.
Mr. Chellapa's statement is statistically incorrect.
###Markdown
Q 2.5 The increase in R-squared when a new variable is added to a model is given by the **square of the semi-partial (PART) correlation**. - From Table 2.5: multiple R @ Step 5 = 0.810, so R-squared = 0.810 ** 2 = .6561 - From Table 2.6: PART correlation for adding Director_CAT C = -.104
###Code
print("R-squared in Step 3 is {}".format(0.6561 + (-.104) ** 2))
###Output
_____no_output_____
###Markdown
Q 2.6 - Budget_35_Cr has the highest impact on the performance of the movie. - Recommendation: use a high enough budget to hire a Category A production house, a Category C director and music director, and produce a comedy movie. Q 2.7 - We cannot say that the variables have no relationship with Y (box office collection). - We can conclude that, in the presence of the other variables, the variables in Model 2 do not explain additional information about Y (the Venn diagram below illustrates the idea of shared explained variation).
###Code
# Import the library
import matplotlib.pyplot as plt
from matplotlib_venn import venn3
x =10
# Make the diagram
venn3(subsets = (x, 10, 10, 10, 10,10, 10))
plt.show()
###Output
_____no_output_____
###Markdown
Q 2.8 We are making the assumption that the variable Youtube_Views counts views of the actual movie and not of trailers before the release date; if it counts trailer views, the following explanation will not be valid. Also, we assume that revenue collected from advertisements during Youtube views does not fall under box office collection. Youtube_Views will then not contribute anything meaningful functionally to the box office collection, as the movie has already been created and released in theaters and all possible collection is completed; the main purpose of the prediction is to understand, before making a movie, which factors may lead to better revenue collection. Q 3.1 Table 3.1- **Observations** (N) = 543- **Standard Error** - \begin{equation*} SE = \sqrt {\frac{ \sum_{k=1}^N {(Y_k - \hat{Y_k})^2}} {N - k - 1}} \end{equation*} \begin{equation*} \sum (Y_k - \hat{Y_k})^2 = \sum \epsilon_k^2 = \text{Residual SS (SSE)} = \text{17104.06 (Table 3.2)}\end{equation*}- **R-Squared** = 1 - SSE / SST - SSE = 17104.06 (Table 3.2) - SST = 36481.89 (Table 3.2)- **Adjusted R-Squared** = 1 - (SSE / (N-k-1)) / (SST / (N-1)) - N = 543 - K = 3- **Multiple R** = \begin{equation*} \sqrt {R_{Squared}}\end{equation*}
###Code
x = ["Multiple R", "R Square", "Adjusted R Squared", "Standard Error", "Observations"]
data = pd.DataFrame({"Regression Statistics": x})
data["_"] = [(1 - 17104.06/36481.89) ** 0.5,1 - 17104.06/36481.89, 1 - (17104.06/(543 - 3 -1))/(36481.89/542),((17104.06)/541) ** 0.5,543]
data
###Output
_____no_output_____
###Markdown
Table 3.2- **DF Calculation** - DF for Regression (K) = Number of variables = 3 - DF for Residual = N - K - 1 = 539- **SS Calculation** - Residual SS (SSE) = 17104.06 (given) - Total SS (TSS)= 36481.89 (given) - Regression SS (SSR) = TSS - SSE = 19377.83- **MS Calculation** - MSR (Regression) = SSR / DF for SSR (=3) - MSE (Error) = SSE / DF for SSE (= 539)- **F Claculation** - F = MSR / MSE
###Code
x = ["Regression", "Residual", "Total"]
ss = [36481.89 - 17104.06, 17104.06,36481.89]
df = [3, 539,542]
ms = [19377.83 / 3, 17104.06 / 539, '']
f = [(19377.83 / 3) / (17104.06 / 539),'','']
sf = [1 - sp.stats.f.cdf((19377.83 / 3) / (17104.06 / 539), 3, 539),'','']
data = pd.DataFrame({"_": x})
data["DF"] = df
data["SS"] = ss
data["MS"] = ms
data["F"] = f
data["SignificanceF"] = sf
data
###Output
_____no_output_____
###Markdown
Table 3.3 - Coefficients- MLR T-Test - \begin{equation*} t_i = \frac {\beta_i - 0} {Se(\beta_i)}\end{equation*} where i denotes the different variables (here i = 3)
###Code
data = pd.DataFrame({"_":["Intercept", "Margin", "Gender", "College"]})
data["Coefficeints"] = [38.59235, 5.32e-05, 1.551306, -1.47506]
data["Standard Error"] = [0.937225, 2.18e-06, 0.777806, 0.586995]
data["t Stat"] = [(38.59235 / 0.937225),5.32e-05 / 2.18e-06, 1.551306/0.777806, -1.47506/ 0.586995]
data["P-Value"] = ['','','','']
data["Lower 95%"] = [36.75129, 4.89E-05, 0.023404, -2.62814]
data["Upper 95%"] = [40.4334106,5.7463E-05,3.07920835,-0.3219783]
data
###Output
_____no_output_____
###Markdown
Q 3.2 From the table above we see that |t| > 1.964 for all the variables; hence all the variables are significant. Q 3.3 The critical value of the F-distribution with DF = (3, 539) at 95% significance is 2.621, which the model F easily exceeds. Hence the model is significant.
###Code
1 - sp.stats.f.cdf(2.621, 3, 539)
sp.stats.f.ppf(0.95, 3, 539)
###Output
_____no_output_____
###Markdown
Q 3.4 The increase in R-squared when a new variable is added to a model is given by the **square of the semi-partial (PART) correlation**. - R-squared for Model 2 = 0.52567 (R1) - R-squared for Model 3 = 0.531163 (R2). Part correlation of College & % Votes = \begin{equation*}\sqrt{R_2 - R_1} \end{equation*}
###Code
print("Increase in R-Squared due to adding College = {}".format(0.531163 - 0.52567))
print("Part Correlation of College & % Votes = {}".format((0.531163 - 0.52567)**0.5))
###Output
Increase in R-Squared due to adding College = 0.005493
Part Correlation of College & % Votes = 0.0741147758548
###Markdown
Q 3.5 We will conduct a partial F-test between models to test the significance of each model. We make the assumption that the variables added at each step (model) are significant at alpha = 0.05.\begin{equation*}F_{PARTIAL} = \frac{\frac{R_{FULL}^2 - R_{PARTIAL}^2} {k - r}} {\frac{1 - R_{FULL}^2} {N - k - 1}}\end{equation*}where k = number of variables in the full model, r = number of variables in the reduced model, N = total number of records
###Code
def f_partial(rf, rp, n, k, r):
return ((rf **2 - rp ** 2)/(k-r))/((1 - rf ** 2)/ (n - k - 1))
print("Model 3 Partial F {}".format(f_partial(0.531163, 0.52567, 543, 3, 2)))
print("Model 3 Critical F at Df = (1, 539) {}".format(1 - sp.stats.f.cdf(4.36, 1, 539)))
print("Model 4 Partial F {}".format(f_partial(0.56051, 0.531163, 543, 4, 3)))
print("Model 4 Critical F at Df = (1, 539) {}".format(1 - sp.stats.f.cdf(25.13, 1, 539)))
print("Model 5 Partial F {}".format(f_partial(0.581339, 0.56051, 543, 5, 4)))
print("Model 5 Critical F at Df = (1, 539) {}".format(1 - sp.stats.f.cdf(19.29, 1, 539)))
print("\nHence we can see that all the models are significant. The number of features (5) are not very high, hence we conclude it's justified to add the additional variables")
###Output
Model 3 Partial F 4.35874463399
Model 3 Critical F at Df = (1, 539) 0.0372611210892
Model 4 Partial F 25.1317657533
Model 4 Critical F at Df = (1, 539) 7.28176735132e-07
Model 5 Partial F 19.2914065358
Model 5 Critical F at Df = (1, 539) 1.35225861937e-05
Hence we can see that all the models are significant. The number of features (5) are not very high, hence we conclude it's justified to add the additional variables
###Markdown
Q 3.6 - The equations used for computing standardized coefficients are provided in Q 1.9. - Since the variables are not standardised we cannot use the magnitude of the coefficients as a measure of impact on the dependent variable (Vote %). - We will use the notion of standardised coefficients to measure how much 1 SD change in a predictor X changes Y (the dependent variable). - From the table below we can see that **MARGIN** has the highest impact on Vote %: a 1 SD change in Margin changes Vote % by about .75 SD.
###Code
data = pd.DataFrame({"_": ["INTERCEPT","MARGIN","Gender","College","UP","AP"]})
data["Coefficients"] = [38.56993, 5.58E-05, 1.498308, -1.53774, -3.71439, 5.715821]
data["Standard deviation"] = ['', 111365.7, 0.311494, 0.412796, 0.354761, 0.209766]
data["Standardized Coefficients"] = ['', 5.58E-05 * 111365.7 / 8.204253, 1.498308 * 0.311494 / 8.204253,
-1.53774 * 0.412796 / 8.204253, -3.71439 * 0.354761 / 8.204253,
5.715821 * 0.209766 / 8.204253]
data
###Output
_____no_output_____
###Markdown
Q 4.1
###Code
positives = 353+692
negatives = 751+204
N = positives + negatives
print("Total Positives: {} :: Total Negatives: {} :: Total Records: {}".format(positives, negatives, N))
pi1 = positives / float(N)
pi2 = negatives / float(N)
print("P(Y=1) = positives / N = {} :: P(Y=0) = negatives /N = {}".format(pi1, pi2))
_2LL0 = -2* (negatives * np.log(pi2) + positives * np.log(pi1))
print("-2LL0 = {}".format(_2LL0))
###Output
Total Positives: 1045 :: Total Negatives: 955 :: Total Records: 2000
P(Y=1) = positives / N = 0 :: P(Y=0) = negatives /N = 0
-2LL0 = inf
###Markdown
- -2LL0 is called the "null deviance" of a model. It is -2 log-likelihood of a model with no predictor variables; hence we obtain the probabilities of positives and negatives in the dataset from their frequencies for such a model.- After adding "Premium", -2LL reduces to 2629.318 (Table 4.2). Hence the reduction is equal to (-2LL0 - (-2LLm)):
###Code
print(2768.537 - 2629.318)
###Output
139.219
###Markdown
Q 4.2
###Code
print("True Positive :Actually Positive and Predicted Positive = {}".format(692))
print("False Positive :Actually Negative and Predicted Positive = {}".format(204))
print("Precision = True Positive / (True Positive + False Positive) = {}".format(692.0 / (692 + 204)))
###Output
True Positive :Actually Positive and Predicted Positive = 692
False Positive :Actually Negative and Predicted Positive = 204
Precision = True Positive / (True Positive + False Positive) = 0.772321428571
###Markdown
Q 4.3 exp(B) is the change in the odds ratio. The odds ratio can be interpreted as the multiplicative adjustment to the odds of the outcome, given a **unit** change in the independent variable. In this case the unit of measurement for Premium (1 INR) is very small compared to the actual premiums (thousands of INR), hence a unit change does not lead to a meaningful change in the odds, and subsequently the odds ratio will be very close to one. Q 4.4
###Code
print("The model predicts 751 + 353 = {} customers have a probability less than 0.5 of paying premium".format(
751+353))
print("The will call 1104 customers through Call Center")
###Output
The model predicts 751 + 353 = 1104 customers have a probability less than 0.5 of paying premium
The will call 1104 customers through Call Center
###Markdown
Q 4.5 The total number of points is 1960. The metrics are computed as: total = tp + fp + fn + tn; sensitivity = tp / (tp + fn); specificity = tn / (tn + fp); recall = sensitivity; precision = tp / (tp + fp)
###Code
tp = 60.0
fp = 20.0
fn = 51*20
tn = 43 * 20
total = tp + fp + fn + tn
print(total)
sensitivity = tp/ (tp + fn)
specificity = tn / (tn + fp)
recall = sensitivity
precision = tp / (tp + fp)
print("Precision {} :: \nRecall {} :: \nsensitivity {} :: \nspecificity {} ::".format(precision, recall, sensitivity, specificity))
###Output
1960.0
Precision 0.75 ::
Recall 0.0555555555556 ::
sensitivity 0.0555555555556 ::
specificity 0.977272727273 ::
###Markdown
Q 4.6 Probability can be calculated using the following formula:\begin{equation*} P(Y=1) = \frac{e^z} {1 + e^z}\end{equation*}\begin{equation*} \text{where z} = \beta_0 + \beta_1 * Salaried + \beta_2 * HouseWife +\beta_3 * others\end{equation*}However, in this case the variable Housewife is not a significant variable, hence using this equation to calculate the probability for housewife may not be appropriate. We will nevertheless proceed to compute the probability using the equation, both with the reported coefficient and with the coefficient set to 0 (B is not significantly different from 0 for insignificant variables).
###Code
print("Probability of House wife paying the Premium is (beta ==22.061): {}".format(np.exp(-.858 + 22.061)
/ (1 + np.exp(-.858 + 22.061))))
print("Probability of House wife paying the Premium is (beta = 0): {}".format(np.exp(-.858 + 0)
/ (1 + np.exp(-.858 + 0))))
print("Since Beta is insignificant Beta == 0, hence .298 is the probability for housewife paying renewal")
###Output
Probability of House wife paying the Premium is (beta ==22.061): 0.999999999381
Probability of House wife paying the Premium is (beta = 0): 0.297757372269
Since Beta is insignificant Beta == 0, hence .298 is the probability for housewife paying renewal
###Markdown
Q 4.7 The constant / intercept captures people with the occupations **Professionals, Business and Agriculture** (the baseline categories), and they have a lower probability of renewal payment. Q 4.8 Probability can be calculated using the following formula:\begin{equation*} P(Y=1) = \frac{e^z} {1 + e^z}\end{equation*}\begin{equation*} \text{where z} = constant + \beta_1 * PolicyTerm\end{equation*}SSC education, Agriculturist profession & marital status Single are factored into the constant term of the given equation.
###Code
print("Probability : {}".format(np.exp(3.105 + 60 * -0.026)/ (1 + np.exp(3.105 + 60 * -0.026))))
###Output
Probability : 0.824190402911
###Markdown
Q 4.9 The coefficients describe the relationship between the independent variables and the dependent variable, where the dependent variable is on the logit scale. These estimates give the increase in the predicted log odds produced by a 1 unit increase in the predictor, holding all other predictors constant.**Recommendations**: - Married people have a higher possibility of renewal (the log odds increase). - As the payment term increases, the log odds of renewal reduce slightly. - Professionals and businessmen have a much higher chance of defaulting (lower log odds of renewal). - Being a graduate increases the log odds of renewal. - Annual / half-yearly / quarterly policy renewal schemes see reduced log odds of renewal. - Model change - Premium: the variable's scale should be changed for a better understanding of Premium's contribution to the affinity to renew the policy (e.g. reduce the unit to 1000s). - Strategy: - For new customers, target married people and graduates. - For existing customers, send more reminders (via call centers / messages etc.) to businessmen and professionals for renewal. - For people paying premiums in yearly / quarterly / half-yearly terms, send reminders before the renewal dates. - For people with long payment terms, keep sending payment reminders as the tenure of their engagement increases. Q 4.10 Gain is calculated as:\begin{equation*} gain = \frac {\text{cumulative number of positive obs up to decile i}} {\text {Total number of positive observations}} \end{equation*}Lift is calculated as:\begin{equation*} lift = \frac {\text{cumulative number of positive obs up to decile i}} {\text {Total number of positive observations up to decile i from a random model}} \end{equation*}
###Code
data = pd.DataFrame({'Decile': [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1]})
data['posunits'] = [31, 0, 0, 0, 3, 5, 5, 4, 2, 1]
data['negunits'] = [0, 0, 0, 0, 0, 5, 11, 17, 12, 2]
data['posCountunits'] = data['posunits'] * 20
data['negCountunits'] = data['negunits'] * 20
avgPerDec = np.sum(data['posCountunits']) / 10
data['avgCountunits'] = avgPerDec
data['cumPosCountunits'] = data['posCountunits'].cumsum()
data['cumAvgCountunits'] = data['avgCountunits'].cumsum()
data['lift'] = data['cumPosCountunits'] / data['cumAvgCountunits']
data['gain'] = data['cumPosCountunits'] / data['posCountunits'].sum()
data['avgLift'] = 1
#print(df)
#### Plots
plt.figure(figsize=(15, 5))
plt.subplot(1,2,1)
plt.plot(data.avgLift, 'r-', label='Average Model Performance')
plt.plot(data.lift, 'g-', label='Predict Model Performance')
plt.title('Cumulative Lift Chart')
plt.xlabel('Deciles')
plt.ylabel('Normalised Model')
plt.legend()
plt.xlim(0, 10)
plt.subplot(1,2,2)
plt.plot(data.Decile, 'r-', label='Average Model Performance')
plt.plot(data.gain, 'g-', label='Predict Model Performance')
plt.title('Cumulative Gain Chart')
plt.xlabel('Deciles')
plt.ylabel('Gain')
plt.legend()
plt.xlim(0, 10)
data
###Output
_____no_output_____ |
module07/[HW07]NLP.ipynb | ###Markdown
Natural Language Processing In this homework, you will apply the TF-IDF technique to text classification as well as use the word2vec model to generate dense word embeddings for other NLP tasks. Text Classification The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. To the best of my knowledge, it was originally collected by Ken Lang, probably for his Newsweeder: Learning to filter netnews paper, though he does not explicitly mention this collection. The 20 newsgroups collection has become a popular data set for experiments in text applications of machine learning techniques, such as text classification and text clustering. In this lab, we will experiment with different feature extraction methods on the 20 newsgroups dataset, including the count vector and the TF-IDF vector. Also, we will apply the Naive Bayes classifier to this dataset and report the prediction accuracy. Load and explore the 20 newsgroups data The 20 newsgroups data is part of the sklearn library. We can load it directly using the following command.
###Code
# load the traning data and test data
import numpy as np
from sklearn.datasets import fetch_20newsgroups
twenty_train = fetch_20newsgroups(subset='train', shuffle=False)
twenty_test = fetch_20newsgroups(subset='test', shuffle=False)
# print total number of categories
print("Number of training data:" + str(len(twenty_train.data)))
print("Number of categories:" + str(len(twenty_train.target_names)))
# print the first text and its category
print(twenty_train.data[0])
print(twenty_train.target[0])
# You can check the target variable by printing all the categories
twenty_train.target_names
###Output
_____no_output_____
###Markdown
Build a Naive Bayes Model Your task is to build an ML model to classify the newsgroup data into different categories. You will try both raw counts and TF-IDF for feature extraction, followed by a Naive Bayes classifier. Note that you can connect the feature generation and model training steps into one by using the [pipeline API](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html) in sklearn.Try to use Grid Search to find the best hyperparameters from the following settings (feel free to explore other options as well):* Different ngram ranges* Whether or not to remove stop words* Whether or not to apply IDFAfter building the best model on the training set, we apply that model to make predictions on the test data and report its accuracy.
###Code
# TODO
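# A possible sketch (one reasonable way to fill in the TODO, not the only one):
# chain CountVectorizer -> TfidfTransformer -> MultinomialNB in a Pipeline,
# then let GridSearchCV pick the n-gram range, stop-word handling and IDF flag.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import GridSearchCV

text_clf = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', MultinomialNB()),
])

param_grid = {
    'vect__ngram_range': [(1, 1), (1, 2)],   # unigrams vs. unigrams + bigrams
    'vect__stop_words': [None, 'english'],   # keep vs. remove stop words
    'tfidf__use_idf': [True, False],         # raw term counts vs. TF-IDF weighting
}

search = GridSearchCV(text_clf, param_grid, cv=3, n_jobs=-1)
search.fit(twenty_train.data, twenty_train.target)
print("Best params:", search.best_params_)

# evaluate the best model on the held-out test split
predicted = search.predict(twenty_test.data)
print("Test accuracy:", np.mean(predicted == twenty_test.target))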
###Output
_____no_output_____
###Markdown
--------- Word Embedding with word2vecWord embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing (NLP) where words or phrases from the vocabulary are mapped to vectors of real numbers. In this assessment, we will experiment with the [word2vec](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf) model from the [gensim](https://radimrehurek.com/gensim/) package and generate word embeddings from a review dataset. You can then explore those word embeddings and see if they make sense semantically.
###Code
import gzip
import logging
import warnings
from gensim.models import Word2Vec
warnings.simplefilter(action='ignore', category=FutureWarning)
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
###Output
_____no_output_____
###Markdown
Load the review data
###Code
import gensim
def read_input(input_file):
"""This method reads the input file which is in gzip format"""
print("reading file {0}...this may take a while".format(input_file))
with gzip.open(input_file, 'rb') as f:
for i, line in enumerate(f):
if (i % 10000 == 0):
print("read {0} reviews".format(i))
            # do some pre-processing and return the list of words for each review text
yield gensim.utils.simple_preprocess(line)
documents = list(read_input('reviews_data.txt.gz'))
logging.info("Done reading data file")
###Output
_____no_output_____
###Markdown
Train the word2vec modelThe word2vec algorithms include skip-gram and CBOW models, using either hierarchical softmax or negative sampling introduced in Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality. A word2vec tutorial can be found [here](https://rare-technologies.com/word2vec-tutorial/).
###Code
# TODO build vocabulary and train model
model = None
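# One possible way to fill in the TODO above (a sketch, not the only valid setup).
# NOTE: the keyword names assume gensim 4.x; gensim 3.x uses size= instead of vector_size=.
model = Word2Vec(
    vector_size=150,  # dimensionality of the word vectors
    window=10,        # context window size
    min_count=2,      # ignore words appearing fewer than 2 times
    workers=4,        # parallel worker threads
)
model.build_vocab(documents)
model.train(documents, total_examples=model.corpus_count, epochs=10)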
###Output
_____no_output_____
###Markdown
Find similar words for a given wordOnce the model is built, you can find interesting patterns in the model. For example, can you find the 5 most similar words to the word `polite`?
###Code
# TODO: look up top 5 words similar to 'polite' using most_similar function
# Feel free to try other words and see if it makes sense.
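# A possible answer (sketch): in gensim 4.x the trained vectors live on model.wv
# (older gensim also exposed most_similar directly on the model object).
print(model.wv.most_similar(positive=['polite'], topn=5))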
###Output
_____no_output_____
###Markdown
Compare the word embeddings by comparing their similaritiesWe can also find the similarity between two words in the embedding space. Can you find the similarities between the word `great` and `good`/`horrible`, and also between `dirty` and `clean`/`smelly`? Feel free to play around with the word embeddings you just learnt and see if they make sense.
###Code
# TODO: find similarities between two words using similarity function
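# A possible answer (sketch): cosine similarity between two word vectors.
print('great vs good    :', model.wv.similarity('great', 'good'))
print('great vs horrible:', model.wv.similarity('great', 'horrible'))
print('dirty vs clean   :', model.wv.similarity('dirty', 'clean'))
print('dirty vs smelly  :', model.wv.similarity('dirty', 'smelly'))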
###Output
_____no_output_____ |
19-1/Machine-Learning-3/1901 ML3 HW2.ipynb | ###Markdown
First visit Monte Carlo Policy Evaluation ┌─┬─┬─┬─┐│0 │1 │2 │3 │├─┼─┼─┼─┤│4 │X │5 │6 │├─┼─┼─┼─┤│7 │8 │9 │10│└─┴─┴─┴─┘ State 3: +1 point, state 6: -1 point. Actions: 0 left, 1 right, 2 up, 3 down. V
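In other words, the cell below forms the first-visit Monte Carlo estimate \begin{equation*} V(s) \approx \frac{1}{N(s)} \sum_{i=1}^{N(s)} G^{(i)}(s) \end{equation*} where $G^{(i)}(s)$ is the discounted return observed after the first visit to state $s$ in episode $i$ and $N(s)$ is the number of episodes in which $s$ was visited; in the code this is `cum_gains / n_visits`.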
###Code
import numpy as np
from copy import deepcopy
testmode = 0
# set parameters ###############################################################
epoch = 10000
# set parameters ###############################################################
# state
states = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
N_STATES = len(states)
# action
actions = [0, 1, 2, 3]
N_ACTIONS = len(actions)
# transition probabilities
P = np.empty((N_STATES, N_ACTIONS, N_STATES))
# 0 1 2 3 4 5 6 7 8 9 10
P[ 0, 0, :] = [ .9, 0, 0, 0, .1, 0, 0, 0, 0, 0, 0]
P[ 0, 1, :] = [ .1, .8, 0, 0, .1, 0, 0, 0, 0, 0, 0]
P[ 0, 2, :] = [ .9, .1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 0, 3, :] = [ .1, .1, 0, 0, .8, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 1, 0, :] = [ .8, .2, 0, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 1, :] = [ 0, .2, .8, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 2, :] = [ .1, .8, .1, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 3, :] = [ .1, .8, .1, 0, 0, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 2, 0, :] = [ 0, .8, .1, 0, 0, .1, 0, 0, 0, 0, 0]
P[ 2, 1, :] = [ 0, 0, .1, .8, 0, .1, 0, 0, 0, 0, 0]
P[ 2, 2, :] = [ 0, .1, .8, .1, 0, 0, 0, 0, 0, 0, 0]
P[ 2, 3, :] = [ 0, .1, 0, .1, 0, .8, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 3, 0, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 1, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 2, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 3, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 4, 0, :] = [ .1, 0, 0, 0, .8, 0, 0, .1, 0, 0, 0]
P[ 4, 1, :] = [ .1, 0, 0, 0, .8, 0, 0, .1, 0, 0, 0]
P[ 4, 2, :] = [ .8, 0, 0, 0, .2, 0, 0, 0, 0, 0, 0]
P[ 4, 3, :] = [ 0, 0, 0, 0, .2, 0, 0, .8, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 5, 0, :] = [ 0, 0, .1, 0, 0, .8, 0, 0, 0, .1, 0]
P[ 5, 1, :] = [ 0, 0, .1, 0, 0, 0, .8, 0, 0, .1, 0]
P[ 5, 2, :] = [ 0, 0, .8, 0, 0, .1, .1, 0, 0, 0, 0]
P[ 5, 3, :] = [ 0, 0, 0, 0, 0, .1, .1, 0, 0, .8, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 6, 0, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 1, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 2, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 3, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 7, 0, :] = [ 0, 0, 0, 0, .1, 0, 0, .9, 0, 0, 0]
P[ 7, 1, :] = [ 0, 0, 0, 0, .1, 0, 0, .1, .8, 0, 0]
P[ 7, 2, :] = [ 0, 0, 0, 0, .8, 0, 0, .1, .1, 0, 0]
P[ 7, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, .9, .1, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 8, 0, :] = [ 0, 0, 0, 0, 0, 0, 0, .8, .2, 0, 0]
P[ 8, 1, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, .2, .8, 0]
P[ 8, 2, :] = [ 0, 0, 0, 0, 0, 0, 0, .1, .8, .1, 0]
P[ 8, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, .1, .8, .1, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 9, 0, :] = [ 0, 0, 0, 0, 0, .1, 0, 0, .8, .1, 0]
P[ 9, 1, :] = [ 0, 0, 0, 0, 0, .1, 0, 0, 0, .1, .8]
P[ 9, 2, :] = [ 0, 0, 0, 0, 0, .8, 0, 0, .1, 0, .1]
P[ 9, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, .1, .8, .1]
# 0 1 2 3 4 5 6 7 8 9 10
P[10, 0, :] = [ 0, 0, 0, 0, 0, 0, .1, 0, 0, .8, .1]
P[10, 1, :] = [ 0, 0, 0, 0, 0, 0, .1, 0, 0, 0, .9]
P[10, 2, :] = [ 0, 0, 0, 0, 0, 0, .8, 0, 0, .1, .1]
P[10, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, .1, .9]
# rewards
R = -0.02 * np.ones((N_STATES, N_ACTIONS))
R[3,:] = 1.
R[6,:] = -1.
# discount factor
gamma = 0.99
# policy
if 0:
# bad policy
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,0,1]
policy[5,:] = [0,1,0,0]
policy[6,:] = [0,1,0,0]
policy[7,:] = [0,1,0,0]
policy[8,:] = [0,1,0,0]
policy[9,:] = [0,0,1,0]
policy[10,:] = [0,0,1,0]
elif 0:
# random policy
policy = 0.25*np.ones((N_STATES, N_ACTIONS))
elif 0:
# optimal policy
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,1,0]
policy[5,:] = [0,0,1,0]
policy[6,:] = [0,0,1,0]
policy[7,:] = [0,0,1,0]
policy[8,:] = [1,0,0,0]
policy[9,:] = [1,0,0,0]
policy[10,:] = [1,0,0,0]
elif 1:
# optimal policy + noise
# we use optimal policy with probability 1/(1+ep)
# we use random policy with probability ep/(1+ep)
ep = 0.1
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,1,0]
policy[5,:] = [0,0,1,0]
policy[6,:] = [0,0,1,0]
policy[7,:] = [0,0,1,0]
policy[8,:] = [1,0,0,0]
policy[9,:] = [1,0,0,0]
policy[10,:] = [1,0,0,0]
policy = policy + (ep/4)*np.ones((N_STATES, N_ACTIONS))
policy = policy / np.sum(policy, axis=1).reshape((N_STATES,1))
# First-Visit Monte-Carlo Policy Evaluation
# n_visits records number of visits for each state
# cum_gains records cumulative gains, i.e., sum of gains for each state
# where
# gain = reward + gamma * next_reward + gamma^2 * ...
n_visits = np.zeros(N_STATES)
cum_gains = np.zeros(N_STATES)
for _ in range(epoch):
if testmode == 1 :
print ("안녕")
print (_,"번 째")
# simulation_history records visited states including the terminal states 3 and 6
# reward_history records occured rewards including final rewards 1. and -1.
simulation_history = []
reward_history = []
# indicate game is not over yet
done = False
# choose initial state randomly, not from 3 or 6
s = np.random.choice([0, 1, 2, 4, 5, 7, 8, 9, 10])
NUM = 0
while not done:
if testmode == 1 :
print (NUM)
# choose action using current policy
a = np.random.choice(actions, p=policy[s, :])
simulation_history.append(s)
reward_history.append(R[s,a])
if testmode == 1 :
print ("si_hi : ", simulation_history)
print ("re_hi : ", reward_history)
# choose next state using transition probabilities
s1 = np.random.choice(states, p=P[s, a, :])
if s1 == 3:
# if game is over,
# ready to break while loop by letting done = True
# append end result to simulation_history
done = True
simulation_history.append(s1)
reward_history.append(R[s1,0])
elif s1 == 6:
# if game is over,
# ready to break while loop by letting done = True
# append end result to simulation_history
done = True
simulation_history.append(s1)
reward_history.append(R[s1,0])
else:
# if game is not over, continue playing game
s = s1
NUM += 1
# reward_history records occured rewards including final rewards 1 and -1
simulation_history = np.array(simulation_history)
reward_history = np.array(reward_history)
n = len(reward_history)
# gain_history records occured gains
gain_history = deepcopy(reward_history)
if testmode == 1 :
print ("gain_pre : ", gain_history)
for i, reward in enumerate(reward_history[:-1][::-1]):
gain_history[n-i-2] = reward + gamma * gain_history[n-i-2+1]
if testmode == 1 :
print ("i:",i,"reward",reward)
print (gain_history)
if testmode == 1 :
print ("gain_post :", gain_history)
check = [-1 for i in range(N_STATES)]
for i in range(N_STATES):
        first_visit = -1
        for j in range(n):
            if simulation_history[j] == i and first_visit == -1:
                first_visit = j
        check[i] = first_visit
if testmode == 1 :
print ("check :", check)
# update n_visits and cum_gains
for i in range(N_STATES):
if check[i] != -1:
n_visits[i] += 1
cum_gains[i] += gain_history[check[i]]
if testmode == 1 :
print ("n_vi : ", n_visits)
print ("cum_ga : ", cum_gains)
V = cum_gains / (n_visits + 1.0e-8)
print(V)
###Output
[ 0.83452835 0.87895072 0.9196893 1. 0.79492031 0.65281199
-1. 0.75337609 0.71636092 0.67149823 0.41075439]
###Markdown
Q
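The cell below applies the same first-visit construction to state-action values: the return from the first visit to a state in an episode is credited to the action taken there, so `n_visits` and `cum_gains` become (state, action) tables and \begin{equation*} Q(s,a) \approx \frac{1}{N(s,a)} \sum_{i=1}^{N(s,a)} G^{(i)} \end{equation*} with $N(s,a)$ counting the episodes whose first visit to $s$ used action $a$.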
###Code
# test
testmode = 0
import numpy as np
from copy import deepcopy
# set parameters ###############################################################
epoch = 50000
# set parameters ###############################################################
# state
states = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
N_STATES = len(states)
# action
actions = [0, 1, 2, 3]
N_ACTIONS = len(actions)
# transition probabilities
P = np.empty((N_STATES, N_ACTIONS, N_STATES))
# 0 1 2 3 4 5 6 7 8 9 10
P[ 0, 0, :] = [ .9, 0, 0, 0, .1, 0, 0, 0, 0, 0, 0]
P[ 0, 1, :] = [ .1, .8, 0, 0, .1, 0, 0, 0, 0, 0, 0]
P[ 0, 2, :] = [ .9, .1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 0, 3, :] = [ .1, .1, 0, 0, .8, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 1, 0, :] = [ .8, .2, 0, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 1, :] = [ 0, .2, .8, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 2, :] = [ .1, .8, .1, 0, 0, 0, 0, 0, 0, 0, 0]
P[ 1, 3, :] = [ .1, .8, .1, 0, 0, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 2, 0, :] = [ 0, .8, .1, 0, 0, .1, 0, 0, 0, 0, 0]
P[ 2, 1, :] = [ 0, 0, .1, .8, 0, .1, 0, 0, 0, 0, 0]
P[ 2, 2, :] = [ 0, .1, .8, .1, 0, 0, 0, 0, 0, 0, 0]
P[ 2, 3, :] = [ 0, .1, 0, .1, 0, .8, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 3, 0, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 1, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 2, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
P[ 3, 3, :] = [ 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 4, 0, :] = [ .1, 0, 0, 0, .8, 0, 0, .1, 0, 0, 0]
P[ 4, 1, :] = [ .1, 0, 0, 0, .8, 0, 0, .1, 0, 0, 0]
P[ 4, 2, :] = [ .8, 0, 0, 0, .2, 0, 0, 0, 0, 0, 0]
P[ 4, 3, :] = [ 0, 0, 0, 0, .2, 0, 0, .8, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 5, 0, :] = [ 0, 0, .1, 0, 0, .8, 0, 0, 0, .1, 0]
P[ 5, 1, :] = [ 0, 0, .1, 0, 0, 0, .8, 0, 0, .1, 0]
P[ 5, 2, :] = [ 0, 0, .8, 0, 0, .1, .1, 0, 0, 0, 0]
P[ 5, 3, :] = [ 0, 0, 0, 0, 0, .1, .1, 0, 0, .8, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 6, 0, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 1, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 2, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
P[ 6, 3, :] = [ 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 7, 0, :] = [ 0, 0, 0, 0, .1, 0, 0, .9, 0, 0, 0]
P[ 7, 1, :] = [ 0, 0, 0, 0, .1, 0, 0, .1, .8, 0, 0]
P[ 7, 2, :] = [ 0, 0, 0, 0, .8, 0, 0, .1, .1, 0, 0]
P[ 7, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, .9, .1, 0, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 8, 0, :] = [ 0, 0, 0, 0, 0, 0, 0, .8, .2, 0, 0]
P[ 8, 1, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, .2, .8, 0]
P[ 8, 2, :] = [ 0, 0, 0, 0, 0, 0, 0, .1, .8, .1, 0]
P[ 8, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, .1, .8, .1, 0]
# 0 1 2 3 4 5 6 7 8 9 10
P[ 9, 0, :] = [ 0, 0, 0, 0, 0, .1, 0, 0, .8, .1, 0]
P[ 9, 1, :] = [ 0, 0, 0, 0, 0, .1, 0, 0, 0, .1, .8]
P[ 9, 2, :] = [ 0, 0, 0, 0, 0, .8, 0, 0, .1, 0, .1]
P[ 9, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, .1, .8, .1]
# 0 1 2 3 4 5 6 7 8 9 10
P[10, 0, :] = [ 0, 0, 0, 0, 0, 0, .1, 0, 0, .8, .1]
P[10, 1, :] = [ 0, 0, 0, 0, 0, 0, .1, 0, 0, 0, .9]
P[10, 2, :] = [ 0, 0, 0, 0, 0, 0, .8, 0, 0, .1, .1]
P[10, 3, :] = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, .1, .9]
# rewards
R = -0.02 * np.ones((N_STATES, N_ACTIONS))
R[3,:] = 1.
R[6,:] = -1.
# discount factor
gamma = 0.99
# policy
if 0:
# bad policy
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,0,1]
policy[5,:] = [0,1,0,0]
policy[6,:] = [0,1,0,0]
policy[7,:] = [0,1,0,0]
policy[8,:] = [0,1,0,0]
policy[9,:] = [0,0,1,0]
policy[10,:] = [0,0,1,0]
elif 0:
# random policy
policy = 0.25*np.ones((N_STATES, N_ACTIONS))
elif 0:
# optimal policy
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,1,0]
policy[5,:] = [0,0,1,0]
policy[6,:] = [0,0,1,0]
policy[7,:] = [0,0,1,0]
policy[8,:] = [1,0,0,0]
policy[9,:] = [1,0,0,0]
policy[10,:] = [1,0,0,0]
elif 1:
# optimal policy + noise
# we use optimal policy with probability 1/(1+ep)
# we use random policy with probability ep/(1+ep)
ep = 0.1
policy = np.empty((N_STATES, N_ACTIONS))
policy[0,:] = [0,1,0,0]
policy[1,:] = [0,1,0,0]
policy[2,:] = [0,1,0,0]
policy[3,:] = [0,1,0,0]
policy[4,:] = [0,0,1,0]
policy[5,:] = [0,0,1,0]
policy[6,:] = [0,0,1,0]
policy[7,:] = [0,0,1,0]
policy[8,:] = [1,0,0,0]
policy[9,:] = [1,0,0,0]
policy[10,:] = [1,0,0,0]
policy = policy + (ep/4)*np.ones((N_STATES, N_ACTIONS))
policy = policy / np.sum(policy, axis=1).reshape((N_STATES,1))
# First-Visit Monte-Carlo Policy Evaluation
# n_visits records number of visits for each state and action
# cum_gains records cumulative gains, i.e., sum of gains for each state and action
# where
# gain = reward + gamma * next_reward + gamma^2 * ...
# Previously for V
# n_visits = np.zeros(N_STATES)
# cum_gains = np.zeros(N_STATES)
print ("policy")
print (policy)
n_visits = np.zeros((N_STATES, N_ACTIONS))
cum_gains = np.zeros((N_STATES, N_ACTIONS))
for _ in range(epoch):
if _ % 5000 == 0 :
print (_,"/",epoch)
if testmode == 1 :
print ("안녕")
print (_,"번 째")
# simulation_history records visited states and actions including the terminal states 3 and 6
# reward_history records occured rewards including final rewards 1 and -1
simulation_history = []
reward_history = []
# indicate game is not over yet
done = False
# choose initial state randomly, not from 3 or 6
s = np.random.choice([0, 1, 2, 4, 5, 7, 8, 9, 10])
NUM = 0
while not done:
# choose action using current policy
a = np.random.choice(actions, p=policy[s, :])
# Previously for V
# simulation_history.append(s)
simulation_history.append((s,a))
reward_history.append(R[s,a])
if testmode == 1 :
print ("si_hi : ", simulation_history)
print ("re_hi : ", reward_history)
# choose next state using transition probabilities
s1 = np.random.choice(states, p=P[s, a, :])
if s1 == 3:
# if game is over,
# ready to break while loop by letting done = True
# append end result to simulation_history
done = True
# Previously for V
# simulation_history.append(s1)
simulation_history.append((s1,0))
reward_history.append(R[s1,0])
elif s1 == 6:
# if game is over,
# ready to break while loop by letting done = True
# append end result to simulation_history
done = True
# Previously for V
# simulation_history.append(s1)
simulation_history.append((s1,0))
reward_history.append(R[s1,0])
else:
# if game is not over, continue playing game
s = s1
NUM += 1
# reward_history records occured rewards including final rewards 1 and -1
reward_history = np.array(reward_history)
n = len(reward_history)
# gain_history records occured gains
gain_history = deepcopy(reward_history)
for i, reward in enumerate(reward_history[:-1][::-1]):
gain_history[n-i-2] = reward + gamma * gain_history[n-i-2+1]
# update n_visits and cum_gains
# Previously for V
# for i in range(N_STATES):
# n_visits[i] += np.sum(simulation_history==i)
# cum_gains[i] += np.sum(gain_history[simulation_history==i])
if testmode == 1 :
print ("simu_hi : ", simulation_history)
print ("gain_hi : ", gain_history)
check = [-1 for i in range(N_STATES)]
for i in range(N_STATES):
        first_visit = -1
        for j in range(n):
            if simulation_history[j][0] == i and first_visit == -1:
                first_visit = j
        check[i] = first_visit
if testmode == 1 :
print ("check :", check)
for i, (s, a) in enumerate(simulation_history):
if check[s]==i :
n_visits[s, a] += 1.
cum_gains[s, a] += gain_history[i]
if testmode == 1 :
print ("n_vi : ", n_visits)
print ("cum_ga : ", cum_gains)
Q = cum_gains / (n_visits + 1.0e-8)
print("Q", Q)
###Output
policy
[[0.02272727 0.93181818 0.02272727 0.02272727]
[0.02272727 0.93181818 0.02272727 0.02272727]
[0.02272727 0.93181818 0.02272727 0.02272727]
[0.02272727 0.93181818 0.02272727 0.02272727]
[0.02272727 0.02272727 0.93181818 0.02272727]
[0.02272727 0.02272727 0.93181818 0.02272727]
[0.02272727 0.02272727 0.93181818 0.02272727]
[0.02272727 0.02272727 0.93181818 0.02272727]
[0.93181818 0.02272727 0.02272727 0.02272727]
[0.93181818 0.02272727 0.02272727 0.02272727]
[0.93181818 0.02272727 0.02272727 0.02272727]]
0 / 50000
5000 / 50000
10000 / 50000
15000 / 50000
20000 / 50000
25000 / 50000
30000 / 50000
35000 / 50000
40000 / 50000
45000 / 50000
Q [[ 0.8124048 0.83628846 0.81700873 0.78564795]
[ 0.82047735 0.88157714 0.86164418 0.84349861]
[ 0.8303344 0.92612563 0.89593137 0.70326968]
[ 1. 0. 0. 0. ]
[ 0.75992865 0.76643904 0.79851278 0.73531867]
[ 0.68300958 -0.65478403 0.66757181 0.51155294]
[-1. 0. 0. 0. ]
[ 0.74724522 0.71254001 0.75538028 0.6882971 ]
[ 0.71687345 0.65100374 0.69550857 0.69648988]
[ 0.67336413 0.49608847 0.60233782 0.61925892]
[ 0.44248267 0.20683606 -0.68777782 0.40559323]]
|
notebooks/global_mutations.ipynb | ###Markdown
TODO* `gb_accession` and `gisaid_accession` are not found for new sequences, how do we concat to `metadata.csv` without them?* metadata format for NCBI* support tools for manual sanity checks
###Code
from bjorn import *
from bjorn_support import *
from onion_trees import *
import gffutils
import math
from mutations import *
input_fasta = "/home/al/analysis/mutations/S501Y/msa_reference.fa"
meta_fp = "/home/al/analysis/mutations/S501Y/metadata_2020-12-20_12-24.tsv"
out_dir = "/home/al/analysis/mutations/S501Y/"
ref_fp = "/home/al/data/test_inputs/NC045512.fasta"
patient_zero = 'NC_045512.2'
## keep only seqs contained in meta_file and save to fasta file
## concat with internal SD file
## generate MSA
meta = pd.read_csv(meta_fp, sep='\t')
meta.columns
# consensus_data = SeqIO.to_dict(SeqIO.parse(seqs_fp, "fasta"))
strains = meta['strain'].unique().tolist()
len(strains)
print(f"Loading Alignment file at: {input_fasta}")
cns = AlignIO.read(input_fasta, 'fasta')
print(f"Initial cleaning...")
seqs, ref_seq = process_cns_seqs(cns, patient_zero,
start_pos=0, end_pos=30000)
print(f"Creating a dataframe...")
seqsdf = (pd.DataFrame(index=seqs.keys(),
data=seqs.values(),
columns=['sequence'])
.reset_index()
.rename(columns={'index': 'idx'}))
def find_replacements(x, ref):
return [f'{i}:{n}' for i, n in enumerate(x)
if n!=ref[i] and n!='-' and n!='n']
print(f"Identifying mutations...")
# for each sample, identify list of substitutions (position:alt)
seqsdf['replacements'] = seqsdf['sequence'].apply(find_replacements, args=(ref_seq,))
# wide-to-long data manipulation
seqsdf = seqsdf.explode('replacements')
# seqsdf
seqsdf['pos'] = -1
# populate position column
seqsdf.loc[~seqsdf['replacements'].isna(), 'pos'] = (seqsdf.loc[~seqsdf['replacements'].isna(), 'replacements']
.apply(lambda x: int(x.split(':')[0])))
# filter out non-substitutions
seqsdf = seqsdf.loc[seqsdf['pos']!=-1]
print(f"Mapping Genes to mutations...")
# identify gene of each substitution
seqsdf['gene'] = seqsdf['pos'].apply(map_gene_to_pos)
seqsdf = seqsdf.loc[~seqsdf['gene'].isna()]
# seqsdf
# filter out substitutions in non-gene positions
seqsdf = seqsdf.loc[seqsdf['gene']!='nan']
print(f"Compute codon numbers...")
# compute codon number of each substitution
seqsdf['codon_num'] = seqsdf.apply(compute_codon_num, args=(GENE2POS,), axis=1)
print(f"Fetch reference codon...")
# fetch the reference codon for each substitution
seqsdf['ref_codon'] = seqsdf.apply(get_ref_codon, args=(ref_seq, GENE2POS), axis=1)
print(f"Fetch alternative codon...")
# fetch the alternative codon for each substitution
seqsdf['alt_codon'] = seqsdf.apply(get_alt_codon, args=(GENE2POS,), axis=1)
print(f"Map amino acids...")
# fetch the reference and alternative amino acids
seqsdf['ref_aa'] = seqsdf['ref_codon'].apply(get_aa)
seqsdf['alt_aa'] = seqsdf['alt_codon'].apply(get_aa)
# filter out substitutions with non-amino acid alternates (bad consensus calls)
seqsdf = seqsdf.loc[seqsdf['alt_aa']!='nan']
print(f"Fuse with metadata...")
# load and join metadata
meta = pd.read_csv(meta_fp, sep='\t')
seqsdf = pd.merge(seqsdf, meta, left_on='idx', right_on='strain')
seqsdf['date'] = pd.to_datetime(seqsdf['date_submitted'])
seqsdf['month'] = seqsdf['date'].dt.month
seqsdf.columns
seqsdf.loc[seqsdf['location'].isna(), 'location'] = 'unk'
out_dir = Path('/home/al/analysis/mutations/gisaid')
seqsdf.drop(columns=['sequence']).to_csv(out_dir/'gisaid_replacements_19-12-2020.csv', index=False)
seqsdf[['idx', 'sequence']].to_csv(out_dir/'gisaid_sequences_19-12-2020.csv', index=False)
seqsdf = pd.read_csv('/home/al/analysis/mutations/gisaid/gisaid_replacements_19-12-2020.csv')
seqsdf = seqsdf[seqsdf['host']=='Human']
print(f"Aggregate final results...")
# aggregate on each substitutions, compute number of samples and other attributes
subs = (seqsdf.groupby(['gene', 'pos', 'ref_aa', 'codon_num', 'alt_aa'])
.agg(
num_samples=('idx', 'nunique'),
first_detected=('date', 'min'),
last_detected=('date', 'max'),
num_locations=('location', 'nunique'),
location_counts=('location', lambda x: np.unique(x, return_counts=True)),
num_divisions=('division', 'nunique'),
division_counts=('division', lambda x: np.unique(x, return_counts=True)),
num_countries=('country', 'nunique'),
country_counts=('country', lambda x: np.unique(x, return_counts=True))
)
.reset_index())
# 1-based nucleotide position coordinate system
subs['pos'] = subs['pos'] + 1
# subs.sort_values('num_samples', ascending=False).iloc[0]['country_counts']
subs['locations'] = subs['location_counts'].apply(lambda x: list(x[0]))
subs['location_counts'] = subs['location_counts'].apply(lambda x: list(x[1]))
subs['divisions'] = subs['division_counts'].apply(lambda x: list(x[0]))
subs['division_counts'] = subs['division_counts'].apply(lambda x: list(x[1]))
subs['countries'] = subs['country_counts'].apply(lambda x: list(x[0]))
subs['country_counts'] = subs['country_counts'].apply(lambda x: list(x[1]))
print(f"Aggregate final results...")
# aggregate on each substitutions, compute number of samples and other attributes
subs_mnth = (seqsdf.groupby(['month', 'gene', 'pos', 'ref_aa', 'codon_num', 'alt_aa'])
.agg(
num_samples=('idx', 'nunique'),
first_detected_mnth=('date', 'min'),
last_detected_mnth=('date', 'max'),
num_locations=('location', 'nunique'),
# locations=('location', lambda x: list(np.unique(x))),
location_counts=('location', lambda x: np.unique(x, return_counts=True)),
num_divisions=('division', 'nunique'),
division_counts=('division', lambda x: np.unique(x, return_counts=True)),
num_countries=('country', 'nunique'),
# countries=('country', lambda x: list(np.unique(x))),
country_counts=('country', lambda x: np.unique(x, return_counts=True)),
)
.reset_index())
# 1-based nucleotide position coordinate system
subs_mnth['pos'] = subs_mnth['pos'] + 1
subs_mnth = pd.merge(subs_mnth, subs[['gene', 'pos', 'alt_aa', 'first_detected', 'last_detected']], on=['gene', 'pos', 'alt_aa'])
out_dir = Path('/home/al/analysis/mutations/gisaid')
subs.to_csv(out_dir/'gisaid_substitutions_aggregated_19-12-2020.csv', index=False)
top_s_mnthly = (subs_mnth[subs_mnth['gene']=='S'].sort_values('num_samples', ascending=False)
.drop_duplicates(subset=['gene', 'codon_num', 'alt_aa'])
.iloc[:50]
.reset_index(drop=True))
muts_of_interest = []
for i, mutation in top_s_mnthly.iterrows():
locs = mutation['location_counts'][0]
for l in locs:
if 'san diego' in l.lower():
muts_of_interest.append(i)
muts_of_interest
def is_in(x, loc):
for i in x[0]:
if loc in i.lower():
return True
return False
top_s_mnthly['isin_SD'] = top_s_mnthly['location_counts'].apply(is_in, args=('san diego',))
top_s_mnthly['isin_CA'] = top_s_mnthly['division_counts'].apply(is_in, args=('california',))
top_s_mnthly['isin_US'] = top_s_mnthly['country_counts'].apply(is_in, args=('usa',))
top_s_mnthly.to_csv("/home/al/analysis/mutations/gisaid/top_S_mutations_monthly.csv", index=False)
###Output
_____no_output_____
###Markdown
Integrate GISAID information with ALab variants table
###Code
out_dir = Path('/home/al/analysis/mutations/gisaid')
subs = pd.read_csv(out_dir/'gisaid_substitutions_aggregated_19-12-2020.csv')
gisaid_subs = (subs.rename(columns={'num_samples': 'gisaid_num_samples', 'first_detected': 'gisaid_1st_detected', 'last_detected': 'gisaid_last_detected',
'num_locations': 'gisaid_num_locations', 'locations': 'gisaid_locations', 'location_counts': 'gisaid_location_counts',
'num_divisions': 'gisaid_num_states','divisions': 'gisaid_states', 'division_counts': 'gisaid_state_counts',
'num_countries': 'gisaid_num_countries', 'countries': 'gisaid_countries', 'country_counts': 'gisaid_country_counts'})
.drop(columns=['ref_aa', 'pos']))
gisaid_subs.columns
# gisaid_subs.sort_values('gisaid_num_samples', ascending=False).iloc[0]['gisaid_country_counts']
our_subs = pd.read_csv("/home/al/analysis/mutations/alab_git/substitutions_22-12-2020_orig.csv")
our_subs.shape
all_subs = pd.merge(our_subs, gisaid_subs, on=['gene', 'codon_num', 'alt_aa'], how='left').drop_duplicates(subset=['gene', 'codon_num', 'alt_aa'])
all_subs.columns
all_subs.sort_values('num_samples', ascending=False)
subs.loc[(subs['gene']=='S')&(subs['alt_aa']=='L')&(subs['codon_num']==957)]
cols = ['month', 'ref_aa', 'codon_num', 'alt_aa', 'first_detected',
'last_detected', 'num_samples', 'num_countries',
'countries', 'country_counts', 'num_locations', 'locations', 'location_counts' ,
'first_detected_mnth', 'last_detected_mnth']
# (subs_mnth[(subs_mnth['gene']=='S') & (subs_mnth['month']==12)]
# .sort_values('num_samples', ascending=False)
# .drop_duplicates(subset=['codon_num', 'alt_aa'], keep='first')
# .iloc[:50]
# .reset_index(drop=True))[cols]
# keys_df = seqsdf[['idx', 'sequence']]
# keys_df.to_csv('gisaid_replacements.csv', index=False)
sd = []
for d in seqsdf['location'].dropna().unique():
if 'san diego' in d.lower():
sd.append(d)
ca = []
for d in seqsdf['division'].unique():
if 'cali' in d.lower():
ca.append(d)
# cols = ['idx', 'location', 'division', 'pos']
# seqsdf.loc[(seqsdf['codon_num']==681) & (seqsdf['gene']=='S')][cols]
###Output
_____no_output_____
###Markdown
Deletions
###Code
input_fasta = "/home/al/analysis/mutations/S501Y/msa_reference.fa"
meta_fp = "/home/al/analysis/mutations/S501Y/metadata_2020-12-20_12-24.tsv"
out_dir = "/home/al/analysis/mutations/S501Y/"
ref_fp = "/home/al/data/test_inputs/NC045512.fasta"
patient_zero = 'NC_045512.2'
min_del_len = 1
start_pos = 265
end_pos = 29674
# read MSA file
consensus_data = AlignIO.read(input_fasta, 'fasta')
# process MSA to remove insertions and fix position coordinate systems
seqs, ref_seq = process_cns_seqs(consensus_data, patient_zero, start_pos=start_pos, end_pos=end_pos)
# load into dataframe
seqsdf = (pd.DataFrame(index=seqs.keys(), data=seqs.values(),
columns=['sequence'])
.reset_index().rename(columns={'index': 'idx'}))
# load and join metadata
meta = pd.read_csv(meta_fp, sep='\t')
print(seqsdf.shape)
seqsdf = pd.merge(seqsdf, meta, left_on='idx', right_on='strain')
print(seqsdf.shape)
# # clean and process sample collection dates
# seqsdf = seqsdf.loc[(seqsdf['collection_date']!='Unknown')
# & (seqsdf['collection_date']!='1900-01-00')]
# seqsdf.loc[seqsdf['collection_date'].str.contains('/'), 'collection_date'] = seqsdf['collection_date'].apply(lambda x: x.split('/')[0])
seqsdf['date'] = pd.to_datetime(seqsdf['date_submitted'])
# compute length of each sequence
seqsdf['seq_len'] = seqsdf['sequence'].str.len()
# identify deletion positions
seqsdf['del_positions'] = seqsdf['sequence'].apply(find_deletions)
seqsdf.columns
seqsdf = seqsdf[seqsdf['host']=='Human']
# sequences with one or more deletions
del_seqs = seqsdf.loc[seqsdf['del_positions'].str.len() > 0]
del_seqs = del_seqs.explode('del_positions')
# compute length of each deletion
del_seqs['del_len'] = del_seqs['del_positions'].apply(len)
# only consider deletions longer than 2nts
del_seqs = del_seqs[del_seqs['del_len'] >= min_del_len]
# fetch coordinates of each deletion
del_seqs['relative_coords'] = del_seqs['del_positions'].apply(get_indel_coords)
del_seqs.loc[del_seqs['location'].isna(), 'location'] = 'unk'
# group sample by the deletion they share
del_seqs = (del_seqs.groupby(['relative_coords', 'del_len'])
.agg(
samples=('idx', 'unique'),
num_samples=('idx', 'nunique'),
first_detected=('date', 'min'),
last_detected=('date', 'max'),
# locations=('location', lambda x: list(np.unique(x))),
location_counts=('location', lambda x: np.unique(x, return_counts=True)),
# divisions=('division', lambda x: list(np.unique(x))),
division_counts=('division', lambda x: np.unique(x, return_counts=True)),
# countries=('country', lambda x: list(np.unique(x))),
country_counts=('country', lambda x: np.unique(x, return_counts=True)),
)
.reset_index()
.sort_values('num_samples'))
del_seqs['type'] = 'deletion'
# adjust coordinates to account for the nts trimmed from beginning e.g. 265nts
del_seqs['absolute_coords'] = del_seqs['relative_coords'].apply(adjust_coords, args=(start_pos+1,))
del_seqs['pos'] = del_seqs['absolute_coords'].apply(lambda x: int(x.split(':')[0]))
# approximate the gene where each deletion was identified
del_seqs['gene'] = del_seqs['pos'].apply(map_gene_to_pos)
del_seqs = del_seqs.loc[~del_seqs['gene'].isna()]
# filter out substitutions in non-gene positions
del_seqs = del_seqs.loc[del_seqs['gene']!='nan']
# compute codon number of each substitution
del_seqs['codon_num'] = del_seqs.apply(compute_codon_num, args=(GENE2POS,), axis=1)
# fetch the reference codon for each substitution
del_seqs['ref_codon'] = del_seqs.apply(get_ref_codon, args=(ref_seq, GENE2POS), axis=1)
# fetch the reference and alternative amino acids
del_seqs['ref_aa'] = del_seqs['ref_codon'].apply(get_aa)
# record the 5 nts before each deletion (based on reference seq)
del_seqs['prev_5nts'] = del_seqs['absolute_coords'].apply(lambda x: ref_seq[int(x.split(':')[0])-5:int(x.split(':')[0])])
# record the 5 nts after each deletion (based on reference seq)
del_seqs['next_5nts'] = del_seqs['absolute_coords'].apply(lambda x: ref_seq[int(x.split(':')[1])+1:int(x.split(':')[1])+6])
del_seqs['locations'] = del_seqs['location_counts'].apply(lambda x: list(x[0]))
del_seqs['location_counts'] = del_seqs['location_counts'].apply(lambda x: list(x[1]))
del_seqs['divisions'] = del_seqs['division_counts'].apply(lambda x: list(x[0]))
del_seqs['division_counts'] = del_seqs['division_counts'].apply(lambda x: list(x[1]))
del_seqs['countries'] = del_seqs['country_counts'].apply(lambda x: list(x[0]))
del_seqs['country_counts'] = del_seqs['country_counts'].apply(lambda x: list(x[1]))
del_seqs.sort_values('num_samples', ascending=False)
del_seqs.to_csv('/home/al/analysis/mutations/gisaid/gisaid_deletions_aggregated_19-12-2020.csv', index=False)
del_seqs.columns
gisaid_dels = (del_seqs.rename(columns={'num_samples': 'gisaid_num_samples', 'first_detected': 'gisaid_1st_detected', 'last_detected': 'gisaid_last_detected',
'locations': 'gisaid_locations', 'location_counts': 'gisaid_location_counts',
'divisions': 'gisaid_states', 'division_counts': 'gisaid_state_counts',
'countries': 'gisaid_countries', 'country_counts': 'gisaid_country_counts'})
.drop(columns=['ref_aa', 'pos', 'type', 'samples', 'ref_codon', 'prev_5nts', 'next_5nts', 'relative_coords', 'del_len']))
our_dels = pd.read_csv("/home/al/analysis/mutations/alab_git/deletions_22-12-2020_orig.csv")
# our_dels
cols = ['type', 'gene', 'absolute_coords', 'del_len', 'pos',
'ref_aa', 'codon_num', 'num_samples',
'first_detected', 'last_detected', 'locations',
'location_counts', 'gisaid_num_samples',
'gisaid_1st_detected', 'gisaid_last_detected', 'gisaid_countries', 'gisaid_country_counts',
'gisaid_states', 'gisaid_state_counts', 'gisaid_locations', 'gisaid_location_counts', 'samples',
'ref_codon', 'prev_5nts', 'next_5nts'
]
our_dels = pd.merge(our_dels, gisaid_dels, on=['gene', 'codon_num', 'absolute_coords'], how='left')
our_dels[cols]
our_dels[cols].sort_values('num_samples', ascending=False).to_csv("/home/al/analysis/mutations/alab_git/deletions_22-12-2020.csv", index=False)
align_fasta_reference(seqs_fp, num_cpus=25, ref_fp=ref_fp)
###Output
_____no_output_____
###Markdown
CNS Mutations Report
###Code
analysis_folder = Path('/home/al/code/HCoV-19-Genomics/consensus_sequences/')
meta_fp = Path('/home/al/code/HCoV-19-Genomics/metadata.csv')
ref_path = Path('/home/gk/code/hCoV19/db/NC045512.fasta')
patient_zero = 'NC_045512.2'
in_fp = '/home/al/analysis/mutations/S501Y/msa_aligned.fa'
subs = identify_replacements(in_fp, meta_fp)
subs.head()
dels = identify_deletions(in_fp, meta_fp, patient_zero)
dels
dels[dels['gene']=='S'].sort_values('num_samples', ascending=False)#.to_csv('S_deletions_consensus.csv', index=False)
identify_insertions(in_fp, patient_zero).to_csv('test.csv', index=False)
###Output
_____no_output_____
###Markdown
dev
###Code
GENE2POS = {
'5UTR': {'start': 0, 'end': 265},
'ORF1ab': {'start': 265, 'end': 21555},
'S': {'start': 21562, 'end': 25384},
'ORF3a': {'start': 25392, 'end': 26220},
'E': {'start': 26244, 'end': 26472},
'M': {'start': 26522, 'end': 27191},
'ORF6': {'start': 27201, 'end': 27387},
'ORF7a': {'start': 27393, 'end': 27759},
'ORF7b': {'start': 27755, 'end': 27887},
'ORF8': {'start': 27893, 'end': 28259},
'N': {'start': 28273, 'end': 29533},
'ORF10': {'start': 29557, 'end': 29674},
'3UTR': {'start': 29674, 'end': 29902}
}
in_dir = '/home/al/analysis/mutations/fa/'
out_dir = '/home/al/analysis/mutations/msa/'
!rm -r /home/al/analysis/mutations
!mkdir /home/al/analysis/mutations
!mkdir /home/al/analysis/mutations/fa
for filename in analysis_folder.listdir():
if (filename.endswith('fa') or filename.endswith('fasta')):
copy(filename, '/home/al/analysis/mutations/fa/')
# print(filename)
copy(ref_path, in_dir)
in_dir = '/home/al/analysis/mutations/fa/'
out_dir = '/home/al/analysis/mutations/msa'
concat_fasta(in_dir, out_dir)
align_fasta_reference('/home/al/analysis/mutations/msa.fa', num_cpus=12, ref_fp=ref_path)
cns = AlignIO.read('/home/al/analysis/mutations/msa_aligned.fa', 'fasta')
ref_seq = get_seq(cns, patient_zero)
len(ref_seq)
seqs = get_seqs(cns, 0, 30000)
seqsdf = (pd.DataFrame(index=seqs.keys(), data=seqs.values(), columns=['sequence'])
.reset_index().rename(columns={'index': 'idx'}))
# seqsdf
def find_replacements(x, ref):
return [f'{i}:{n}' for i, n in enumerate(x)
if n!=ref[i] and n!='-' and n!='n']
seqsdf['replacements'] = seqsdf['sequence'].apply(find_replacements, args=(ref_seq,))
seqsdf = seqsdf.explode('replacements')
seqsdf['pos'] = -1
seqsdf.loc[~seqsdf['replacements'].isna(), 'pos'] = seqsdf.loc[~seqsdf['replacements'].isna(), 'replacements'].apply(lambda x: int(x.split(':')[0]))
seqsdf = seqsdf.loc[seqsdf['pos']!=-1]
def compute_codon_num(x, gene2pos: dict):
pos = x['pos']
ref_pos = gene2pos[x['gene']]['start']
return math.ceil((pos - ref_pos + 1) / 3)
seqsdf['gene'] = seqsdf['pos'].apply(map_gene_to_pos)
seqsdf = seqsdf.loc[~seqsdf['gene'].isna()]
seqsdf = seqsdf.loc[seqsdf['gene']!='nan']
seqsdf['codon_num'] = seqsdf.apply(compute_codon_num, args=(GENE2POS,), axis=1)
def get_ref_codon(x, ref_seq, gene2pos: dict):
ref_pos = gene2pos[x['gene']]['start']
codon_start = ref_pos + ((x['codon_num'] - 1) * 3)
return ref_seq[codon_start: codon_start+3].upper()
seqsdf['ref_codon'] = seqsdf.apply(get_ref_codon, args=(ref_seq, GENE2POS), axis=1)
def get_alt_codon(x, gene2pos: dict):
ref_pos = gene2pos[x['gene']]['start']
codon_start = ref_pos + ((x['codon_num'] - 1) * 3)
return x['sequence'][codon_start: codon_start+3].upper()
seqsdf['alt_codon'] = seqsdf.apply(get_alt_codon, args=(GENE2POS,), axis=1)
def get_aa(codon: str):
CODON2AA = {
'ATA':'I', 'ATC':'I', 'ATT':'I', 'ATG':'M',
'ACA':'T', 'ACC':'T', 'ACG':'T', 'ACT':'T',
'AAC':'N', 'AAT':'N', 'AAA':'K', 'AAG':'K',
'AGC':'S', 'AGT':'S', 'AGA':'R', 'AGG':'R',
'CTA':'L', 'CTC':'L', 'CTG':'L', 'CTT':'L',
'CCA':'P', 'CCC':'P', 'CCG':'P', 'CCT':'P',
'CAC':'H', 'CAT':'H', 'CAA':'Q', 'CAG':'Q',
'CGA':'R', 'CGC':'R', 'CGG':'R', 'CGT':'R',
'GTA':'V', 'GTC':'V', 'GTG':'V', 'GTT':'V',
'GCA':'A', 'GCC':'A', 'GCG':'A', 'GCT':'A',
'GAC':'D', 'GAT':'D', 'GAA':'E', 'GAG':'E',
'GGA':'G', 'GGC':'G', 'GGG':'G', 'GGT':'G',
'TCA':'S', 'TCC':'S', 'TCG':'S', 'TCT':'S',
'TTC':'F', 'TTT':'F', 'TTA':'L', 'TTG':'L',
'TAC':'Y', 'TAT':'Y', 'TAA':'_', 'TAG':'_',
'TGC':'C', 'TGT':'C', 'TGA':'_', 'TGG':'W',
}
return CODON2AA.get(codon, 'nan')
seqsdf['ref_aa'] = seqsdf['ref_codon'].apply(get_aa)
seqsdf['alt_aa'] = seqsdf['alt_codon'].apply(get_aa)
seqsdf = seqsdf.loc[seqsdf['alt_aa']!='nan']
seqsdf.columns
meta = pd.read_csv(meta_fp)
print(seqsdf['idx'].unique().shape)
seqsdf = pd.merge(seqsdf, meta, left_on='idx', right_on='fasta_hdr')
print(seqsdf['idx'].unique().shape)
seqsdf = seqsdf.loc[(seqsdf['collection_date']!='Unknown')
& (seqsdf['collection_date']!='1900-01-00')]
seqsdf.loc[seqsdf['collection_date'].str.contains('/'), 'collection_date'] = seqsdf['collection_date'].apply(lambda x: x.split('/')[0])
seqsdf['date'] = pd.to_datetime(seqsdf['collection_date'])
seqsdf['date'].min()
# (seqsdf.groupby(['gene', 'ref_aa', 'codon_num', 'alt_aa'])
# .agg(
# num_samples=('ID', 'nunique')))
def uniq_locs(x):
return np.unique(x)
def loc_counts(x):
_, counts = np.unique(x, return_counts=True)
return counts
subs = (seqsdf.groupby(['gene', 'pos', 'ref_aa', 'codon_num', 'alt_aa'])
.agg(
num_samples=('ID', 'nunique'),
first_detected=('date', 'min'),
last_detected=('date', 'max'),
locations=('location', uniq_locs),
location_counts=('location', loc_counts),
samples=('ID', 'unique')
)
.reset_index())
subs['pos'] = subs['pos'] + 1
(subs[subs['gene']=='S'].sort_values('num_samples', ascending=False)
.to_csv('S_mutations_consensus.csv', index=False))
###Output
_____no_output_____
###Markdown
Consolidate metadata ID and fasta headers
###Code
def fix_header(x):
if 'Consensus' in x:
return x.split('_')[1]
else:
return x.split('/')[2]
seqsdf['n_ID'] = seqsdf['idx'].apply(fix_header)
seqsdf['n_ID'] = seqsdf['n_ID'].str.replace('ALSR', 'SEARCH')
meta = pd.read_csv(meta_fp)
meta['n_ID'] = meta['ID'].apply(lambda x: '-'.join(x.split('-')[:2]))
seqsdf['n_ID'] = seqsdf['n_ID'].apply(lambda x: '-'.join(x.split('-')[:2]))
tmp = pd.merge(seqsdf, meta, on='n_ID')
# tmp[tmp['ID'].str.contains('2112')]
# seqsdf
set(meta['n_ID'].unique()) - set(tmp['n_ID'].unique())
seqsdf['idx'].unique().shape
meta['ID'].unique().shape
s = seqsdf[['n_ID', 'idx']].drop_duplicates()
new_meta = pd.merge(meta, s, on='n_ID', how='left')
(new_meta.drop(columns=['n_ID'])
.rename(columns={'idx': 'fasta_hdr'})
.to_csv('metadata.csv', index=False))
new_meta.shape
new_meta
len(ref_seq)
###Output
_____no_output_____ |
docs/ml/image/Train.ipynb | ###Markdown
Local TrainLocal training is faster and more interactive, but it cannot handle too much training data. Copy a subset of the data.
###Code
# Some code to determine a unique bucket name for the purposes of the sample
from gcp.context import Context
CLOUD_PROJECT = Context.default().project_id
ml_bucket_name = CLOUD_PROJECT + '-mldata'
ml_bucket_path = 'gs://' + ml_bucket_name
OUTPUT_DIR = ml_bucket_path + '/sampledata/ml/image/output'
%%bash -s "$OUTPUT_DIR"
mkdir -p /datalab/ml/image
gsutil -m cp $1/dict.txt $1/data/data.*-00000-of-00010 /datalab/ml/image
with open('/datalab/ml/image/dict.txt') as f:
dictionary = f.read().splitlines()
print dictionary
###Output
['tulips', 'roses', 'dandelion', 'sunflowers', 'daisy']
###Markdown
Define the TensorFlow modelThe TensorFlow model can be either declared in-line or loaded from a file. There is already a file that defines the image classification model; it is loaded here.
###Code
%%tensorflow graph
from image_classification import *
###Output
_____no_output_____
###Markdown
Define a dataset
###Code
LOCAL_TRAIN_DATA = '/datalab/ml/image/data.train.json-00000-of-00010'
LOCAL_TEST_DATA = '/datalab/ml/image/data.test.json-00000-of-00010'
%%ml dataset -n image_data_local
train: $LOCAL_TRAIN_DATA
test: $LOCAL_TEST_DATA
###Output
_____no_output_____
###Markdown
Run trainingAs part of the training call, a set of hyperparameters can be passed. Those hyperparameters can be accessed in the code (image_classification.py, imported above) that generates the TensorFlow graph.
###Code
%%ml train -m image.v1 -d image_data_local
dropout_keep_prob: 0.5
hidden_layer_size: 512
batch_size: 64
learning_rate: 0.01
steps: 637
embedding_size: 2048
labels: 5
%%ml analyze --model image.v1
###Output
_____no_output_____
###Markdown
Cloud TrainDefine the full training dataset.
###Code
CLOUD_TRAIN_DATA = '%s/data/data.train.json-*' % OUTPUT_DIR
CLOUD_TEST_DATA = '%s/data/data.test.json-00000-of-00010' % OUTPUT_DIR
%%ml dataset -n cloud_image_data
train: $CLOUD_TRAIN_DATA
test: $CLOUD_TEST_DATA
###Output
_____no_output_____
###Markdown
Run training
###Code
%%ml train -m image.v1 -d cloud_image_data -o "$OUTPUT_DIR/model/" --cloud --overwrite
replicas: 5
dropout_keep_prob: 0.5
hidden_layer_size: 512
batch_size: 64
learning_rate: 0.01
steps: 6370
embedding_size: 2048
labels: 5
%%ml analyze --model image.v1 --cloud
###Output
_____no_output_____ |
notebooks/0.4.join_subject_level.ipynb | ###Markdown
Cognitive Tasks Comprehension
###Code
comp_df = frames[0].set_index(['SSID','time'])
comp_df = comp_df.unstack('time')['Score-sum'].reset_index()
comp_df['comp_change'] = comp_df[2] - comp_df[1]
comp_df = comp_df.rename(columns={'SSID':'sub',1:'comp_t1',2:'comp_t2'})
comp_df.head()
###Output
_____no_output_____
###Markdown
N-back
###Code
nback_df = frames[1][['sub','nback_RT','CoR']].rename(columns={'nback_RT':'nb_RT','CoR':'nb_CoR'})
nback_df.head()
###Output
_____no_output_____
###Markdown
ProcSpd
###Code
procspd_df = frames[2][['Subject','procspd_RT']].rename(columns={'Subject':'sub','RT':'procspd_RT'})
procspd_df.head()
###Output
_____no_output_____
###Markdown
Surveys
###Code
frames[3] = frames[3].rename(columns={'SSID':'sub'})
frames[3].head()
###Output
_____no_output_____
###Markdown
Demographics Defining Functions:*`group_ages()` and `group_fields()` simply parse raw responses into our categorical groupings.**`sanitize_fieldtext()` performs some operations to clean up text responses to the questions, "What is your major?" and "What Field and Level is your degree?"*
###Code
def group_ages(age,sub_id):
if 35 < age < 65: return(np.nan)
else: return(str(sub_id)[0])
def group_fields(student_major,sci_degree):
if student_major == 5 or sci_degree == 1: #self-classified science major/degree
return(1) # science field
elif student_major == 2 or sci_degree == 2: #self-classified nonscience major/degree
return(2) # nonscience field
else: # preferred not to answer
return(0)
def sanitize_fieldtext(field):
try:
new_field = field.lower()
new_field = new_field.strip(' ').replace(' ','-')
new_field = new_field.replace('sciences','science')
except AttributeError as e:
#print(e)
return('')
#print(field,'::',new_field)
return(new_field)
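# Quick sanity checks with hypothetical inputs (not study data), just to show the intended behavior:
print(group_fields(5, np.nan))                      # self-reported science major -> 1
print(group_fields(2, np.nan))                      # self-reported non-science major -> 2
print(sanitize_fieldtext(' Biological Sciences '))  # -> 'biological-science'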
###Output
_____no_output_____
###Markdown
Select demographics data`frames[3]` *contains all of the survey data from* `survey_subject_level.csv`
###Code
demog_df = frames[3].loc[:,[
'sub','Condition','Age','Gender',
'Major','Major_TEXT','SciDegree','SciDegree_TEXT',
'EduYears','SciEdu_HS','SciEdu_UGrad','SciEdu_Grad',
]]
###Output
_____no_output_____
###Markdown
Apply functions*Using *`np.vectorize`* to efficiently apply those functions we defined earlier to the columns of our data frame.*
###Code
pd.cut(demog_df['Age'], (18,35,65,90), include_lowest=True, labels=('YA', 0, 'OA'))
demog_df['AgeGroup'] = pd.cut(demog_df['Age'], (18,35,65,90), include_lowest=True, labels=('YA', 0, 'OA')).replace({0:np.nan})
demog_df['SciField'] = np.vectorize(group_fields)(demog_df['Major'],demog_df['SciDegree'])
demog_df['Major_TEXT'] = np.vectorize(sanitize_fieldtext)(demog_df['Major_TEXT'])
###Output
_____no_output_____
###Markdown
Display Demographics output
###Code
demog_df
###Output
_____no_output_____
###Markdown
Subscaling functions:`sum_subscale()`* takes a DataFrame and a label, simply adding a column with the sum score for each row in that scale/subscale. We'll use it for every scale we collected: once for each subscale in applicable scales.*`reverse_score()`* takes a DataFrame and, using *`max_likert`* and an assumption that columns ending with a lowercase "r" are flagged for reverse-scoring, applies reverse-scoring to the relevant data inside the DataFrame.*
###Code
def sum_subscale(df,label):
df = df.set_index('sub')
df[label+'_sum'] = df.sum(axis=1)
df = df.reset_index()
return(df)
def reverse_score(df,max_likert):
df[[c.strip('r') for c in df.columns if c.endswith('r')]] = (max_likert +1) - df[[c for c in df.columns if c.endswith('r')]]
df = df[[c for c in df if not c.endswith('r')]]
df = df.reindex(sorted(df.columns), axis=1).set_index('sub').reset_index()
return(df)
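# Tiny illustration with made-up data (not study data): reverse_score is meant to
# turn the 'r'-flagged column into Q2 = 6 - Q2r on a 5-point likert scale, and
# sum_subscale then adds the items into a 'demo_sum' column.
_demo = pd.DataFrame({'sub': [1, 2], 'Q1': [5, 2], 'Q2r': [1, 4]})
print(sum_subscale(reverse_score(_demo, 5), 'demo'))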
###Output
_____no_output_____
###Markdown
Vocab Derived from Shipley Institute of Living*Assesses correct identification of synonyms for 40 word-items.**Scoring is handled in Qualtrics; `Score-sum_x` reflects the number of synonyms correctly identified.*
###Code
vocab_df = frames[3][['sub','Score-sum_x']].rename(columns={'Score-sum_x':'vocab_sum'})
vocab_df.head()
###Output
_____no_output_____
###Markdown
NFCS Need for Cognition Scale*Measures dispositional motivation to seek intellectual challenge.**Items reflecting an avoidance to this behavior are reverse-scored.*
###Code
nfcs_df = frames[3][['sub']+[c for c in frames[3].columns if c.startswith('NFCS')]]
nfcs_df = nfcs_df.drop(columns='NFCS-00')
nfcs_df.head()
###Output
_____no_output_____
###Markdown
These items are to be ***forward-scored***:- `NFCS-01` "I prefer complex to simple problems."- `NFCS-02` "I like to have the responsibility of handling a situation that requires a lot of thinking."- `NFCS-06` "I find satisfaction in deliberating hard and for long hours."- `NFCS-10` "The idea of relying on thought to make my way to the top appeals to me." - `NFCS-11` "I really enjoy a task that involves coming up with new solutions to problems."- `NFCS-13` "I prefer my life to be filled with puzzles I must solve."- `NFCS-14` "The notion of thinking abstractly is appealing to me."- `NFCS-15` "I would prefer a task that is intellectual, difficult, and important to one that is somewhat important."- `NFCS-18` "I usually end up deliberating about issues even when they do not affect me personally."---The following items are to be ***reverse-scored***:- `NFCS-03r` "Thinking is not my idea of fun."- `NFCS-04r` "I would rather do something that requires little thought than something that is sure to challenge."- `NFCS-05r` "I try to anticipate and avoid situations where there is a likely chance that I will have to think."- `NFCS-07r` "I only think as hard as I have to."- `NFCS-08r` "I prefer to think about small, daily projects to long-term ones." - `NFCS-09r` "I like tasks that require little thought once I've learned them."- `NFCS-12r` "Learning new ways to think doesn't excite me very much." - `NFCS-16r` "I feel relief rather than satisfaction after completing a task that requires a lot of mental effort."- `NFCS-17r` "It's enough for me that something gets the job done; I don't care how or why it works."
###Code
nfcs_df = reverse_score(nfcs_df,5)
nfcs_df = sum_subscale(nfcs_df,'NFCS')
nfcs_df.head()
###Output
_____no_output_____
###Markdown
TSSI Trust in Science and Scientists Inventory*Measures tendencies to value the scientific process. Items reflecting a lack of trust are reverse-scored.*
###Code
tssi_df = frames[3][['sub']+[c for c in frames[3].columns if c.startswith('TSSI')]]
tssi_df = tssi_df.drop(columns='TSSI-00')
tssi_df.head()
###Output
_____no_output_____
###Markdown
The following items are to be ***forward-scored***:- `TSSI-05` "We can trust scientists to share their discoveries even if they don't like their findings." - `TSSI-07` "I trust the work of scientists to make life better for people." - `TSSI-09` "We should trust the work of scientists."- `TSSI-10` "We should trust that scientists are being honest in their work." - `TSSI-11` "We should trust that scientists are being ethical in their work."- `TSSI-12` "Scientific theories are trustworthy."- `TSSI-14` "People who understand science more have more trust in science." - `TSSI-15` "We can trust science to find the answers that explain the natural world." - `TSSI-16` "I trust scientists can find solutions to our major technological problems"---The following items are to be ***reverse-scored***:- `TSSI-01r` "When scientists change their mind about a scientific idea it diminishes my trust in their work." - `TSSI-02r` "Scientists ignore evidence that contradicts their work." - `TSSI-03r` "Scientific theories are weak explanations." - `TSSI-04r` "Scientists intentionally keep their work secret." - `TSSI-06r` "Scientists don't value the ideas of others."- `TSSI-08r` "Scientists don't care if laypersons understand their work."- `TSSI-13r` "When scientists form a hypothesis they are just guessing." - `TSSI-17r` "We cannot trust scientists because they are biased in their perspectives."- `TSSI-18r` "Scientists will protect each other even when they are wrong."- `TSSI-19r` "We cannot trust scientists to consider ideas that contradict their own." - `TSSI-20r` "Today's scientists will sacrifice the well being of others to advance their research." - `TSSI-21r` "We cannot trust science because it moves too slowly."
###Code
tssi_df = reverse_score(tssi_df,5)
tssi_df = sum_subscale(tssi_df,'TSSI')
tssi_df.head()
###Output
_____no_output_____
###Markdown
TOSLS Test of Science Literacy Skills*All data represent raw scoring with no reverse scores.*
###Code
sciLit_df = frames[3][['sub']+[c for c in frames[3].columns if c.startswith('SciLit')]]
sciLit_df = sciLit_df.drop(columns=['SciLit-00','SciLit-00.1'])
sciLit_df.head()
###Output
_____no_output_____
###Markdown
**SciLit2-10:**>Your interest is piqued by a story about human pheromones on the news.>>A Google search leads you to the following website:>>>>For this website (Eros Foundation), which of the following characteristics is most important in your confidence that the resource is accurate or not.**SciLit2-12:**>“A recent study, following more than 2,500 New Yorkers for 9+ years, found that people who drank diet soda every day had a 61% higher risk of vascular events, including stroke and heart attack, compared to those who avoided diet drinks. For this study, Hannah Gardner’s research team randomly surveyed 2,564 New Yorkers about their eating behaviors, exercise habits, as well as cigarette and alcohol consumption. Participants were also given physical check-ups, including blood pressure measurements and blood tests for cholesterol and other factors that might affect the risk for heart attack and stroke. The increased likelihood of vascular events remained even after Gardener and her colleagues accounted for risk factors, such as smoking, high blood pressure and high cholesterol levels. The researchers found no increased risk among people who drank regular soda.”>The excerpt above comes from what type of source of information?**SciLit2-17:**>"The most important factor influencing you to categorize a research article as trustworthy science is:"**SciLit2-26:** >*"You’ve been doing research to help your grandmother understand two new drugs for osteoporosis. One publication, Eurasian Journal of Bone and Joint Medicine, contains articles with data only showing the effectiveness of one of these new drugs. A pharmaceutical company funded the Eurasian Journal of Bone and Joint Medicine production and most advertisements in the journal are for this company’s products. In your searches, you find other articles that show the same drug has only limited effectiveness."*>>Pick the best answer that would help you decide about the credibility of the Eurasian Journal of Bone and joint medicine:**SciLit3-27:**>"Which of the following actions is a valid scientific course of action?"**SciLit2-22:**>Your doctor prescribed you a drug that is brand new. The drug has some significant side effects, so you do some research to determine the effectiveness of the new drug compared to similar drugs on the market. Which of the following sources would provide the *most accurate* information?"
###Code
sciLit_df = sciLit_df.merge(frames[3][['sub','Score-sum_y']].rename(columns={'Score-sum_y':'SciLit_sum'}))
sciLit_df.head()
###Output
_____no_output_____
###Markdown
TOSRA Test of Science-Related Attitudes*Measures the emotional value towards science. Reverse scoring indicates a higher emotional value in science.*
###Code
scitude_df = frames[3][['sub']+[c for c in frames[3].columns if c.startswith('SciTude')]]
scitude_df = scitude_df.drop(columns='SciTude-00')
scitude_df.head()
###Output
_____no_output_____
###Markdown
It's important to note for the TOSRA that the likert scale used was as follows:> Strongly Agree | Agree | Neutral | Disagree | Strongly DisagreeBecause these are coded 1-5 from left to right, the raw responses for each item are effectively *already* reverse-scored.Thus, we apply the `reverse_score()` function to the items we want to be forward-scored in the final data set, by labelling them with 'r'.___The following items are to be ***forward-scored***:- `SciTudeA` - `SciTudeA-04r` "I enjoy reading about things which disagree with my previous ideas." - `SciTudeA-18r` "I am curious about the world in which we live."- `SciTudeL` - `SciTudeL-20r` "I would like to be given a science book as a present." - `SciTudeL-62r` "I would enjoy visiting a science museum on the weekend."- `SciTudeS` - `SciTudeS-01r` "Money spent on science is well worth spending." - `SciTudeS-15r` "Public money spent on science in the last few years has been used wisely." - `SciTudeS-29r` "The government should spend more money on scientific research." - `SciTudeS-43r` "Science helps make life better." - `SciTudeS-57r` "Science can help to make the world a better place in the future."---The following items are to be ***reverse-scored***:- `SciTudeA` - `SciTudeA-25` "Finding out about new things is unimportant." - `SciTudeA-39` "I find it boring to hear about new ideas." - `SciTudeA-53` "I am unwilling to change my ideas when evidence shows that the ideas are poor." - `SciTudeA-67` "I dislike listening to other people's opinions."- `SciTudeL` - `SciTudeL-13` "I get bored when watching science programs on TV at home." - `SciTudeL-27` "I dislike reading books about science on my vacation." - `SciTudeL-55` "Listening to talk about science on the radio would be boring." - `SciTudeL-69` "I dislike reading newspaper articles about science."- `SciTudeS` - `SciTudeS-22` "Scientific discoveries are doing more harm than good."---You will notice that there are 3 subscales we are going to sum up for the TOSRA:- `SciTudeA` "Adoption of Scientific Attitudes"- `SciTudeL` "Leisure Interest in Science"- `SciTudeS` "Social Implications of Science"To do this, we will first split up the dataframe into a list of subscale DataFrames, and individually apply the `reverse_score()` and `sum_subscale()` functions.Then we will recombine that list using the `reduce()` function, into a final DataFrame with our scored subscale values.
###Code
scitude_subscales = sorted(list(set([c.split('-')[0] for c in scitude_df.set_index('sub').columns])))
scitude_dfs=[]
for subscale in scitude_subscales:
df = scitude_df.loc[:,['sub']+[c for c in scitude_df.columns if c.startswith(subscale)]]
df = reverse_score(df,5)
df = sum_subscale(df,subscale)
scitude_dfs.append(df)
print(df.head()[['sub',subscale+'_sum']])
scitude_df = reduce(lambda left,right: pd.merge(left,right,on='sub'), scitude_dfs)
scitude_df.head()[['sub','SciTudeA_sum','SciTudeL_sum','SciTudeS_sum']]
###Output
_____no_output_____
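###Markdown
(For reference: `reverse_score(df, 5)` above is assumed to implement the conventional Likert reversal on a 5-point scale, mapping each response $x$ to $(5 + 1) - x = 6 - x$, so that a 1 becomes a 5 and vice versa.)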
###Markdown
Openness to Experience Big-Five personality trait Openness to Experience *Personality trait of seeking new experiences and intellectual pursuits. High scorers may daydream a lot. Low scorers may be very down to earth.*
###Code
o2xp_df = frames[3][['sub']+[c for c in frames[3].columns if c.startswith('O')]]
o2xp_df = o2xp_df.drop(columns=['Open-0','Original_Feedback'])
o2xp_df.head()
###Output
_____no_output_____
###Markdown
The following items are to be ***forward-scored***:- `O1` - `O1-003` "Have a vivid imagination." - `O1-033` "Enjoy wild flights of fantasy." - `O1-063` "Love to daydream." - `O1-093` "Like to get lost in thought."- `O2` - `O2-008` "Believe in the importance of art." - `O2-038` "See beauty in things that others might not notice."- `O3` - `O3-013` "Experience my emotions intensely." - `O3-043` "Feel others' emotions."- `O4` - `O4-018` "Prefer variety to routine."- `O5` - `O5-023` "Love to read challenging material."- `O6` - `O6-028` "Tend to vote for liberal political candidates." - `O6-058` "Believe that there is no absolute right or wrong." The following items are to be ***reverse-scored***:- `O2` - `O2-068r` "Do not like poetry." - `O2-098r` "Do not enjoy going to art museums."- `O3` - `O3-073r` "Rarely notice my emotional reactions." - `O3-103r` "Don't understand people who get emotional."- `O4` - `O4-048r` "Prefer to stick with things that I know." - `O4-078r` "Dislike changes." - `O4-108r` "Am attached to conventional ways."- `O5` - `O5-053r` "Avoid philosophical discussions." - `O5-083r` "Have difficulty understanding abstract ideas." - `O5-113r` "Am not interested in theoretical discussions."- `O6` - `O6-088r` "Tend to vote for conservative political candidates." - `O6-118r` "Believe that we should be tough on crime." From this version of the Openness to Experience inventory, there are six subscales we are going to score:- `O1` "Imagination"- `O2` "Artistic Interests"- `O3` "Emotionality"- `O4` "Adventurousness"- `O5` "Intellect"- `O6` "Liberalism". Just like we did for the TOSRA subscales earlier, we will first split up the dataframe into a list of subscale DataFrames, and individually apply the `reverse_score()` and `sum_subscale()` functions. Then we will recombine that list using the `reduce()` function into a final DataFrame with our scored subscale values.
###Code
o2xp_subscales = sorted(list(set([c.split('-')[0] for c in o2xp_df.set_index('sub').columns])))
o2xp_dfs=[]
for subscale in o2xp_subscales:
df = o2xp_df.loc[:,['sub']+[c for c in o2xp_df.columns if c.startswith(subscale)]]
df = reverse_score(df,5)
df = sum_subscale(df,subscale)
o2xp_dfs.append(df)
print(df.head())
o2xp_df = reduce(lambda left,right: pd.merge(left,right,on='sub'), o2xp_dfs)
o2xp_df.head()[['sub','O1_sum','O2_sum','O3_sum','O4_sum','O5_sum','O6_sum']]
###Output
_____no_output_____
###Markdown
Output
###Code
output_df = demog_df.merge(comp_df[['sub','comp_t1','comp_t2','comp_change']],'outer'
).merge(nback_df[['sub','nb_RT','nb_CoR']],'outer'
).merge(procspd_df[['sub','procspd_RT']],'outer'
).merge(nfcs_df[['sub','NFCS_sum']],'outer'
).merge(tssi_df[['sub','TSSI_sum']],'outer'
).merge(vocab_df[['sub','vocab_sum']],'outer'
).merge(sciLit_df[['sub','SciLit_sum']],'outer'
).merge(scitude_df[['sub','SciTudeA_sum','SciTudeL_sum','SciTudeS_sum']],'outer'
).merge(o2xp_df[['sub','O1_sum','O2_sum','O3_sum','O4_sum','O5_sum','O6_sum']],'outer'
)
output_df
output_df.to_csv( output_dir / 'all_subject_level.csv' , index=False)
###Output
_____no_output_____
###Markdown
Patch to pre-analysis The remainder of this notebook is a hard patch of some quick-and-dirty fixes for the final pre-analysis dataset. It should be better integrated later.
###Code
data = pd.read_csv(output_dir / 'all_subject_level.csv')
###Output
_____no_output_____
###Markdown
Quick cleaning
###Code
data['Condition'] = data['Condition'].map({1:'Annotated',2:'Video',3:'Original'})
data['SciField'] = data['SciField'] - 1
data['Gender'] = data['Gender'] - 1
data['nb_CoR'] = data['nb_CoR']*100
data[['SciEdu_HS','SciEdu_UGrad','SciEdu_Grad']] = data[['SciEdu_HS','SciEdu_UGrad','SciEdu_Grad']].replace({np.nan:0})
data.head()
###Output
_____no_output_____
###Markdown
Group-level Outlier Exclusion Using a quantile method for this: nulling out data for subjects who fall outside the 0.00135/0.99865 quantiles within their Age Group.
###Code
from outliers import group_exclude
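# `group_exclude` comes from the project's local `outliers` module. A rough, assumed sketch of its
# behaviour (not the actual implementation): per group, values outside the 0.00135/0.99865 quantiles
# are nulled out and returned as a new bounded column that can be joined back onto the data, e.g.
# def group_exclude(df, group_col, value_col):
#     lo, hi = 0.00135, 0.99865
#     bound = lambda s: s.where(s.between(s.quantile(lo), s.quantile(hi)))
#     return df.groupby(group_col)[value_col].transform(bound).rename(value_col + '_bound')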
for value_col in ('comp_change', 'comp_t1', 'comp_t2', 'nb_RT', 'nb_CoR',
'procspd_RT', 'NFCS_sum', 'TSSI_sum', 'vocab_sum',
'SciLit_sum', 'SciTudeA_sum', 'SciTudeL_sum', 'SciTudeS_sum',
'O1_sum', 'O2_sum', 'O3_sum', 'O4_sum', 'O5_sum', 'O6_sum'):
data = data.join(group_exclude(data, 'AgeGroup', value_col))
data.to_csv(derivs_dir / '20190218' / 'all_subject_level_bound.csv', index=False)
###Output
_____no_output_____ |
fashionMNIST.ipynb | ###Markdown
Preparing your data for training with DataLoaders In machine learning, you need to specify what the features and labels are in your dataset. Features are the input and labels are the output: we use the features to train the model to predict the label. Here the labels are the 10 class types (T-shirt, Sandal, Dress, etc.) and the features are the patterns in the image pixels.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Iterate through the DataLoader We have loaded the dataset into the DataLoader and can iterate through it as needed. Each iteration below returns a batch of train_features and train_labels (containing batch_size=64 features and labels respectively). Because we specified shuffle=True, the data is reshuffled after we iterate over all batches (for finer-grained control over the data loading order, take a look at Samplers).
###Code
import matplotlib.pyplot as plt
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
Feature batch shape: torch.Size([64, 1, 28, 28])
Labels batch shape: torch.Size([64])
###Markdown
Normalization Data does not always come in the final processed form that is required for training machine learning algorithms. We use transforms to perform some manipulation of the data and make it suitable for training.
###Code
import torch
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
ds = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1))
)
###Output
_____no_output_____
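###Markdown
A hedged aside on the "Normalization" heading above (not part of the original tutorial): pixel scaling from `ToTensor` is often combined with `transforms.Normalize`; the mean/std of 0.5 below are illustrative assumptions rather than dataset statistics.
###Code
from torchvision import transforms
# ToTensor scales pixels to [0, 1]; Normalize((0.5,), (0.5,)) then shifts/scales them to roughly [-1, 1]
normalize_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
###Output
_____no_output_____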
###Markdown
ToTensor() converts a PIL image or NumPy ndarray into a FloatTensor and scales the image's pixel intensity values into the range [0., 1.]. Lambda transforms apply any user-defined lambda function. Here, we define a function to turn the integer label into a one-hot encoded tensor: it first creates a zero tensor of size 10 (the number of labels in our dataset) and calls scatter_, which assigns value=1 at the index given by the label y. You can also use torch.nn.functional.one_hot as another option to do that.
###Code
target_transform = Lambda(lambda y: torch.zeros(
10, dtype=torch.float).scatter_(dim=0, index=torch.tensor(y), value=1))
###Output
_____no_output_____
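###Markdown
As mentioned above, `torch.nn.functional.one_hot` is an alternative to the scatter-based lambda; a minimal sketch assuming the same 10 FashionMNIST classes (the name `target_transform_onehot` is illustrative):
###Code
import torch
import torch.nn.functional as F
# One-hot target transform equivalent to the scatter_-based Lambda above
target_transform_onehot = Lambda(lambda y: F.one_hot(torch.tensor(y), num_classes=10).float())
###Output
_____no_output_____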
###Markdown
Load the dataset
###Code
import torchvision
from torchvision import transforms
# Use standard FashionMNIST dataset
train_set = torchvision.datasets.FashionMNIST(
root = './data/FashionMNIST',
train = True,
download = True,
transform = transforms.Compose([
transforms.ToTensor()
])
)
loader = torch.utils.data.DataLoader(train_set, batch_size = 1)
# type(loader.dataset[0][0]) # this is to access one of the images.
# type(loader.dataset[0][1]) # this is to access the label.
images = loader.dataset[10][0]
# images.size()
images.size()
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
imgplot = plt.imshow(images[0, :, :].numpy())
###Output
_____no_output_____
###Markdown
Build the network
###Code
# Imports needed by the network and training cells below (missing from the notebook as captured)
import time
import json
import pandas as pd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from collections import OrderedDict, namedtuple
from itertools import product
from torch.utils.tensorboard import SummaryWriter
from IPython.display import clear_output, display
class Network(nn.Module):
def __init__(self):
super().__init__()
# define layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
self.fc1 = nn.Linear(in_features=12*4*4, out_features=120)
self.fc2 = nn.Linear(in_features=120, out_features=60)
self.out = nn.Linear(in_features=60, out_features=10)
# define forward function
def forward(self, t):
# conv 1
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# conv 2
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# fc1
t = t.reshape(-1, 12*4*4)
t = self.fc1(t)
t = F.relu(t)
# fc2
t = self.fc2(t)
t = F.relu(t)
# output
t = self.out(t)
# no softmax needed here: F.cross_entropy expects raw logits and applies log-softmax internally.
return t
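# NOTE: the deeper CNN defined next re-uses the name Network, so it overrides the simpler two-conv model above.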
class Network(nn.Module):
def __init__(self):
super().__init__()
# define layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=16, kernel_size=3, padding=1)
self.conv2 = nn.Conv2d(in_channels=16, out_channels=16, kernel_size=3, padding=1)
self.conv3 = nn.Conv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
self.conv4 = nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3, padding=1)
self.conv5 = nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
self.conv6 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, padding=1)
self.fc1 = nn.Linear(in_features=64*7*7, out_features=128)
self.fc2 = nn.Linear(in_features=128, out_features=64)
self.out = nn.Linear(in_features=64, out_features=10)
# define forward function
def forward(self, t):
t = F.relu(self.conv1(t))
t = F.relu(self.conv2(t))
t = F.max_pool2d(t, kernel_size=2, stride=2)
t = F.relu(self.conv3(t))
t = F.relu(self.conv4(t))
t = F.max_pool2d(t, kernel_size=2, stride=2)
t = F.relu(self.conv5(t))
t = F.relu(self.conv6(t))
# fc1
t = t.reshape(-1, t.shape[1]*t.shape[2]*t.shape[3])
t = self.fc1(t)
t = F.relu(t)
# fc2
t = self.fc2(t)
t = F.relu(t)
# output
t = self.out(t)
# no softmax needed here: F.cross_entropy expects raw logits and applies log-softmax internally.
return t
# Network sanity check.
x = loader.dataset[0][0]
x = x.unsqueeze(0)
model = Network()
y = model(x)
print(x.shape)
print(y.shape)
cuda = torch.device("cuda")
modelGPU = Network().to(device=cuda)
y = modelGPU(x.to(device=cuda))
y.shape
# Store hyperparameters in a dictionary.
# params = OrderedDict(
# lr = [.01, .001],
# batch_size = [100, 1000],
# shuffle = [True, False]
# )
params = OrderedDict(
lr = [.01],
batch_size = [100],
shuffle = [True]
)
epochs = 25
###Output
_____no_output_____
###Markdown
RunBuilder
###Code
# Read in the hyper-parameters and return a Run namedtuple containing all the
# combinations of hyper-parameters
class RunBuilder():
@staticmethod
def get_runs(params):
Run = namedtuple('Run', params.keys())
runs = []
for v in product(*params.values()):
runs.append(Run(*v))
return runs
###Output
_____no_output_____
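###Markdown
A quick look at what `RunBuilder` produces for the `params` defined earlier: each element is a `Run` namedtuple, e.g. `Run(lr=0.01, batch_size=100, shuffle=True)` for the single-combination grid above.
###Code
# Expand the hyper-parameter grid into Run namedtuples and inspect them
print(RunBuilder.get_runs(params))
###Output
_____no_output_____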
###Markdown
RunManager
###Code
# Helper class, help track loss, accuracy, epoch time, run time,
# hyper-parameters etc. Also record to TensorBoard and write into csv, json.
class RunManager():
def __init__(self):
# tracking every epoch count, loss, accuracy, time
self.epoch_count = 0
self.epoch_loss = 0
self.epoch_num_correct = 0
self.epoch_start_time = None
# tracking every run count, run data, hyper-params used, time
self.run_params = None
self.run_count = 0
self.run_data = []
self.run_start_time = None
# record model, loader and TensorBoard
self.network = None
self.loader = None
self.tb = None
# record the count, hyper-param, model, loader of each run
# record sample images and network graph to TensorBoard
def begin_run(self, run, network, loader):
self.run_start_time = time.time()
self.run_params = run
self.run_count += 1
self.network = network
self.loader = loader
self.tb = SummaryWriter(comment=f'-{run}')
images, labels = next(iter(self.loader))
grid = torchvision.utils.make_grid(images)
self.tb.add_image('images', grid)
self.tb.add_graph(self.network, images.to(device=cuda))
# when run ends, close TensorBoard, zero epoch count
def end_run(self):
self.tb.close()
self.epoch_count = 0
# zero epoch count, loss, accuracy,
def begin_epoch(self):
self.epoch_start_time = time.time()
self.epoch_count += 1
self.epoch_loss = 0
self.epoch_num_correct = 0
#
def end_epoch(self):
# calculate epoch duration and run duration(accumulate)
epoch_duration = time.time() - self.epoch_start_time
run_duration = time.time() - self.run_start_time
# record epoch loss and accuracy
loss = self.epoch_loss / len(self.loader.dataset)
accuracy = self.epoch_num_correct / len(self.loader.dataset)
# Record epoch loss and accuracy to TensorBoard
self.tb.add_scalar('Loss', loss, self.epoch_count)
self.tb.add_scalar('Accuracy', accuracy, self.epoch_count)
# Record params to TensorBoard
for name, param in self.network.named_parameters():
self.tb.add_histogram(name, param, self.epoch_count)
self.tb.add_histogram(f'{name}.grad', param.grad, self.epoch_count)
# Write into 'results' (OrderedDict) for all run related data
results = OrderedDict()
results["run"] = self.run_count
results["epoch"] = self.epoch_count
results["loss"] = loss
results["accuracy"] = accuracy
results["epoch duration"] = epoch_duration
results["run duration"] = run_duration
# Record hyper-params into 'results'
for k,v in self.run_params._asdict().items(): results[k] = v
self.run_data.append(results)
df = pd.DataFrame.from_dict(self.run_data, orient = 'columns')
# display epoch information and show progress
clear_output(wait=True)
display(df)
# accumulate loss of batch into entire epoch loss
def track_loss(self, loss):
# multiply batch size so variety of batch sizes can be compared
self.epoch_loss += loss.item() * self.loader.batch_size
# accumulate number of corrects of batch into entire epoch num_correct
def track_num_correct(self, preds, labels):
self.epoch_num_correct += self._get_num_correct(preds, labels)
@torch.no_grad()
def _get_num_correct(self, preds, labels):
return preds.argmax(dim=1).eq(labels).sum().item()
# save end results of all runs into csv, json for further analysis
def save(self, fileName):
pd.DataFrame.from_dict(
self.run_data,
orient = 'columns',
).to_csv(f'{fileName}.csv')
with open(f'{fileName}.json', 'w', encoding='utf-8') as f:
json.dump(self.run_data, f, ensure_ascii=False, indent=4)
###Output
_____no_output_____
###Markdown
Training (CPU)
###Code
# m = RunManager()
# # get all runs from params using RunBuilder class
# for run in RunBuilder.get_runs(params):
# # if params changes, following line of code should reflect the changes too
# network = Network()
# loader = torch.utils.data.DataLoader(train_set, batch_size = run.batch_size)
# optimizer = optim.Adam(network.parameters(), lr=run.lr)
# m.begin_run(run, network, loader)
# for epoch in range(epochs):
# m.begin_epoch()
# for batch in loader:
# images = batch[0]
# labels = batch[1]
# preds = network(images)
# loss = F.cross_entropy(preds, labels)
# optimizer.zero_grad()
# loss.backward()
# optimizer.step()
# m.track_loss(loss)
# m.track_num_correct(preds, labels)
# m.end_epoch()
# m.end_run()
# # when all runs are done, save results to files
# m.save('results')
###Output
_____no_output_____
###Markdown
Training (GPU)
###Code
m = RunManager()
# get all runs from params using RunBuilder class
for run in RunBuilder.get_runs(params):
# if params changes, following line of code should reflect the changes too
networkGPU = Network().to(device=cuda)
loader = torch.utils.data.DataLoader(train_set, batch_size = run.batch_size)
optimizer = optim.Adam(networkGPU.parameters(), lr=run.lr)
m.begin_run(run, networkGPU, loader)
for epoch in range(epochs):
m.begin_epoch()
for batch in loader:
images = batch[0].to(device=cuda)
labels = batch[1]
preds = networkGPU(images)
#print("\n", "QQQQQ", "\n")
preds = preds.cpu()
loss = F.cross_entropy(preds, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
m.track_loss(loss)
m.track_num_correct(preds, labels)
m.end_epoch()
m.end_run()
# when all runs are done, save results to files
m.save('results')
###Output
_____no_output_____
###Markdown
TensorBoard
###Code
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip
LOG_DIR = './runs'
get_ipython().system_raw(
'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
.format(LOG_DIR)
)
get_ipython().system_raw('./ngrok http 6006 &')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
# !rm -r runs/Nov*
###Output
_____no_output_____
###Markdown
###Code
import torch
from torchvision import transforms,datasets
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import SubsetRandomSampler
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Defining our Transforms
###Code
transform=transforms.Compose([transforms.ToTensor()])
###Output
_____no_output_____
###Markdown
Gathering the train and test data
###Code
train_data=datasets.FashionMNIST('data',train=True,download=True,transform=transform)
test_data=datasets.FashionMNIST('data',train=False,download=True,transform=transform)
###Output
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to data/FashionMNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Defining our Train, Valid and Test Dataloaders
###Code
valid_size=0.2
train_length=len(train_data)
indices=[i for i in range(train_length)]
np.random.shuffle(indices)
split=int(np.floor(valid_size*train_length))
train_idx=indices[split:]
valid_idx=indices[:split]
train_sampler=SubsetRandomSampler(train_idx)
valid_sampler=SubsetRandomSampler(valid_idx)
num_workers=0
batch_size=20
train_loader=torch.utils.data.DataLoader(train_data,batch_size=batch_size,sampler=train_sampler,num_workers=num_workers)
valid_loader=torch.utils.data.DataLoader(train_data,batch_size=batch_size,sampler=train_sampler,num_workers=num_workers)
test_loader=torch.utils.data.DataLoader(test_data,batch_size=batch_size,num_workers=num_workers)
print(f"Training data size : {train_idx.__len__()}, Validation data size : {valid_idx.__len__()}, Test data size : {test_loader.dataset.__len__()}")
# checking our data
dataiter=iter(train_loader)
images,labels=next(dataiter)
print(images, images.shape, len(images), images[0].shape)
print()
print(labels,labels.shape,len(labels))
fashion_class={
0:"T-shirt/top",
1:"Trouser",
2:"Pullover",
3:"Dress",
4:"Coat",
5:"Sandal",
6:"Shirt",
7:"Sneaker",
8:"Bag",
9:"Ankle boot"
}
fig=plt.figure(figsize=(30,10))
for i in range(len(labels)):
ax=fig.add_subplot(2,10,i+1,xticks=[],yticks=[])
plt.imshow(np.squeeze(images[i]))
ax.set_title(f"{fashion_class[labels[i].item()]}({labels[i].item()})")
###Output
_____no_output_____
###Markdown
Defining our Neural Net Architecture
###Code
class FNet(nn.Module):
def __init__(self):
super(FNet,self).__init__()
self.fc1=nn.Linear(784,512)
self.fc2=nn.Linear(512,256)
self.out=nn.Linear(256,10)
# Dropout probability - set for avoiding overfitting
self.dropout=nn.Dropout(0.2)
def forward(self,x):
x = x.view(-1, 28 * 28)
x=self.dropout(F.relu(self.fc1(x)))
x=self.dropout(F.relu(self.fc2(x)))
x=self.out(x)
return x
class convNet(nn.Module):
def __init__(self):
super(convNet,self).__init__()
self.conv1=nn.Conv2d(in_channels=1,out_channels=16,kernel_size=3,padding=1,stride=1)
self.conv2=nn.Conv2d(in_channels=16,out_channels=32,kernel_size=3,padding=1,stride=1)
self.pool=nn.MaxPool2d(kernel_size=2,stride=2)
self.fc1=nn.Linear(7*7*32,512)
self.fc2=nn.Linear(512,256)
self.out=nn.Linear(256,10)
self.dropout=nn.Dropout(0.2)
def forward(self,x):
x=self.pool(F.relu(self.conv1(x)))
x=self.pool(F.relu(self.conv2(x)))
x=x.view(-1,7*7*32)
x = self.dropout(x)
x=self.dropout(F.relu(self.fc1(x)))
x=self.dropout(F.relu(self.fc2(x)))
x=self.out(x)
return x
model_1=FNet()
model_2=convNet()
def weight_init_normal(m):
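# Custom initialisation: give each Linear layer N(0, 1/sqrt(fan_in)) weights and zero biases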
classname=m.__class__.__name__
if classname.find('Linear')!=-1:
n = m.in_features
y = (1.0/np.sqrt(n))
m.weight.data.normal_(0, y)
m.bias.data.fill_(0)
model_1.apply(weight_init_normal),model_2.apply(weight_init_normal)
use_cuda=True
if use_cuda and torch.cuda.is_available():
model_1.cuda()
model_2.cuda()
print(model_1,'\n\n\n\n',model_2,'\n\n\n\n','On GPU : ',torch.cuda.is_available())
# Loss Function
# If we did not apply (log-)softmax at the output, use nn.CrossEntropyLoss(); if the network outputs log-probabilities, use nn.NLLLoss()
criterion=nn.CrossEntropyLoss()
###Output
_____no_output_____
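###Markdown
A small sanity check of the comment above (an added sketch, not part of the original notebook): `nn.CrossEntropyLoss` on raw logits gives the same value as `nn.NLLLoss` on log-softmax outputs.
###Code
# CrossEntropyLoss(logits, y) should match NLLLoss(log_softmax(logits), y) up to floating-point error
logits_demo = torch.randn(4, 10)
targets_demo = torch.randint(0, 10, (4,))
print(nn.CrossEntropyLoss()(logits_demo, targets_demo).item())
print(nn.NLLLoss()(F.log_softmax(logits_demo, dim=1), targets_demo).item())
###Output
_____no_output_____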
###Markdown
Training and Validation Phase
###Code
def trainNet(model,lr):
optimizer=torch.optim.Adam(model.parameters(),lr=lr)
loss_keeper={'train':[],'valid':[]}
epochs=50
valid_loss_min = np.Inf
for epoch in range(epochs):
train_loss=0.0
valid_loss=0.0
"""
TRAINING PHASE
"""
model.train()
for images,labels in train_loader:
if use_cuda and torch.cuda.is_available():
images,labels=images.cuda(),labels.cuda()
optimizer.zero_grad()
output=model(images)
loss=criterion(output,labels)
loss.backward()
optimizer.step()
train_loss+=loss.item()
"""
VALIDATION PHASE
"""
model.eval()
for images,labels in valid_loader:
if use_cuda and torch.cuda.is_available():
images,labels=images.cuda(),labels.cuda()
output=model(images)
loss=criterion(output,labels)
valid_loss+=loss.item()
train_loss = train_loss/len(train_loader)
valid_loss = valid_loss/len(valid_loader)
loss_keeper['train'].append(train_loss)
loss_keeper['valid'].append(valid_loss)
print(f"\nEpoch : {epoch+1}\tTraining Loss : {train_loss}\tValidation Loss : {valid_loss}")
if valid_loss<=valid_loss_min:
print(f"Validation loss decreased from : {valid_loss_min} ----> {valid_loss} ----> Saving Model.......")
z=type(model).__name__
torch.save(model.state_dict(), z+'_model.pth')
valid_loss_min=valid_loss
return(loss_keeper)
m1_loss=trainNet(model_1,0.001)
m1_loss
m2_loss=trainNet(model_2,0.001)
m2_loss
###Output
_____no_output_____
###Markdown
Loading model from Lowest Validation Loss
###Code
# Loading the model from the lowest validation loss
model_1.load_state_dict(torch.load('FNet_model.pth'))
model_2.load_state_dict(torch.load('convNet_model.pth'))
print(model_1.state_dict,'\n\n\n\n',model_2.state_dict)
###Output
<bound method Module.state_dict of FNet(
(fc1): Linear(in_features=784, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=256, bias=True)
(out): Linear(in_features=256, out_features=10, bias=True)
(dropout): Dropout(p=0.2, inplace=False)
)>
<bound method Module.state_dict of convNet(
(conv1): Conv2d(1, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(conv2): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(fc1): Linear(in_features=1568, out_features=512, bias=True)
(fc2): Linear(in_features=512, out_features=256, bias=True)
(out): Linear(in_features=256, out_features=10, bias=True)
(dropout): Dropout(p=0.2, inplace=False)
)>
###Markdown
Plotting Training and Validation Losses
###Code
title=['FFNN','CNN']
model_losses=[m1_loss,m2_loss]
fig=plt.figure(1,figsize=(10,5))
idx=1
for i in model_losses:
ax=fig.add_subplot(1,2,idx)
ax.plot(i['train'],label="Training Loss")
ax.plot(i['valid'],label="Validation Loss")
ax.set_title('Fashion MNIST : '+title[idx-1])
idx+=1
plt.legend();
###Output
_____no_output_____
###Markdown
Testing Phase
###Code
def test(model):
correct=0
test_loss=0
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
model.eval() # test the model with dropout layers off
for images,labels in test_loader:
if use_cuda and torch.cuda.is_available():
images,labels=images.cuda(),labels.cuda()
output=model(images)
loss=criterion(output,labels)
test_loss+=loss.item()
_,pred=torch.max(output,1)
correct = np.squeeze(pred.eq(labels.data.view_as(pred)))
for i in range(batch_size):
label = labels.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
test_loss=test_loss/len(test_loader)
print(f'For {type(model).__name__} :')
print(f"Test Loss: {test_loss}")
print(f"Correctly predicted per class : {class_correct}, Total correctly perdicted : {sum(class_correct)}")
print(f"Total Predictions per class : {class_total}, Total predictions to be made : {sum(class_total)}\n")
for i in range(10):
if class_total[i] > 0:
print(f"Test Accuracy of class {fashion_class[i]} : {float(100 * class_correct[i] / class_total[i])}% where {int(np.sum(class_correct[i]))} of {int(np.sum(class_total[i]))} were predicted correctly")
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (fashion_class[i]))
print(f"\nOverall Test Accuracy : {float(100. * np.sum(class_correct) / np.sum(class_total))}% where {int(np.sum(class_correct))} of {int(np.sum(class_total))} were predicted correctly")
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = next(dataiter)
# get sample outputs
if use_cuda and torch.cuda.is_available():
images,labels=images.cuda(),labels.cuda()
output = model(images)
# convert output probabilities to predicted class
_, preds = torch.max(output, 1)
# prep images for display
images = images.cpu().numpy()
fig = plt.figure(figsize=(15, 20))
for idx in np.arange(batch_size):
ax = fig.add_subplot(5, batch_size//5, idx+1, xticks=[], yticks=[])
plt.imshow(np.squeeze(images[idx]))
ax.set_title("{}-{} for ({}-{})".format(str(preds[idx].item()), fashion_class[preds[idx].item()],str(labels[idx].item()),fashion_class[labels[idx].item()]),
color=("blue" if preds[idx]==labels[idx] else "red"))
###Output
_____no_output_____
###Markdown
Visualizing a Test batch with results FFNN
###Code
test(model_1)
###Output
For FNet :
Test Loss: 0.43074727362405973
Correctly predicted per class : [853.0, 982.0, 831.0, 914.0, 758.0, 972.0, 715.0, 970.0, 976.0, 953.0], Total correctly perdicted : 8924.0
Total Predictions per class : [1000.0, 1000.0, 1000.0, 1000.0, 1000.0, 1000.0, 1000.0, 1000.0, 1000.0, 1000.0], Total predictions to be made : 10000.0
Test Accuracy of class T-shirt/top : 85.3% where 853 of 1000 were predicted correctly
Test Accuracy of class Trouser : 98.2% where 982 of 1000 were predicted correctly
Test Accuracy of class Pullover : 83.1% where 831 of 1000 were predicted correctly
Test Accuracy of class Dress : 91.4% where 914 of 1000 were predicted correctly
Test Accuracy of class Coat : 75.8% where 758 of 1000 were predicted correctly
Test Accuracy of class Sandal : 97.2% where 972 of 1000 were predicted correctly
Test Accuracy of class Shirt : 71.5% where 715 of 1000 were predicted correctly
Test Accuracy of class Sneaker : 97.0% where 970 of 1000 were predicted correctly
Test Accuracy of class Bag : 97.6% where 976 of 1000 were predicted correctly
Test Accuracy of class Ankle boot : 95.3% where 953 of 1000 were predicted correctly
Overall Test Accuracy : 89.24% where 8924 of 10000 were predicted correctly
###Markdown
CNN
###Code
test(model_2)
###Output
_____no_output_____ |
notebooks/sgd_comparison.ipynb | ###Markdown
🧪 Optimization-Toolkit [(Github)](https://github.com/haven-ai/optimization-toolkit)Use this Colab to develop and compare optimizers and models across a variety of datasets.It contains optimizers like [Adam](https://arxiv.org/pdf/1412.6980.pdf), [SLS](https://arxiv.org/abs/1905.09997), [AdaSLS](https://arxiv.org/abs/2006.06835), [SPS](https://arxiv.org/pdf/2002.10542.pdf) and [LBFGS](https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html) that can be run on standard datasets like MNIST.Run your first set of experiments with these 4 steps: 0. 🥕 **Install and Import Required Libraries**1. 🤓 **Create Datasets, Models, and Optimizers**2. 🎨 **Define list of experiments**3. 🧠 **Train and Validate**4. 📊 **Visualize the results**You can run and visualize other large-scale experiments from this [Github Repo](https://github.com/haven-ai/optimization-toolkit). Original [Colab link](https://colab.research.google.com/drive/11rFC_5nnOTb3UBztg0S5F11QyRrq483d?usp=sharingscrollTo=pEq-Mzmka1gU&uniqifier=1). 🌐 Credits Authors: - Issam H. Laradji - Sharan Vaswani - Kevin Murphy License: - Apache License 2.0 0. Install and import required libraries
###Code
# Install Libraries
!pip install -q --upgrade git+https://github.com/haven-ai/haven-ai.git
!pip install -q git+https://github.com/IssamLaradji/sls.git
!pip install -q git+https://github.com/IssamLaradji/sps.git
!pip install -q git+https://github.com/IssamLaradji/ada_sls.git
# !pip install -q pytorch-nlp
# Import Libraries
import sls, sps, adasls
import pandas as pd
import torch, copy, pprint
import numpy as np
import os, shutil, torchvision
import tqdm.notebook as tqdm
import sklearn
import torch.nn.functional as F
from sklearn import preprocessing
from sklearn.datasets import load_iris
from haven import haven_results as hr
from haven import haven_utils as hu
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from sklearn.model_selection import train_test_split
from google.colab import data_table
# from torchnlp.datasets.imdb import imdb_dataset
###Output
_____no_output_____
###Markdown
1) 🤓 Create datasets, models, and optimizers- Add your dataset in `get_dataset()`- Add your model in `get_model()`- Add your optimizer in `get_optimizer()`
###Code
# Define a set of datasets
# ------------------------
def get_dataset(dataset_dict, split):
name = dataset_dict['name']
if name == 'syn':
X, y = hu.make_binary_linear(n=1000, d=49, margin=0.5,
separable=dataset_dict['separable'])
dataset = hu.get_split_torch_dataset(X, y, split)
dataset.task = 'binary_classification'
dataset.n_output = 1
return dataset
if name == 'iris':
#X, y = sklearn.datasets.load_iris(return_X_y=True)
X, y = load_iris(return_X_y=True)
X = preprocessing.StandardScaler().fit_transform(X)
dataset = hu.get_split_torch_dataset(X, y, split)
dataset.task = 'multi_classification'
dataset.n_output = 3
return dataset
if name == 'diabetes':
X, y = sklearn.datasets.load_diabetes(return_X_y=True)
y = y/y.max()
X = preprocessing.StandardScaler().fit_transform(X)
dataset = hu.get_split_torch_dataset(X, y.astype('float'), split)
dataset.task = 'regression'
dataset.n_output = 1
return dataset
if name == 'imdb':
train = True if split=='train' else False
test = True if split=='val' else False
dataset = imdb_dataset(train=train, test=test)
X = [d['text'] for d in dataset]
y = [d['sentiment'] for d in dataset]
dataset.task = 'classification'
dataset.n_output = 1
return dataset
if name == 'fashion_mnist':
train = True if split=='train' else False
dataset = torchvision.datasets.FashionMNIST('data/', train=train, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5,), (0.5,))
]))
dataset.n_input = 784
dataset.task = 'multi_classification'
dataset.n_output = 10
return dataset
if name == 'mnist':
train = True if split=='train' else False
dataset = torchvision.datasets.MNIST('data/', train=train, download=True,
transform=torchvision.transforms.Compose([
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(
(0.5,), (0.5,))
]))
dataset.n_input = 784
dataset.task = 'multi_classification'
dataset.n_output = 10
return dataset
# Define a set of optimizers
# --------------------------
def get_optimizer(opt_dict, model, train_set, batch_size):
name = opt_dict['name']
if name == 'adam':
return torch.optim.Adam(model.parameters(), lr=opt_dict.get('lr', 1e-3))
elif name == 'adasls':
return adasls.AdaSLS(model.parameters(), c=opt_dict.get('c', .5),
n_batches_per_epoch=opt_dict.get('n_batches_per_epoch',
len(train_set)/batch_size))
elif name == 'sls':
return sls.Sls(model.parameters(), c=opt_dict.get('c', .5),
n_batches_per_epoch=opt_dict.get('n_batches_per_epoch',
len(train_set)/batch_size))
elif name == 'sps':
return sps.Sps(model.parameters(), c=opt_dict.get('c', .5))
elif name == 'sgd':
return torch.optim.SGD(model.parameters(), lr=opt_dict.get('lr', 1e-3),
momentum=opt_dict.get('momentum', 0.))
elif name == 'lbfgs':
return torch.optim.LBFGS(model.parameters(),
# tolerance_change=1.,
# tolerance_grad=1e-5,
max_iter=1,
max_eval=1,
lr=opt_dict.get('lr', 1),
line_search_fn='strong_wolfe'
)
# Define a set of models
# ----------------------
def get_model(model_dict, dataset):
name = model_dict['name']
if name == 'mlp':
return MLP(dataset, model_dict['layer_list'])
class MLP(nn.Module):
def __init__(self, dataset, layer_list):
super().__init__()
self.task = dataset.task
layer_list = [dataset.n_input] + layer_list + [dataset.n_output]
layers = [nn.Flatten()]
for i in range(len(layer_list)-1):
layers += [nn.Linear(layer_list[i], layer_list[i+1])]
self.layers = nn.Sequential(*layers)
self.n_forwards = 0
def forward(self, x):
return self.layers(x)
def compute_loss(self, X, y):
# Compute the loss based on the task
logits = self(X)
if self.task == 'binary_classification':
func = nn.BCELoss()
loss = func(logits.sigmoid().view(-1), y.float().view(-1))
if self.task == 'multi_classification':
func = nn.CrossEntropyLoss()
loss = func(logits.softmax(dim=1), y)
if self.task == 'regression':
func = nn.MSELoss()
loss = F.mse_loss(logits.view(-1), y.float().view(-1))
# Add L2 loss
w = 0.
for p in self.parameters():
w += (p**2).sum()
loss += 1e-4 * w
return loss
def compute_score(self, X, y):
# Computes the score based on the task
logits = self(X)
if self.task == 'binary_classification':
y_hat = (logits.sigmoid().view(-1) > 0.5).long()
return (y_hat == y.view(-1)).sum()
if self.task == 'multi_classification':
y_hat = logits.softmax(dim=1).argmax(dim=1).long()
return (y_hat == y).sum()
if self.task == 'regression':
return F.mse_loss(logits.view(-1), y.float().view(-1))
def compute_metrics(self, dataset):
metric_list = []
n = len(dataset)
loader = DataLoader(dataset, batch_size=100,
shuffle=False, drop_last=False)
for batch in loader:
# get batch
Xi, yi = batch
# compute loss & acc
loss = self.compute_loss(Xi, yi)
score = self.compute_score(Xi, yi)
# aggregate scores
metric_list += [{'loss':float(loss)/n, 'score':float(score)/n}]
metric_dict = pd.DataFrame(metric_list).sum().to_dict()
return metric_dict
###Output
_____no_output_____
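###Markdown
As noted in step 1, new datasets are added as extra branches inside `get_dataset()`. A hedged sketch of such a branch (the `'wine'` name and the use of sklearn's wine data are illustrative assumptions, not part of the toolkit):
###Code
# Hypothetical extra branch for get_dataset(); kept commented out because `name` and `split`
# only exist inside that function.
# if name == 'wine':
#     X, y = sklearn.datasets.load_wine(return_X_y=True)
#     X = preprocessing.StandardScaler().fit_transform(X)
#     dataset = hu.get_split_torch_dataset(X, y, split)
#     dataset.task = 'multi_classification'
#     dataset.n_output = 3
#     return dataset
###Output
_____no_output_____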
###Markdown
2) 🎨 Define list of experiments
###Code
# Specify the hyperparameters
#dataset_list = [{'name':'syn', 'separable':True, 'n_max':500}]
run_list = [0, 1]
#dataset_list = [{'name':'diabetes', 'n_max':-1}]
dataset_list = [{'name':'iris', 'n_max':-1}]
# dataset_list = [{'name':'mnist', 'n_max':1000}]
model_list = [{'name':'mlp', 'layer_list':[]}]
opt_list = [
{'name':'adasls', 'c':.5, 'batch_size':128},
{'name':'lbfgs', 'lr':1, 'batch_size':-1},
{'name':'adam', 'lr':1e-3, 'batch_size':128},
# {'name':'adam', 'lr':1e-4, 'batch_size':128},
{'name':'sps', 'c':.5, 'batch_size':128},
{'name':'sls', 'c':.5, 'batch_size':128}
]
# Create experiments
exp_list = []
for dataset in dataset_list:
for model in model_list:
for opt in opt_list:
for run in run_list:
exp_list += [{'dataset':dataset, 'model':model, 'opt':opt,
'epochs':20, 'run':run}]
print(f"Defined {len(exp_list)} experiments")
###Output
Defined 10 experiments
###Markdown
3) 🧠 Train and Validate
###Code
# Create main save directory
savedir_base = 'results'
if os.path.exists(savedir_base):
shutil.rmtree(savedir_base)
def trainval(exp_dict):
# set seed
seed = 5 + exp_dict['run']
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
# print exp dict
savedir = f'{savedir_base}/{hu.hash_dict(exp_dict)}'
hu.save_json(os.path.join(savedir, 'exp_dict.json'), exp_dict)
# Get datasets
train_set = get_dataset(exp_dict['dataset'], split='train')
val_set = get_dataset(exp_dict['dataset'], split='val')
# sample n_max examples
n_max = exp_dict['dataset']['n_max']
if n_max == -1 or n_max >= len(train_set):
ind_list = np.arange(len(train_set))
n_max = len(train_set)
else:
ind_list = np.random.choice(len(train_set), n_max, replace=False)
train_set = torch.utils.data.Subset(train_set, ind_list)
# choose full or mini-batch
batch_size = exp_dict['opt']['batch_size']
if batch_size < 0:
batch_size = n_max
batch_size = min(batch_size, len(train_set))
print(f'Dataset: {exp_dict["dataset"]["name"]} ({len(train_set)}) '
f'- Model: {exp_dict["model"]["name"]} - '
f'Opt: {exp_dict["opt"]["name"]} ({batch_size})')
# get loader
train_loader = DataLoader(train_set, batch_size=batch_size,
shuffle=True, drop_last=True)
# Load model and optimizer
model = get_model(exp_dict['model'], train_set.dataset)
opt = get_optimizer(exp_dict['opt'], model, train_set.dataset, batch_size)
score_list = []
# start training and validating
ebar = tqdm.tqdm(range(exp_dict['epochs']), leave=False)
model.n_calls = 0.
for e in ebar:
# Compute Metrics on Validation and Training Set
val_dict = model.compute_metrics(val_set)
train_dict = model.compute_metrics(train_set)
# Train a single epoch
for batch in train_loader:
# get batch
Xi, yi = batch
# define closure
def closure():
loss = model.compute_loss(Xi, yi)
if exp_dict['opt']['name'] not in ['adasls', 'sls']:
loss.backward()
model.n_calls += Xi.shape[0]
# print(Xi.shape[0])
return loss
# update parameters
opt.zero_grad()
loss = opt.step(closure=closure)
# Update and save metrics
score_dict = {}
score_dict['epoch'] = e
score_dict['val_score'] = val_dict['score']
score_dict['val_loss'] = val_dict['loss']
score_dict['train_loss'] = train_dict['loss']
score_dict['n_train'] = len(train_set)
score_dict["step_size"] = opt.state.get("step_size", {})
n_iters = len(train_loader) * (e+1)
score_dict["n_calls"] = int(model.n_calls)
score_list += [score_dict]
# Save metrics
hu.save_pkl(os.path.join(savedir, 'score_list.pkl'), score_list)
ebar.update(1)
ebar.set_description(f'Training Loss {train_dict["loss"]:.3f}')
# Run each experiment and save their results
pbar = tqdm.tqdm(exp_list)
for ei, exp_dict in enumerate(pbar):
pbar.set_description(f'Running Exp {ei+1}/{len(exp_list)} ')
trainval(exp_dict)
# Update progress bar
pbar.update(1)
###Output
_____no_output_____
###Markdown
4) 📊 Visualize results
###Code
# Plot results
rm = hr.ResultManager(
exp_list=exp_list,
savedir_base=savedir_base,
verbose=0
)
rm.get_plot_all(y_metric_list=['train_loss', 'val_loss', 'val_score', 'step_size'],
x_metric='epoch', figsize=(18,4), title_list=['dataset.name'],
legend_list=['opt.name'], groupby_list=['dataset'],
log_metric_list=['train_loss', 'val_loss'], avg_across='run')
rm.get_plot_all(y_metric_list=['train_loss', 'val_loss', 'val_score', 'step_size'],
x_metric='n_calls', figsize=(18,4), title_list=['dataset.name'],
legend_list=['opt.name'], groupby_list=['dataset'],
log_metric_list=['train_loss', 'val_loss'], avg_across='run')
data_table.DataTable(rm.get_score_df(), include_index=False, num_rows_per_page=3)
###Output
Warning: Total number of columns (26) exceeds max_columns (20) limiting to first (20) columns.
###Markdown
🔨 Debug Section
###Code
train_set = get_dataset({'name': 'mnist'}, split='train')
len(train_set)/128
1000/128
7.8*20
import pickle
p={1:2}
q={3:4}
filename="picklefile"
with open(filename, 'ab') as fp:
pickle.dump(p,fp)
pickle.dump(q,fp)
with open(filename, 'rb') as fp:
print(pickle.load(fp))
print(pickle.load(fp))
###Output
{1: 2}
{3: 4}
|
notebook/ADE.ipynb | ###Markdown
EDA Exploratory Data Analysis (EDA) is the process that allows the analyst to understand the contents of the data being used, from distributions and frequencies to correlations and more. Understanding the context of the data is also part of this process, since it answers the basic questions. 1. Import Libraries Import the libraries that will be used
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import string
###Output
_____no_output_____
###Markdown
2. Load Dataset Load the datasets produced by the earlier crawling with `tweepy`
###Code
# Load Dataset
data1 = pd.read_csv('../data/Crawling Twitter Jakarta 26 - 27.csv')
data2 = pd.read_csv('../data/Crawling Twitter Jakarta 25 - 23.csv')
data3 = pd.read_csv('../data/Crawling Twitter Jakarta 22 - 19 setengah.csv')
###Output
_____no_output_____
###Markdown
**Dataset info** Shows the number of records and the `Dtype` of each column.
###Code
# Info
for i in [data1,data2,data3]:
i.info()
print()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 63468 entries, 0 to 63467
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Tanggal 63468 non-null object
1 Tweets 63468 non-null object
2 ID 63468 non-null int64
3 Screen Name 63468 non-null object
4 Banyak Retweet 63468 non-null int64
5 Source 63468 non-null object
6 Retweet Status 63468 non-null int64
7 Hashtags 63468 non-null object
dtypes: int64(3), object(5)
memory usage: 3.9+ MB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 95490 entries, 0 to 95489
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Tanggal 95490 non-null object
1 Tweets 95490 non-null object
2 ID 95490 non-null int64
3 Screen Name 95490 non-null object
4 Banyak Retweet 95490 non-null int64
5 Source 95490 non-null object
6 Retweet Status 95490 non-null int64
7 Hashtags 95490 non-null object
dtypes: int64(3), object(5)
memory usage: 5.8+ MB
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 91321 entries, 0 to 91320
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Tanggal 91321 non-null object
1 Tweets 91321 non-null object
2 ID 91321 non-null int64
3 Screen Name 91321 non-null object
4 Banyak Retweet 91321 non-null int64
5 Source 91319 non-null object
6 Retweet Status 91321 non-null int64
7 Hashtags 91321 non-null object
dtypes: int64(3), object(5)
memory usage: 5.6+ MB
###Markdown
3. Merge Dataset Combine the separate datasets
###Code
# Merge Info
data = pd.concat([data1,data2,data3])
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 250279 entries, 0 to 91320
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Tanggal 250279 non-null object
1 Tweets 250279 non-null object
2 ID 250279 non-null int64
3 Screen Name 250279 non-null object
4 Banyak Retweet 250279 non-null int64
5 Source 250277 non-null object
6 Retweet Status 250279 non-null int64
7 Hashtags 250279 non-null object
dtypes: int64(3), object(5)
memory usage: 17.2+ MB
###Markdown
4. EDA Perform `Exploratory Data Analysis` on the data. 4.1. Tweets per day Check the number of tweets per day
###Code
# Count tweets per day
data['Tanggal'] = pd.to_datetime(data['Tanggal'])
tph = data['Tweets'].groupby(data['Tanggal'].dt.date).count()
frek = tph.values
h_index = {6:'Minggu',0:'Senin',1:'Selasa',2:'Rabu',3:'Kamis',4:'Jumat',5:"Sabtu"}
hari = [x.weekday() for x in tph.index]
hari = [h_index[x] for x in hari]
for i in range(len(hari)):
hari[i] = str(tph.index[i]) + f'\n{hari[i]}'
###Output
_____no_output_____
###Markdown
**Plotting** (Presenting the `EDA` results visually / data visualization)
###Code
# Plotting Line
plt.figure(figsize = (10,10))
sns.lineplot(range(len(frek)), frek)
for i, v in enumerate(frek.tolist()):
if i == 0 or i==2 or i ==4 or i == len(tph.values)-2:
plt.text(i-.25, v - 1000, str(v),fontsize=11)
elif i == 1 or i == 3 or i==6 or i == len(tph.values)-1:
plt.text(i-.25, v + 400, str(v),fontsize=11)
else :
plt.text(i+.07, v, str(v),fontsize=11)
plt.title('Banyak Tweet per Hari',fontsize=20)
plt.xticks(range(len(tph.values)), hari, rotation=45)
plt.xlabel('Tanggal',fontsize=16)
plt.ylabel('Frekuensi',fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
**Insight** The number of tweets peaks on Saturday and Monday. Quite surprisingly, there is a significant drop in the number of tweets on Sunday. 4.2. Tweets per hour Next, look at the number of tweets per hour.
###Code
# Count tweets per hour
tpj = []
for i in range(1,len(tph.index)) :
if i != len(tph.index)-1 :
tpj.append(data['Tanggal'][(data['Tanggal'] >= str(tph.index[i])) & (data['Tanggal']<str(tph.index[i+1]))])
else :
tpj.append(data['Tanggal'][data['Tanggal']>=str(tph.index[i])])
tpj = [x.groupby(x.dt.hour).count() for x in tpj]
###Output
_____no_output_____
###Markdown
**Plotting** (Presenting the `EDA` results visually / data visualization)
###Code
# Plotting Line
fig, axes = plt.subplots(nrows=2, ncols=4,figsize=(20,10))
for i in range(len(tpj)):
sns.lineplot(tpj[i].index.tolist(),tpj[i].values,ax=axes[i//4,i%4])
axes[i//4,i%4].set_title(f'{hari[i+1]}')
axes[i//4,i%4].set(xlabel = 'Jam', ylabel = 'Frekuensi')
plt.tight_layout()
#fig.suptitle('Banyak Tweet per Jam',fontsize=24)
plt.show()
###Output
_____no_output_____
###Markdown
**Insight** Users tweet the most between 10:00 and 15:00; the number of tweets then declines from 15:00 to 20:00, rises again at 20:00, and falls back around 21:00/22:00. 4.3. Tweet vs. Retweet ratio Next, compare the numbers of tweets and retweets.
###Code
# Compute the tweet vs. retweet ratio
r_stat = data['Retweet Status'].groupby(data['Retweet Status']).count()
temp = r_stat.values
###Output
_____no_output_____
###Markdown
**Plotting** (Presenting the `EDA` results visually / data visualization)
###Code
# Plotting Pie
def func(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.1f}%\n{:d}".format(pct, absolute)
plt.figure(figsize = (8,8))
plt.pie(temp,explode=(0.1,0),labels=['Tweet','Retweet'],shadow=True,colors=['#A3FBFF','#ADFFA3'],
autopct=lambda pct: func(pct, temp),startangle=90)
plt.title('Perbandingan Jumlah Tweet dan Retweet',fontsize=18)
plt.axis('equal')
plt.legend(fontsize=11)
plt.show()
###Output
_____no_output_____
###Markdown
4.4. Most frequent hashtags Look at the most frequent hashtags.
###Code
# Count the hashtags
hashtag = data['Hashtags'].tolist()
temp = []
freks = []
for x in hashtag:
if x != []:
x = x.translate(str.maketrans('', '', string.punctuation))
x = x.lower().split()
for i in x :
if i not in temp :
temp.append(i)
freks.append(1)
else :
freks[temp.index(i)] += 1
hashtag_ = pd.DataFrame({'Hashtag':temp,'Frekuensi':freks})
hashtag_ = hashtag_.sort_values(by='Frekuensi', ascending=False)
###Output
_____no_output_____
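###Markdown
(Optional aside, not part of the original analysis: the same hashtag frequencies can be computed more compactly with `collections.Counter`, reusing the `hashtag` list built above; `hashtag_alt` is an illustrative name.)
###Code
from collections import Counter
# Count hashtag tokens in one pass and build an equivalent frequency table
tokens = [t for x in hashtag for t in x.translate(str.maketrans('', '', string.punctuation)).lower().split()]
hashtag_alt = pd.DataFrame(Counter(tokens).most_common(), columns=['Hashtag', 'Frekuensi'])
###Output
_____no_output_____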
###Markdown
**Plotting** (Presenting the `EDA` results visually / data visualization)
###Code
# Plot the 20 most frequent hashtags
hmm = hashtag_.head(20)
plt.figure(figsize = (10,10))
sns.barplot(x = hmm['Hashtag'],y = hmm['Frekuensi'])
for i, v in enumerate(hmm['Frekuensi'].tolist()):
plt.text(i-len(str(v))/10, v + 50, str(v),fontsize=10)
plt.title('Hashtag Terbanyak',fontsize=20)
plt.xticks(rotation=90)
plt.xlabel('Hashtag',fontsize=16)
plt.ylabel('Frekuensi',fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
4.5. Most common Source (Device) Look at the most common sources/devices used by the users.
###Code
# Source count
source = data['Source'].groupby(data['Source']).count()
source = pd.DataFrame({'Source' : source.index.tolist(),'Frekuensi' : source.values})
source = source.sort_values(by='Frekuensi', ascending=False)
###Output
_____no_output_____
###Markdown
**Plotting** (Presenting the `EDA` results visually / data visualization)
###Code
# Plot 20 Source terbanyak
hm = source.head(20)
plt.figure(figsize = (10,10))
sns.barplot(x = hm['Source'],y = hm['Frekuensi'])
for i, v in enumerate(hm['Frekuensi'].tolist()):
plt.text(i-len(str(v))/10, v + 1000, str(v),fontsize=10)
plt.title('Source Terbanyak',fontsize=20)
plt.xticks(rotation=90)
plt.xlabel('Source',fontsize=16)
plt.ylabel('Frekuensi',fontsize=16)
plt.show()
###Output
_____no_output_____ |
notebooks/ROI/02_Nearshore/03_HYCREWW_RunUp.ipynb | ###Markdown
... ***CURRENTLY UNDER DEVELOPMENT*** ... HyCReWW runup estimation inputs required: * Nearshore reconstructed historical storms * Nearshore reconstructed simulated storms * Historical water levels * Synthetic water levels in this notebook: * HyCReWW runup estimation of historical and synthetic events * Extreme value analysis and validation Workflow: **HyCReWW** provides wave-driven run-up estimations along coral reef-lined shorelines under a wide range of fringing reef morphologies and offshore forcing characteristics. The metamodel is based on two models: (a) a full factorial design of recent XBeach Non-Hydrostatic simulations under different reef configurations and offshore wave and water level conditions (Pearson et al., 2017); and (b) Radial Basis Functions (RBFs) for approximating the non-linear run-up function of the set of multivariate parameters: Runup = RBF($\eta_0$, $H_0$, ${H_0/L_0}$, $\beta_f$, $W_{reef}$, $\beta_b$, $c_f$), where the hydrodynamic variables are offshore water level ($\eta_0$), significant wave height ($H_0$), and wave steepness (${H_0/L_0}$); the reef morphologic parameters include fore reef slope ($\beta_f$), reef flat width ($W_{reef}$), beach slope ($\beta_b$), and seabed roughness ($c_f$). ${L_0}$ is the deep-water wavelength $L_0 = gT_p^2/(2\pi)$, and $T_p$ is the peak period. Beach crest elevation ($z_b$) was fixed at a height of 30 m to focus on run-up as a proxy for coastal inundation.
###Code
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# common
import os
import os.path as op
# pip
import numpy as np
import pandas as pd
import xarray as xr
from scipy.interpolate import griddata
# DEV: override installed teslakit
import sys
sys.path.insert(0, op.join(os.path.abspath(''), '..', '..', '..'))
# teslakit
from teslakit.database import Database
from teslakit.rbf import RBF_Interpolation, RBF_Reconstruction
from teslakit.mda import Normalize, MaxDiss_Simplified_NoThreshold, nearest_indexes
from teslakit.plotting.extremes import Plot_ReturnPeriodValidation
###Output
_____no_output_____
###Markdown
Database and Site parameters
###Code
# --------------------------------------
# Teslakit database
p_data = r'/Users/nico/Projects/TESLA-kit/TeslaKit/data'
db = Database(p_data)
# set site
db.SetSite('ROI')
###Output
_____no_output_____
###Markdown
HyCReWW - RBFs configuration Run-up has been calculated for a total of 15 scenarios (hs, hs_lo) and a set of reef characteristics
###Code
# 15 scenarios of runup model execution
# RBF wave conditions
rbf_hs = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5]
rbf_hs_lo = [0.005, 0.025, 0.05, 0.005, 0.025, 0.05, 0.005, 0.025, 0.05, 0.005, 0.025, 0.05, 0.005, 0.025, 0.05]
# load trained RBF coefficients and variables min. and max. limits
var_lims, rbf_coeffs = db.Load_HYCREWW()
# reef characteristics
reef_cs = {
'rslope': 0.0505,
'bslope': 0.1667,
'rwidth': 250,
'cf': 0.0105,
}
# rbf variables names: level is our teslakit input data
rbf_vns = ['level', 'rslope', 'bslope', 'rwidth', 'cf']
###Output
_____no_output_____
###Markdown
HyCReWW methodology library
###Code
def HyCReWW_RU(df):
'''
Calculates runup using HyCReWW RBFs (level, reef variables)
and a linear interpolation (hs, hs_lo2) to input dataset
var_lims - HyCReWW variables min and max limits
rbf_coeffs - HyCReWW rbf coefficients
reef_cs - reef characteristics
rbf_vns - rbf variables
df - input pandas.dataframe (time,), vars: level, hs, tp, dir, hs_lo2
'''
# 1. Prepare input data
# -----------------------------------------------------------------
# add reef characteristics to input dataset
for p in reef_cs.keys(): df[p] = reef_cs[p]
# filter data: all variables inside limits
lp = []
for vn in var_lims.keys():
ps = (df[vn] > var_lims[vn][0]) & (df[vn] < var_lims[vn][1])
lp.append(ps)
ix_in = np.where(np.all(lp, axis=0))[0]
# select dataset to interpolate at RBFs
ds_in = df.iloc[ix_in]
ds_rbf_in = ds_in[rbf_vns]
# 2. Calculate RUNUP with input LEVEL for the 15 RBF scenarios
# -----------------------------------------------------------------
# parameters
ix_sc = [0, 1, 2, 3, 4]
ix_dr = []
minis = [var_lims[x][0] for x in rbf_vns]
maxis = [var_lims[x][1] for x in rbf_vns]
# Normalize data
ds_nm ,_ ,_ = Normalize(ds_rbf_in.values, ix_sc, ix_dr, minis=minis, maxis=maxis)
# RBF interpolate level for the 15 scenarios
aux_1 = []
for rc in rbf_coeffs:
ro = RBF_Interpolation(rc['constant'], rc['coeff'], rc['nodes'], ds_nm.T)
aux_1.append(ro)
ru_z = np.array(aux_1)
# 3. interpolate RUNUP for input WAVES with the 15 RBF scenarios
# -----------------------------------------------------------------
# RU linear interpolation (15 sets: hs, hs_lo -> runup)
#ru_in = np.zeros(ds_in.shape[0]) * np.nan
#for c, (_, r) in enumerate(ds_in.iterrows()):
# ru_in[c] = griddata((rbf_hs, rbf_hs_lo), ru_z[:,c], (r['hs'], r['hs_lo2']), method='linear')
# RU linear interpolation (15 sets: hs, hs_lo -> runup) (*faster than loop)
def axis_ipl_rbfs(inp):
return griddata((rbf_hs, rbf_hs_lo), inp[:15], (inp[15], inp[16]), method='linear')
inp = np.concatenate((ru_z, ds_in[['hs', 'hs_lo2']].T))
ru_in = np.apply_along_axis(axis_ipl_rbfs, 0, inp)
# 4. Prepare output
# -----------------------------------------------------------------
# add level to run_up
ru_in = ru_in + ds_in['level']
# return runup
ru_out = np.zeros(len(df.index)) * np.nan
ru_out[ix_in] = ru_in
xds_ru = xr.Dataset({'runup': (('time',), ru_out)}, coords={'time': df.index})
return xds_ru
###Output
_____no_output_____
###Markdown
HyCReWW MDA-RBF statistical wrap
###Code
def mdarbf_HyCReWW(dataset):
'''
Solves HyCReWW methodology using a MDA-RBFs statistical wrap.
This results in a substantial reduction in computational cost.
A statistically representative subset will be selected with the MaxDiss algorithm from the input dataset.
This subset will be solved using HyCReWW methodology.
This subset and its runup HyCReWW output will be used to fit Radial Basis Functions.
Using RBFs, the entire input dataset is statistically solved
'''
base_dataset = dataset.copy()
# 1. MaxDiss
# -----------------------------------------------------------------
vns_mda = ['hs', 'hs_lo2','level'] # variables used at classification
n_subset = 100
ix_scalar = [0, 1, 2]
ix_directional = []
# remove nan data from input dataset
dataset.dropna(inplace=True)
# data for MDA
data = dataset[vns_mda]
# MDA algorithm
sel = MaxDiss_Simplified_NoThreshold(data.values[:], n_subset, ix_scalar, ix_directional)
subset = pd.DataFrame(data=sel, columns=vns_mda)
# fill subset variables
ix_n = nearest_indexes(subset[vns_mda].values[:], data.values[:], ix_scalar, ix_directional)
vns_fill = ['tp', 'dir']
for vn in vns_fill:
subset[vn] = dataset[vn].iloc[ix_n].values[:]
# calculate runup with HyCReWW
ru_sel = HyCReWW_RU(subset)
target = ru_sel.runup.to_dataframe()
# clean subset variables
subset.drop(columns=['rslope', 'bslope', 'rwidth', 'cf'], inplace=True)
# clean nans from runup target and input subset
ix_rm = np.where(np.isnan(target.values))[0]
subset.drop(index=ix_rm, inplace=True)
target.drop(index=ix_rm, inplace=True)
# 2. RBF RunUp Reconstruction
# -----------------------------------------------------------------
vs_recon = ['hs', 'hs_lo2','level']
subset_r = subset[vs_recon]
dataset_r = base_dataset[vs_recon] # to maintain input indexes and put nan where there is no output
ix_scalar_subset = [0, 1, 2]
ix_scalar_target = [0]
recon = RBF_Reconstruction(
subset_r.values, ix_scalar_subset, [],
target.values, ix_scalar_target, [],
dataset_r.values
)
xds_ru = xr.Dataset({'runup': (('time',), recon.squeeze())}, coords={'time': base_dataset.index})
return xds_ru
###Output
_____no_output_____
###Markdown
HyCReWW RBF Interpolation: Historical
###Code
# Load complete historical data and nearshore waves
# offshore level
level = db.Load_HIST_OFFSHORE(vns=['level'], decode_times=True)
# nearshore waves
waves = db.Load_HIST_NEARSHORE(vns=['Hs', 'Tp', 'Dir'], decode_times=True)
waves["time"] = waves["time"].dt.round("H") # fix waves times: round to nearest hour
# use same time for nearshore calculations
level = level.sel(time=waves.time)
# prepare data for HyCReWW
waves = waves.rename_vars({"Hs": "hs", "Tp": "tp", 'Dir':'dir'}) # rename vars
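# hs_lo2 below is the wave steepness H0/L0 with deep-water wavelength L0 = g*Tp^2/(2*pi); the factor 1.5613 ≈ 9.81/(2*pi)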
waves['hs_lo2'] = waves['hs']/(1.5613*waves['tp']**2) # calc. hs_lo2
waves['level'] = level['level'] # add level
dataset = waves[['hs', 'tp', 'dir', 'level', 'hs_lo2']].to_dataframe()
# calculate runup with HyCReWW
#ru_hist = HyCReWW_RU(dataset)
# calculate runup with HyCReWW MDA-RBF wrap
ru_hist = mdarbf_HyCReWW(dataset)
# store historical runup
db.Save_HIST_NEARSHORE(ru_hist)
###Output
MaxDiss dataset: 116755 --> 100
MDA centroids: 100/100
ix_scalar: 0, optimization: 0.64 | interpolation: 33.96
###Markdown
HyCReWW RBF Interpolation: Simulation
###Code
# offshore level
level = db.Load_SIM_OFFSHORE_all(vns=['level'], decode_times=False)
# nearshore waves
waves = db.Load_SIM_NEARSHORE_all(vns=['Hs', 'Tp', 'Dir', 'max_storms'], decode_times=False)
# prepare data for hycreww
waves = waves.rename_vars({"Hs": "hs", "Tp": "tp", 'Dir':'dir'}) # rename vars
waves['hs_lo2'] = waves['hs']/(1.5613*waves['tp']**2) # calc. hs_lo2
waves['level'] = level['level'] # add level
# fix simulation times (cftimes)
tmpt = db.Load_SIM_NEARSHORE_all(vns=['Hs'], decode_times=True, use_cftime=True)
waves['time'] = tmpt['time']
# iterate simulations
for n in waves.n_sim:
waves_n = waves.sel(n_sim=int(n))
dataset = waves_n[['hs', 'tp', 'dir', 'level', 'hs_lo2']].to_dataframe()
# calculate runup with HyCReWW
#ru_sim_n = HyCReWW_RU(dataset)
# calculate runup with HyCReWW MDA-RBF wrap
ru_sim_n = mdarbf_HyCReWW(dataset)
# store simulation runup
db.Save_SIM_NEARSHORE(ru_sim_n, int(n))
print('simulation {0} processed.'.format(int(n)))
###Output
_____no_output_____
###Markdown
Methodology Validation: Annual Maxima
###Code
# load all simulations
ru_sims = db.Load_SIM_NEARSHORE_all(vns=['runup'], decode_times=True, use_cftime=True)
# compare historical and simulations runup annual maxima
hist_A = ru_hist['runup'].groupby('time.year').max(dim='time')
sim_A = ru_sims['runup'].groupby('time.year').max(dim='time')
# Return Period historical vs. simulations
Plot_ReturnPeriodValidation(hist_A, sim_A.transpose());
###Output
_____no_output_____
week0_05_Bias_variance_and_CrossValidation/week0_05_BiasVariance.ipynb
*Credits: this notebook (shared under MIT license) originates from the [ML course at ICL](https://github.com/yandexdataschool/MLatImperial2020) held by Yandex School of Data Analysis. Special thanks to the course team for making it available online.* week0_05: Bias-Variance decomposition example
###Code
import numpy as np
import matplotlib.pyplot as plt
def true_dep(x):
return np.cos((x - 0.2)**2) + 0.2 / (1 + 50 * (x - 0.3)**2)
x_true = np.linspace(0, 1, 100)
y_true = true_dep(x_true)
def generate_n_datasets(num_datasets, dataset_length, noise_power=0.02):
shape = (num_datasets, dataset_length, 1)
x = np.random.uniform(size=shape)
y = true_dep(x) + np.random.normal(scale=noise_power, size=shape)
return x, y
x, y = generate_n_datasets(1, 30)
plt.scatter(x.squeeze(), y.squeeze(), s=20, c='orange')
plt.plot(x_true, y_true, c='c', linewidth=1.5);
from copy import deepcopy
from tqdm import tqdm, trange
def calc_bias2_variance(model, datasets_X, datasets_y):
preds = []
for X, y in tqdm(zip(datasets_X, datasets_y), total=len(datasets_X)):
m = deepcopy(model)
m.fit(X, y)
preds.append(m.predict(x_true[:,np.newaxis]).squeeze())
preds = np.array(preds)
mean_pred = preds.mean(axis=0)
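# bias^2: squared gap between the truth and the average prediction across datasets;
# variance: average squared spread of the individual predictions around that average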
bias2 = (y_true - mean_pred)**2
variance = ((preds - mean_pred[np.newaxis,...])**2).mean(axis=0)
return bias2, variance, preds
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
###Output
_____no_output_____
###Markdown
As you can see, we are using the `Pipeline` once again to preprocess the feature space and fit the model in a single step.
###Code
MAX_POWER = 6
powers = np.arange(1, MAX_POWER+1)
bias2, variance, preds = [], [], []
for p in powers:
model = Pipeline([
('poly', PolynomialFeatures(degree=p)),
('linear', LinearRegression())
])
b2, v, p = calc_bias2_variance(model, *generate_n_datasets(1000, 20))
bias2.append(b2)
variance.append(v)
preds.append(p)
bias2 = np.array(bias2)
variance = np.array(variance)
ncols = 4
nrows = int(np.ceil(len(powers) / ncols))
plt.figure(figsize=(18, 3.5 * nrows))
yrange = y_true.max() - y_true.min()
for i, (pred, pow) in tqdm(enumerate(zip(preds, powers), 1)):
plt.subplot(nrows, ncols, i)
for p in pred[np.random.choice(len(pred), size=200, replace=False)]:
plt.plot(x_true, p, linewidth=0.05, c='b');
plt.plot(x_true, y_true, linewidth=3, label='Truth', c='r')
plt.ylim(y_true.min() - 0.5 * yrange, y_true.max() + 0.5 * yrange)
plt.title('power = {}'.format(pow))
plt.legend();
plt.plot(powers, bias2.mean(axis=1), label='bias^2')
plt.plot(powers, variance.mean(axis=1), label='variance')
plt.legend()
plt.yscale('log')
plt.xlabel('power');
###Output
_____no_output_____
###Markdown
Extra: Runge's phenomenonSpeaking of polynomial features, going to higher degrees does not always improve accuracy. This effect was discovered by Carl David Tolmé Runge (1901) while exploring the behavior of errors when using polynomial interpolation to approximate certain functions. Refer to the [wikipedia page](https://en.wikipedia.org/wiki/Runge%27s_phenomenon) for more info. To observe this phenomenon, let's run the exact same code as above, but with an increased maximum power of the polynomial.
###Code
MAX_POWER = 8
powers = np.arange(1, MAX_POWER+1)
bias2, variance, preds = [], [], []
for p in powers:
model = Pipeline([
('poly', PolynomialFeatures(degree=p)),
('linear', LinearRegression())
])
b2, v, p = calc_bias2_variance(model, *generate_n_datasets(1000, 20))
bias2.append(b2)
variance.append(v)
preds.append(p)
bias2 = np.array(bias2)
variance = np.array(variance)
ncols = 4
nrows = int(np.ceil(len(powers) / ncols))
plt.figure(figsize=(18, 3.5 * nrows))
yrange = y_true.max() - y_true.min()
for i, (pred, pow) in tqdm(enumerate(zip(preds, powers), 1)):
plt.subplot(nrows, ncols, i)
for p in pred[np.random.choice(len(pred), size=200, replace=False)]:
plt.plot(x_true, p, linewidth=0.05, c='b');
plt.plot(x_true, y_true, linewidth=3, label='Truth', c='r')
plt.ylim(y_true.min() - 0.5 * yrange, y_true.max() + 0.5 * yrange)
plt.title('power = {}'.format(pow))
plt.legend();
plt.plot(powers, bias2.mean(axis=1), label='bias^2')
plt.plot(powers, variance.mean(axis=1), label='variance')
plt.legend()
plt.yscale('log')
plt.xlabel('power');
###Output
_____no_output_____
Excercises/finding_donors/.ipynb_checkpoints/finding_donors -checkpoint.ipynb
Supervised Learning Project: Finding Donors for *CharityML* In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Please specify WHICH VERSION OF PYTHON you are using when submitting this notebook. Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will employ several supervised algorithms of your choice to accurately model individuals' income using data collected from the 1994 U.S. Census. You will then choose the best candidate algorithm from preliminary results and further optimize this algorithm to best model the data. Your goal with this implementation is to construct a model that accurately predicts whether an individual makes more than $50,000. This sort of task can arise in a non-profit setting, where organizations survive on donations. Understanding an individual's income can help a non-profit better understand how large of a donation to request, or whether or not they should reach out to begin with. While it can be difficult to determine an individual's general income bracket directly from public sources, we can (as we will see) infer this value from other publically available features. The dataset for this project originates from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Census+Income). The datset was donated by Ron Kohavi and Barry Becker, after being published in the article _"Scaling Up the Accuracy of Naive-Bayes Classifiers: A Decision-Tree Hybrid"_. You can find the article by Ron Kohavi [online](https://www.aaai.org/Papers/KDD/1996/KDD96-033.pdf). The data we investigate here consists of small changes to the original dataset, such as removing the `'fnlwgt'` feature and records with missing or ill-formatted entries. ---- Exploring the DataRun the code cell below to load necessary Python libraries and load the census data. Note that the last column from this dataset, `'income'`, will be our target label (whether an individual makes more than, or at most, $50,000 annually). All other columns are features about each individual in the census database.
###Code
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from time import time
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualization code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Census dataset
data = pd.read_csv("census.csv")
# Success - Display the first record
display(data.head(n=1))
###Output
_____no_output_____
###Markdown
Implementation: Data ExplorationA cursory investigation of the dataset will determine how many individuals fit into either group, and will tell us about the percentage of these individuals making more than \$50,000. In the code cell below, you will need to compute the following:- The total number of records, `'n_records'`- The number of individuals making more than \$50,000 annually, `'n_greater_50k'`.- The number of individuals making at most \$50,000 annually, `'n_at_most_50k'`.- The percentage of individuals making more than \$50,000 annually, `'greater_percent'`.** HINT: ** You may need to look at the table above to understand how the `'income'` entries are formatted.
###Code
# TODO: Total number of records
n_records = len(data)
# TODO: Number of records where individual's income is more than $50,000
n_greater_50k = len( data[data.income == ">50K"] )
# TODO: Number of records where individual's income is at most $50,000
n_at_most_50k = len( data[data.income == "<=50K"] )
# TODO: Percentage of individuals whose income is more than $50,000
greater_percent = (float(n_greater_50k) / float(n_records)) * 100
# Print the results
print("Total number of records: {}".format(n_records))
print("Individuals making more than $50,000: {}".format(n_greater_50k))
print("Individuals making at most $50,000: {}".format(n_at_most_50k))
print("Percentage of individuals making more than $50,000: {}%".format(greater_percent))
###Output
Total number of records: 45222
Individuals making more than $50,000: 11208
Individuals making at most $50,000: 34014
Percentage of individuals making more than $50,000: 24.78439697492371%
###Markdown
** Featureset Exploration *** **age**: continuous. * **workclass**: Private, Self-emp-not-inc, Self-emp-inc, Federal-gov, Local-gov, State-gov, Without-pay, Never-worked. * **education**: Bachelors, Some-college, 11th, HS-grad, Prof-school, Assoc-acdm, Assoc-voc, 9th, 7th-8th, 12th, Masters, 1st-4th, 10th, Doctorate, 5th-6th, Preschool. * **education-num**: continuous. * **marital-status**: Married-civ-spouse, Divorced, Never-married, Separated, Widowed, Married-spouse-absent, Married-AF-spouse. * **occupation**: Tech-support, Craft-repair, Other-service, Sales, Exec-managerial, Prof-specialty, Handlers-cleaners, Machine-op-inspct, Adm-clerical, Farming-fishing, Transport-moving, Priv-house-serv, Protective-serv, Armed-Forces. * **relationship**: Wife, Own-child, Husband, Not-in-family, Other-relative, Unmarried. * **race**: Black, White, Asian-Pac-Islander, Amer-Indian-Eskimo, Other. * **sex**: Female, Male. * **capital-gain**: continuous. * **capital-loss**: continuous. * **hours-per-week**: continuous. * **native-country**: United-States, Cambodia, England, Puerto-Rico, Canada, Germany, Outlying-US(Guam-USVI-etc), India, Japan, Greece, South, China, Cuba, Iran, Honduras, Philippines, Italy, Poland, Jamaica, Vietnam, Mexico, Portugal, Ireland, France, Dominican-Republic, Laos, Ecuador, Taiwan, Haiti, Columbia, Hungary, Guatemala, Nicaragua, Scotland, Thailand, Yugoslavia, El-Salvador, Trinadad&Tobago, Peru, Hong, Holand-Netherlands. ---- Preparing the DataBefore data can be used as input for machine learning algorithms, it often must be cleaned, formatted, and restructured — this is typically known as **preprocessing**. Fortunately, for this dataset, there are no invalid or missing entries we must deal with, however, there are some qualities about certain features that must be adjusted. This preprocessing can help tremendously with the outcome and predictive power of nearly all learning algorithms. Transforming Skewed Continuous FeaturesA dataset may sometimes contain at least one feature whose values tend to lie near a single number, but will also have a non-trivial number of vastly larger or smaller values than that single number. Algorithms can be sensitive to such distributions of values and can underperform if the range is not properly normalized. With the census dataset two features fit this description: '`capital-gain'` and `'capital-loss'`. Run the code cell below to plot a histogram of these two features. Note the range of the values present and how they are distributed.
###Code
# Split the data into features and target label
income_raw = data['income']
features_raw = data.drop('income', axis = 1)
# Visualize skewed continuous features of original data
vs.distribution(data)
###Output
_____no_output_____
###Markdown
For highly-skewed feature distributions such as `'capital-gain'` and `'capital-loss'`, it is common practice to apply a logarithmic transformation on the data so that the very large and very small values do not negatively affect the performance of a learning algorithm. Using a logarithmic transformation significantly reduces the range of values caused by outliers. Care must be taken when applying this transformation however: The logarithm of `0` is undefined, so we must translate the values by a small amount above `0` to apply the the logarithm successfully.Run the code cell below to perform a transformation on the data and visualize the results. Again, note the range of values and how they are distributed.
###Code
# Log-transform the skewed features
skewed = ['capital-gain', 'capital-loss']
features_log_transformed = pd.DataFrame(data = features_raw)
features_log_transformed[skewed] = features_raw[skewed].apply(lambda x: np.log(x + 1))
# Visualize the new log distributions
vs.distribution(features_log_transformed, transformed = True)
###Output
_____no_output_____
###Markdown
Normalizing Numerical FeaturesIn addition to performing transformations on features that are highly skewed, it is often good practice to perform some type of scaling on numerical features. Applying a scaling to the data does not change the shape of each feature's distribution (such as `'capital-gain'` or `'capital-loss'` above); however, normalization ensures that each feature is treated equally when applying supervised learners. Note that once scaling is applied, observing the data in its raw form will no longer have the same original meaning, as exampled below.Run the code cell below to normalize each numerical feature. We will use [`sklearn.preprocessing.MinMaxScaler`](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html) for this.
###Code
# Import sklearn.preprocessing.StandardScaler
from sklearn.preprocessing import MinMaxScaler
# Initialize a scaler, then apply it to the features
scaler = MinMaxScaler() # default=(0, 1)
numerical = ['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week']
features_log_minmax_transform = pd.DataFrame(data = features_log_transformed)
features_log_minmax_transform[numerical] = scaler.fit_transform(features_log_transformed[numerical])
# Show an example of a record with scaling applied
display(features_log_minmax_transform.head(n = 5))
###Output
_____no_output_____
###Markdown
Implementation: Data PreprocessingFrom the table in **Exploring the Data** above, we can see there are several features for each record that are non-numeric. Typically, learning algorithms expect input to be numeric, which requires that non-numeric features (called *categorical variables*) be converted. One popular way to convert categorical variables is by using the **one-hot encoding** scheme. One-hot encoding creates a _"dummy"_ variable for each possible category of each non-numeric feature. For example, assume `someFeature` has three possible entries: `A`, `B`, or `C`. We then encode this feature into `someFeature_A`, `someFeature_B` and `someFeature_C`.| | someFeature | | someFeature_A | someFeature_B | someFeature_C || :-: | :-: | | :-: | :-: | :-: || 0 | B | | 0 | 1 | 0 || 1 | C | ----> one-hot encode ----> | 0 | 0 | 1 || 2 | A | | 1 | 0 | 0 |Additionally, as with the non-numeric features, we need to convert the non-numeric target label, `'income'` to numerical values for the learning algorithm to work. Since there are only two possible categories for this label ("50K"), we can avoid using one-hot encoding and simply encode these two categories as `0` and `1`, respectively. In code cell below, you will need to implement the following: - Use [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummiespandas.get_dummies) to perform one-hot encoding on the `'features_log_minmax_transform'` data. - Convert the target label `'income_raw'` to numerical entries. - Set records with "50K" to `1`.
###Code
# TODO: One-hot encode the 'features_log_minmax_transform' data using pandas.get_dummies()
features_final = pd.get_dummies(features_log_minmax_transform)
# TODO: Encode the 'income_raw' data to numerical values
income = income_raw.replace({'<=50K':0, '>50K':1})
# Print the number of features after one-hot encoding
encoded = list(features_final.columns)
print("{} total features after one-hot encoding.".format(len(encoded)))
# Uncomment the following line to see the encoded feature names
print (encoded)
###Output
103 total features after one-hot encoding.
['age', 'education-num', 'capital-gain', 'capital-loss', 'hours-per-week', 'workclass_ Federal-gov', 'workclass_ Local-gov', 'workclass_ Private', 'workclass_ Self-emp-inc', 'workclass_ Self-emp-not-inc', 'workclass_ State-gov', 'workclass_ Without-pay', 'education_level_ 10th', 'education_level_ 11th', 'education_level_ 12th', 'education_level_ 1st-4th', 'education_level_ 5th-6th', 'education_level_ 7th-8th', 'education_level_ 9th', 'education_level_ Assoc-acdm', 'education_level_ Assoc-voc', 'education_level_ Bachelors', 'education_level_ Doctorate', 'education_level_ HS-grad', 'education_level_ Masters', 'education_level_ Preschool', 'education_level_ Prof-school', 'education_level_ Some-college', 'marital-status_ Divorced', 'marital-status_ Married-AF-spouse', 'marital-status_ Married-civ-spouse', 'marital-status_ Married-spouse-absent', 'marital-status_ Never-married', 'marital-status_ Separated', 'marital-status_ Widowed', 'occupation_ Adm-clerical', 'occupation_ Armed-Forces', 'occupation_ Craft-repair', 'occupation_ Exec-managerial', 'occupation_ Farming-fishing', 'occupation_ Handlers-cleaners', 'occupation_ Machine-op-inspct', 'occupation_ Other-service', 'occupation_ Priv-house-serv', 'occupation_ Prof-specialty', 'occupation_ Protective-serv', 'occupation_ Sales', 'occupation_ Tech-support', 'occupation_ Transport-moving', 'relationship_ Husband', 'relationship_ Not-in-family', 'relationship_ Other-relative', 'relationship_ Own-child', 'relationship_ Unmarried', 'relationship_ Wife', 'race_ Amer-Indian-Eskimo', 'race_ Asian-Pac-Islander', 'race_ Black', 'race_ Other', 'race_ White', 'sex_ Female', 'sex_ Male', 'native-country_ Cambodia', 'native-country_ Canada', 'native-country_ China', 'native-country_ Columbia', 'native-country_ Cuba', 'native-country_ Dominican-Republic', 'native-country_ Ecuador', 'native-country_ El-Salvador', 'native-country_ England', 'native-country_ France', 'native-country_ Germany', 'native-country_ Greece', 'native-country_ Guatemala', 'native-country_ Haiti', 'native-country_ Holand-Netherlands', 'native-country_ Honduras', 'native-country_ Hong', 'native-country_ Hungary', 'native-country_ India', 'native-country_ Iran', 'native-country_ Ireland', 'native-country_ Italy', 'native-country_ Jamaica', 'native-country_ Japan', 'native-country_ Laos', 'native-country_ Mexico', 'native-country_ Nicaragua', 'native-country_ Outlying-US(Guam-USVI-etc)', 'native-country_ Peru', 'native-country_ Philippines', 'native-country_ Poland', 'native-country_ Portugal', 'native-country_ Puerto-Rico', 'native-country_ Scotland', 'native-country_ South', 'native-country_ Taiwan', 'native-country_ Thailand', 'native-country_ Trinadad&Tobago', 'native-country_ United-States', 'native-country_ Vietnam', 'native-country_ Yugoslavia']
###Markdown
Shuffle and Split DataNow all _categorical variables_ have been converted into numerical features, and all numerical features have been normalized. As always, we will now split the data (both features and their labels) into training and test sets. 80% of the data will be used for training and 20% for testing.Run the code cell below to perform this split.
###Code
# Import train_test_split
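# (note: on newer scikit-learn versions this import lives in sklearn.model_selection)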
from sklearn.cross_validation import train_test_split
# Split the 'features' and 'income' data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(features_final,
income,
test_size = 0.2,
random_state = 0)
# Show the results of the split
print("Training set has {} samples.".format(X_train.shape[0]))
print("Testing set has {} samples.".format(X_test.shape[0]))
###Output
Training set has 36177 samples.
Testing set has 9045 samples.
###Markdown
---- Evaluating Model PerformanceIn this section, we will investigate four different algorithms, and determine which is best at modeling the data. Three of these algorithms will be supervised learners of your choice, and the fourth algorithm is known as a *naive predictor*. Metrics and the Naive Predictor*CharityML*, equipped with their research, knows individuals that make more than \$50,000 are most likely to donate to their charity. Because of this, *CharityML* is particularly interested in predicting who makes more than \$50,000 accurately. It would seem that using **accuracy** as a metric for evaluating a particular model's performace would be appropriate. Additionally, identifying someone that *does not* make more than \$50,000 as someone who does would be detrimental to *CharityML*, since they are looking to find individuals willing to donate. Therefore, a model's ability to precisely predict those that make more than \$50,000 is *more important* than the model's ability to **recall** those individuals. We can use **F-beta score** as a metric that considers both precision and recall:$$ F_{\beta} = (1 + \beta^2) \cdot \frac{precision \cdot recall}{\left( \beta^2 \cdot precision \right) + recall} $$In particular, when $\beta = 0.5$, more emphasis is placed on precision. This is called the **F$_{0.5}$ score** (or F-score for simplicity).Looking at the distribution of classes (those who make at most $\$50,000$, and those who make more), it's clear most individuals do not make more than $\$50,000$. This can greatly affect **accuracy**, since we could simply say *"this person does not make more than $\$50,000$"* and generally be right, without ever looking at the data! Making such a statement would be called **naive**, since we have not considered any information to substantiate the claim. It is always important to consider the *naive prediction* for your data, to help establish a benchmark for whether a model is performing well. That been said, using that prediction would be pointless: If we predicted all people made less than \$50,000, *CharityML* would identify no one as donors. Note: Recap of accuracy, precision, recall**Accuracy:** measures how often the classifier makes the correct prediction. It’s the ratio of the number of correct predictions to the total number of predictions (the number of test data points).**Precision:** tells us what proportion of messages we classified as spam, actually were spam.It is a ratio of true positives(words classified as spam, and which are actually spam) to all positives(all words classified as spam, irrespective of whether that was the correct classificatio), in other words it is the ratio of`[True Positives/(True Positives + False Positives)]`**Recall(sensitivity):** tells us what proportion of messages that actually were spam were classified by us as spam.It is a ratio of true positives(words classified as spam, and which are actually spam) to all the words that were actually spam, in other words it is the ratio of`[True Positives/(True Positives + False Negatives)]`For classification problems that are skewed in their classification distributions like in our case, for example if we had a 100 text messages and only 2 were spam and the rest 98 weren't, accuracy by itself is not a very good metric. We could classify 90 messages as not spam(including the 2 that were spam but we classify them as not spam, hence they would be false negatives) and 10 as spam(all 10 false positives) and still get a reasonably good accuracy score. 
For such cases, precision and recall come in very handy. These two metrics can be combined to get the F1 score, which is the weighted average (harmonic mean) of the precision and recall scores. This score can range from 0 to 1, with 1 being the best possible F1 score (we take the harmonic mean as we are dealing with ratios). Question 1 - Naive Predictor Performance* If we chose a model that always predicted an individual made more than $50,000, what would that model's accuracy and F-score be on this dataset? You must use the code cell below and assign your results to `'accuracy'` and `'fscore'` to be used later.**Please note** that the purpose of generating a naive predictor is simply to show what a base model without any intelligence would look like. In the real world, ideally your base model would be either the results of a previous model or could be based on a research paper upon which you are looking to improve. When there is no benchmark model set, getting a result better than random choice is a place you could start from.**HINT:** * When we have a model that always predicts '1' (i.e. the individual makes more than 50k) then our model will have no True Negatives(TN) or False Negatives(FN) as we are not making any negative('0' value) predictions. Therefore our Accuracy in this case becomes the same as our Precision(True Positives/(True Positives + False Positives)) as every prediction that we have made with value '1' that should have '0' becomes a False Positive; therefore our denominator in this case is the total number of records we have in total. * Our Recall score(True Positives/(True Positives + False Negatives)) in this setting becomes 1 as we have no False Negatives.
###Code
TP = np.sum(income) # Counting the ones as this is the naive case.
# Note that 'income' is the 'income_raw' data encoded to numerical values done in the data preprocessing step.
FP = income.count() - TP # Specific to the naive case
TN = 0 # No predicted negatives in the naive case
FN = 0 # No predicted negatives in the naive case
# TODO: Calculate accuracy, precision and recall
accuracy = float(TP) / float( TP + FP )
recall = float(TP) / float( TP + FN )
precision = float(TP) / float( TP + FP )
# TODO: Calculate F-score using the formula above for beta = 0.5 and correct values for precision and recall.
beta = 0.5
fscore = ( 1 + beta**2 ) * (precision * recall ) / (( beta**2 * precision ) + recall )
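# Optional sanity check (sketch): sklearn's fbeta_score should return the same value
# for an all-positive prediction, e.g.
# from sklearn.metrics import fbeta_score
# print(fbeta_score(income, np.ones(n_records), beta=0.5))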
# Print the results
print("Naive Predictor: [Accuracy score: {:.4f}, F-score: {:.4f}]".format(accuracy, fscore))
###Output
Naive Predictor: [Accuracy score: 0.2478, F-score: 0.2917]
###Markdown
Supervised Learning Models**The following are some of the supervised learning models that are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**- Gaussian Naive Bayes (GaussianNB)- Decision Trees- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)- K-Nearest Neighbors (KNeighbors)- Stochastic Gradient Descent Classifier (SGDC)- Support Vector Machines (SVM)- Logistic Regression Question 2 - Model ApplicationList three of the supervised learning models above that are appropriate for this problem that you will test on the census data. For each model chosen- Describe one real-world application in industry where the model can be applied. - What are the strengths of the model; when does it perform well?- What are the weaknesses of the model; when does it perform poorly?- What makes this model a good candidate for the problem, given what you know about the data?**HINT:**Structure your answer in the same format as above^, with 4 parts for each of the three models you pick. Please include references with your answer. **Answer:** 1. Support Vector Machines.- **Application:** * [Clinical diagnosis of Parkinson's Disease and Essential Tremor.](https://www.sciencedirect.com/science/article/pii/S1697791217300468) * [Forecasting solar and wind energy resources.](https://www.sciencedirect.com/science/article/pii/S095965261832153X) * [Copper potential mapping in Kerman region, Iran](https://www.sciencedirect.com/science/article/pii/S1464343X16303648)- **Strengths:**[1] * SVM's can model non-linear decision boundaries, and there are many kernels to choose from. * They are also fairly robust against overfitting, especially in high-dimensional space. * It is memory efficient due to its use of a subset of training points in the decision function.- **Weaknesses:**[2] * SVM's are memory intensive. * Trickier to tune due to the importance of picking the right kernel. * Don't scale well to larger datasets. - **What makes it a good candidate for the problem:** > * Data is labelled. > * There are a lot of samples to train very well the algorithm. > * Since we are working with classification, we can predict a category. 2. Forest Random Classifier.- **Applications:** * [Classification of remote sensing images](http://www.age-geografia.es/tig/2016_Malaga/Cánovas-García.pdf) * [Calculation of epidemiological risk of dengue.](https://pdfs.semanticscholar.org/7c0a/da6809af18b6396fee637f7d02e8aee041eb.pdf) * [Estimation of movements of the interest rate of a country.](http://repositorio.uchile.cl/bitstream/handle/2250/117556/Dupouy%20Berrios%20Carlos.pdf?sequence=1&isAllowed=y) * [Detection of Alzheimer's disease](https://www.sciencedirect.com/science/article/pii/S2213158214001326) * __Predict if a person can live or die in the sinking of the Titanic taking into account the age, sex and location of their cabin__- [**Strengths:**](https://bookdown.org/content/2031/ensambladores-random-forest-parte-i.html) * There are very few assumptions and therefore the preparation of the data is minimal. * It can handle up to thousands of input variables and identify the most significant ones. Dimensionality reduction method. * One of the outputs of the model is the importance of variables. * It incorporates effective methods to estimate missing values. 
* It is possible to use it as an unsupervised method (clustering) and for detection of outliers.- [**Weaknesses:**](https://bookdown.org/content/2031/ensambladores-random-forest-parte-i.html) * Relatively high prediction time. * Loss of interpretation. * Good for classification, not so much for regression. The predictions are not continuous in nature.- **What makes it a good candidate for the problem:** > Since we require a classifier, random forest is a good option because of the precision in the training process. In addition, there are about 45000 entries, and some of them are categorical variables, so this algorithm is well suited. It is also important to mention that the algorithm performs well at avoiding **overfitting**. 3. Gradient Boosting.- **Application:** * [Product recommendation](http://openaccess.uoc.edu/webapps/o2/bitstream/10609/63685/9/plopezseTFM0617memoria.pdf) * [Comparative analysis of new business failure prediction models.](http://www.aeca1.org/xixcongresoaeca/cd/30a.pdf) * [Ranking algorithms.](https://papers.nips.cc/paper/3270-mcrank-learning-to-rank-using-multiple-classification-and-gradient-boosting.pdf)- **Strengths:** * It works very well for large datasets. * Reduces bias (underfitting) and variance (overfitting) if we have a large dataset. * Combines multiple **weak predictors** to build a strong/smart predictor.- **Weaknesses:** * It requires a long time to train. * If the dataset is very small, it could suffer from **overfitting**. - **What makes it a good candidate for the problem:** * Since there is a lot of data to train and test on, and we can build and combine multiple predictors, this makes it a very good candidate for the problem.[1], [2] : https://elitedatascience.com/machine-learning-algorithms Implementation - Creating a Training and Predicting PipelineTo properly evaluate the performance of each model you've chosen, it's important that you create a training and predicting pipeline that allows you to quickly and effectively train models using various sizes of training data and perform predictions on the testing data. Your implementation here will be used in the following section.In the code block below, you will need to implement the following: - Import `fbeta_score` and `accuracy_score` from [`sklearn.metrics`](http://scikit-learn.org/stable/modules/classes.htmlsklearn-metrics-metrics). - Fit the learner to the sampled training data and record the training time. - Perform predictions on the test data `X_test`, and also on the first 300 training points `X_train[:300]`. - Record the total prediction time. - Calculate the accuracy score for both the training subset and testing set. - Calculate the F-score for both the training subset and testing set. - Make sure that you set the `beta` parameter!
###Code
# TODO: Import two metrics from sklearn - fbeta_score and accuracy_score
from sklearn.metrics import fbeta_score, accuracy_score
def train_predict(learner, sample_size, X_train, y_train, X_test, y_test):
'''
inputs:
- learner: the learning algorithm to be trained and predicted on
- sample_size: the size of samples (number) to be drawn from training set
- X_train: features training set
- y_train: income training set
- X_test: features testing set
- y_test: income testing set
'''
results = {}
# TODO: Fit the learner to the training data using slicing with 'sample_size' using .fit(training_features[:], training_labels[:])
start = time() # Get start time
learner = learner.fit(X_train[:sample_size], y_train[:sample_size])
end = time() # Get end time
# TODO: Calculate the training time
results['train_time'] = end - start
# TODO: Get the predictions on the test set(X_test),
# then get predictions on the first 300 training samples(X_train) using .predict()
start = time() # Get start time
predictions_test = learner.predict(X_test)
predictions_train = learner.predict(X_train[:300])
end = time() # Get end time
# TODO: Calculate the total prediction time
results['pred_time'] = end - start
# TODO: Compute accuracy on the first 300 training samples which is y_train[:300]
results['acc_train'] = accuracy_score(y_train[:300], predictions_train)
# TODO: Compute accuracy on test set using accuracy_score()
results['acc_test'] = accuracy_score(y_test, predictions_test)
# TODO: Compute F-score on the the first 300 training samples using fbeta_score()
results['f_train'] = fbeta_score(y_train[:300], predictions_train, beta = 0.5)
# TODO: Compute F-score on the test set which is y_test
results['f_test'] = fbeta_score(y_test, predictions_test, beta = 0.5)
# Success
print("{} trained on {} samples.".format(learner.__class__.__name__, sample_size))
# Return the results
return results
###Output
_____no_output_____
###Markdown
Implementation: Initial Model EvaluationIn the code cell, you will need to implement the following:- Import the three supervised learning models you've discussed in the previous section.- Initialize the three models and store them in `'clf_A'`, `'clf_B'`, and `'clf_C'`. - Use a `'random_state'` for each model you use, if provided. - **Note:** Use the default settings for each model — you will tune one specific model in a later section.- Calculate the number of records equal to 1%, 10%, and 100% of the training data. - Store those values in `'samples_1'`, `'samples_10'`, and `'samples_100'` respectively.**Note:** Depending on which algorithms you chose, the following implementation may take some time to run!
###Code
# TODO: Import the three supervised learning models from sklearn
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
# TODO: Initialize the three models
clf_A = SVC(random_state=10)
clf_B = RandomForestClassifier(random_state=10)
clf_C = GradientBoostingClassifier(random_state=10)
# TODO: Calculate the number of samples for 1%, 10%, and 100% of the training data
# HINT: samples_100 is the entire training set i.e. len(y_train)
# HINT: samples_10 is 10% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
# HINT: samples_1 is 1% of samples_100 (ensure to set the count of the values to be `int` and not `float`)
samples_100 = len(y_train)
samples_10 = int(len(y_train)/10)
samples_1 = int(len( y_train)/100)
# Collect results on the learners
results = {}
for clf in [clf_A, clf_B, clf_C]:
clf_name = clf.__class__.__name__
results[clf_name] = {}
for i, samples in enumerate([samples_1, samples_10, samples_100]):
results[clf_name][i] = \
train_predict(clf, samples, X_train, y_train, X_test, y_test)
# Run metrics visualization for the three supervised learning models chosen
vs.evaluate(results, accuracy, fscore)
###Output
_____no_output_____
###Markdown
---- Improving ResultsIn this final section, you will choose from the three supervised learning models the *best* model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F-score. Question 3 - Choosing the Best Model* Based on the evaluation you performed earlier, in one to two paragraphs, explain to *CharityML* which of the three models you believe to be most appropriate for the task of identifying individuals that make more than \$50,000. ** HINT: ** Look at the graph at the bottom left from the cell above(the visualization created by `vs.evaluate(results, accuracy, fscore)`) and check the F score for the testing set when 100% of the training set is used. Which model has the highest score? Your answer should include discussion of the:* metrics - F score on the testing when 100% of the training data is used, * prediction/training time* the algorithm's suitability for the data. **Answer: ** Question 4 - Describing the Model in Layman's Terms* In one to two paragraphs, explain to *CharityML*, in layman's terms, how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical jargon, such as describing equations.** HINT: **When explaining your model, if using external resources please include all citations. **Answer: ** Implementation: Model TuningFine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).- Initialize the classifier you've chosen and store it in `clf`. - Set a `random_state` if one is available to the same state you set before.- Create a dictionary of parameters you wish to tune for the chosen model. - Example: `parameters = {'parameter' : [list of values]}`. - **Note:** Avoid tuning the `max_features` parameter of your learner if that parameter is available!- Use `make_scorer` to create an `fbeta_score` scoring object (with $\beta = 0.5$).- Perform grid search on the classifier `clf` using the `'scorer'`, and store it in `grid_obj`.- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_fit`.**Note:** Depending on the algorithm chosen and the parameter list, the following implementation may take some time to run!
###Code
# TODO: Import 'GridSearchCV', 'make_scorer', and any other necessary libraries
# TODO: Initialize the classifier
clf = None
# TODO: Create the parameters list you wish to tune, using a dictionary if needed.
# HINT: parameters = {'parameter_1': [value1, value2], 'parameter_2': [value1, value2]}
parameters = None
# TODO: Make an fbeta_score scoring object using make_scorer()
scorer = None
# TODO: Perform grid search on the classifier using 'scorer' as the scoring method using GridSearchCV()
grid_obj = None
# TODO: Fit the grid search object to the training data and find the optimal parameters using fit()
grid_fit = None
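# --- Illustrative sketch of one way to fill in the TODOs above; the estimator choice
# and the parameter grid below are assumptions, not the only valid answer ---
from sklearn.model_selection import GridSearchCV # on older scikit-learn: sklearn.grid_search
from sklearn.metrics import make_scorer
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier(random_state=10)
parameters = {'n_estimators': [50, 100, 200], 'learning_rate': [0.05, 0.1, 0.2], 'max_depth': [3, 4, 5]}
scorer = make_scorer(fbeta_score, beta=0.5)
grid_obj = GridSearchCV(clf, parameters, scoring=scorer)
grid_fit = grid_obj.fit(X_train, y_train)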
# Get the estimator
best_clf = grid_fit.best_estimator_
# Make predictions using the unoptimized and optimized models
predictions = (clf.fit(X_train, y_train)).predict(X_test)
best_predictions = best_clf.predict(X_test)
# Report the before-and-after scores
print("Unoptimized model\n------")
print("Accuracy score on testing data: {:.4f}".format(accuracy_score(y_test, predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, predictions, beta = 0.5)))
print("\nOptimized Model\n------")
print("Final accuracy score on the testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("Final F-score on the testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
###Output
_____no_output_____
###Markdown
Question 5 - Final Model Evaluation* What is your optimized model's accuracy and F-score on the testing data? * Are these scores better or worse than the unoptimized model? * How do the results from your optimized model compare to the naive predictor benchmarks you found earlier in **Question 1**?_ **Note:** Fill in the table below with your results, and then provide discussion in the **Answer** box. Results:| Metric | Unoptimized Model | Optimized Model || :------------: | :---------------: | :-------------: | | Accuracy Score | | || F-score | | EXAMPLE | **Answer: ** ---- Feature ImportanceAn important task when performing supervised learning on a dataset like the census data we study here is determining which features provide the most predictive power. By focusing on the relationship between only a few crucial features and the target label we simplify our understanding of the phenomenon, which is most always a useful thing to do. In the case of this project, that means we wish to identify a small number of features that most strongly predict whether an individual makes at most or more than \$50,000.Choose a scikit-learn classifier (e.g., adaboost, random forests) that has a `feature_importance_` attribute, which is a function that ranks the importance of features according to the chosen classifier. In the next python cell fit this classifier to training set and use this attribute to determine the top 5 most important features for the census dataset. Question 6 - Feature Relevance ObservationWhen **Exploring the Data**, it was shown there are thirteen available features for each individual on record in the census data. Of these thirteen records, which five features do you believe to be most important for prediction, and in what order would you rank them and why? **Answer:** Implementation - Extracting Feature ImportanceChoose a `scikit-learn` supervised learning algorithm that has a `feature_importance_` attribute availble for it. This attribute is a function that ranks the importance of each feature when making predictions based on the chosen algorithm.In the code cell below, you will need to implement the following: - Import a supervised learning model from sklearn if it is different from the three used earlier. - Train the supervised model on the entire training set. - Extract the feature importances using `'.feature_importances_'`.
###Code
# TODO: Import a supervised learning model that has 'feature_importances_'
# TODO: Train the supervised model on the training set using .fit(X_train, y_train)
model = None
# TODO: Extract the feature importances using .feature_importances_
importances = None
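# --- Illustrative sketch (an assumption, not the author's final answer): AdaBoost is
# one learner with a feature_importances_ attribute, different from the three used above ---
from sklearn.ensemble import AdaBoostClassifier
model = AdaBoostClassifier(random_state=10).fit(X_train, y_train)
importances = model.feature_importances_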
# Plot
vs.feature_plot(importances, X_train, y_train)
###Output
_____no_output_____
###Markdown
Question 7 - Extracting Feature ImportanceObserve the visualization created above which displays the five most relevant features for predicting if an individual makes at most or above \$50,000. * How do these five features compare to the five features you discussed in **Question 6**?* If you were close to the same answer, how does this visualization confirm your thoughts? * If you were not close, why do you think these features are more relevant? **Answer:** Feature SelectionHow does a model perform if we only use a subset of all the available features in the data? With less features required to train, the expectation is that training and prediction time is much lower — at the cost of performance metrics. From the visualization above, we see that the top five most important features contribute more than half of the importance of **all** features present in the data. This hints that we can attempt to *reduce the feature space* and simplify the information required for the model to learn. The code cell below will use the same optimized model you found earlier, and train it on the same training set *with only the top five important features*.
###Code
# Import functionality for cloning a model
from sklearn.base import clone
# Reduce the feature space
X_train_reduced = X_train[X_train.columns.values[(np.argsort(importances)[::-1])[:5]]]
X_test_reduced = X_test[X_test.columns.values[(np.argsort(importances)[::-1])[:5]]]
# Train on the "best" model found from grid search earlier
clf = (clone(best_clf)).fit(X_train_reduced, y_train)
# Make new predictions
reduced_predictions = clf.predict(X_test_reduced)
# Report scores from the final model using both versions of data
print("Final Model trained on full data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, best_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, best_predictions, beta = 0.5)))
print("\nFinal Model trained on reduced data\n------")
print("Accuracy on testing data: {:.4f}".format(accuracy_score(y_test, reduced_predictions)))
print("F-score on testing data: {:.4f}".format(fbeta_score(y_test, reduced_predictions, beta = 0.5)))
###Output
_____no_output_____
###Markdown
Question 8 - Effects of Feature Selection* How does the final model's F-score and accuracy score on the reduced data using only five features compare to those same scores when all features are used?* If training time was a factor, would you consider using the reduced data as your training set? **Answer:** > **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to **File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission. Before You SubmitYou will also need to run the following in order to convert the Jupyter notebook into HTML, so that your submission will include both files.
###Code
!!jupyter nbconvert *.ipynb
###Output
_____no_output_____
monetary-economics/Python 3 Chapter 4 Model PC.ipynb
Monetary Economics: Chapter 4 Preliminaries
###Code
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
import matplotlib.pyplot as plt
from pysolve.model import Model
from pysolve.utils import is_close,round_solution
###Output
_____no_output_____
###Markdown
Model PC
###Code
def create_pc_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by the government')
model.var('C', desc='Consumption goods')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('R', desc='Interest rate on government bills')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Y', desc='Income = GDP')
model.var('YD', desc='Disposable income of households')
model.param('alpha1', desc='Propensity to consume out of income', default=0.6)
model.param('alpha2', desc='Propensity to consume out of wealth', default=0.4)
model.param('lambda0', desc='Parameter in asset demand function', default=0.635)
model.param('lambda1', desc='Parameter in asset demand function', default=5.0)
model.param('lambda2', desc='Parameter in asset demand function', default=0.01)
model.param('theta', desc='Tax rate', default=0.2)
model.param('G', desc='Government goods', default=20.)
model.param('Rbar', desc='Interest rate as policy instrument')
model.add('Y = C + G') # 4.1
model.add('YD = Y - T + R(-1)*Bh(-1)') # 4.2
model.add('T = theta*(Y + R(-1)*Bh(-1))') #4.3, theta < 1
model.add('V = V(-1) + (YD - C)') # 4.4
model.add('C = alpha1*YD + alpha2*V(-1)') # 4.5, 0<alpha2<alpha1<1
model.add('Hh = V - Bh') # 4.6
model.add('Bh = V*lambda0 + V*lambda1*R - lambda2*YD') # 4.7
model.add('Bs - Bs(-1) = (G + R(-1)*Bs(-1)) - (T + R(-1)*Bcb(-1))') # 4.8
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') # 4.9
model.add('Bcb = Bs - Bh') # 4.10
model.add('R = Rbar') # 4.11
return model
steady = create_pc_model()
steady.set_values({'alpha1': 0.6,
'alpha2': 0.4,
'lambda0': 0.635,
'lambda1': 5.0,
'lambda2': 0.01,
'G': 20,
'Rbar': 0.025})
for _ in range(100):
steady.solve(iterations=100, threshold=1e-5)
if is_close(steady.solutions[-2], steady.solutions[-1], atol=1e-4):
break
###Output
_____no_output_____
###Markdown
Model PCEX
###Code
def create_pcex_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by the government')
model.var('C', desc='Consumption goods')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('R', desc='Interest rate on government bills')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YD', desc='Disposable income of households')
model.var('YDe', desc='Expected disposable income of households')
model.set_param_default(0)
model.param('alpha1', desc='Propensity to consume out of income', default=0.6)
model.param('alpha2', desc='Propensity to consume out of wealth', default=0.4)
model.param('lambda0', desc='Parameter in asset demand function', default=0.635)
model.param('lambda1', desc='Parameter in asset demand function', default=5.0)
model.param('lambda2', desc='Parameter in asset demand function', default=0.01)
model.param('theta', desc='Tax rate', default=0.2)
model.param('G', desc='Government goods', default=20.)
model.param('Ra', desc='Random shock to expectations', default=0.0)
model.param('Rbar', desc='Interest rate as policy instrument', default=0.025)
model.add('Y = C + G') # 4.1
model.add('YD = Y - T + R(-1)*Bh(-1)') # 4.2
model.add('T = theta*(Y + R(-1)*Bh(-1))') #4.3, theta < 1
model.add('V = V(-1) + (YD - C)') # 4.4
model.add('C = alpha1*YDe + alpha2*V(-1)') # 4.5E
model.add('Bd = Ve*lambda0 + Ve*lambda1*R - lambda2*YDe') # 4.7E
model.add('Hd = Ve - Bd') # 4.13
model.add('Ve = V(-1) + (YDe - C)') # 4.14
model.add('Hh = V - Bh') # 4.6
model.add('Bh = Bd') # 4.15
model.add('Bs - Bs(-1) = (G + R(-1)*Bs(-1)) - (T + R(-1)*Bcb(-1))') # 4.8
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') # 4.9
model.add('Bcb = Bs - Bh') # 4.10
model.add('R = Rbar') # 4.11
model.add('YDe = YD * (1 + Ra)') # 4.16
return model
###Output
_____no_output_____
###Markdown
Steady state and shocks
###Code
pcex_steady = create_pcex_model()
pcex_steady.set_values([('alpha1', 0.6),
('alpha2', 0.4),
('lambda0', 0.635),
('lambda1', 5.0),
('lambda2', 0.01),
('theta', 0.2),
('G', 20),
('Rbar', 0.025),
('Ra', 0),
('Bcb', 116.36),
('Bh', 363.59),
('Bs', 'Bh + Bcb'),
('Hh', 116.35),
('Hs', 'Hh'),
('V', 'Bh + Hh'),
('R', 'Rbar')])
for _ in range(100):
pcex_steady.solve(iterations=100, threshold=1e-5)
if is_close(pcex_steady.solutions[-2], pcex_steady.solutions[-1], atol=1e-4):
break
import random
random.seed(6)
shocks = create_pcex_model()
shocks.set_values(pcex_steady.solutions[-1], ignore_errors=True)
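# each period: draw a new random shock to expectations (Ra in eq. 4.16) and re-solve the model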
for _ in range(50):
shocks.parameters['Ra'].value = random.gauss(0,1) / 10.
shocks.solve(iterations=100, threshold=1e-3)
###Output
_____no_output_____
###Markdown
Figure 4.1
###Code
caption = '''
Figure 4.1 Money demand and held money balances, when the economy is subjected
to random shocks.'''
hddata = [s['Hd'] for s in shocks.solutions[25:]]
hhdata = [s['Hh'] for s in shocks.solutions[25:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(min(hddata+hhdata)-2, max(hddata+hhdata)+2)
axes.plot(hhdata, 'b')
axes.plot(hddata, linestyle='--', color='r')
# add labels
plt.text(13, 35, 'Held money balances')
plt.text(13, 34, '(continuous line)')
plt.text(16, 12, 'Money demand')
plt.text(16, 11, '(dotted line)')
fig.text(0.1, -.05, caption);
###Output
_____no_output_____
###Markdown
Figure 4.2
###Code
caption = '''
Figure 4.2 Changes in money demand and in money balances held (first differences),
when the economy is subjected to random shocks. '''
hddata = [s['Hd'] for s in shocks.solutions[24:]]
hhdata = [s['Hh'] for s in shocks.solutions[24:]]
for i in range(len(hddata)-1, 0, -1):
hddata[i] -= hddata[i-1]
hhdata[i] -= hhdata[i-1]
hddata = hddata[1:]
hhdata = hhdata[1:]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(min(hddata+hhdata)-2, max(hddata+hhdata)+2)
axes.plot(hhdata, 'b')
axes.plot(hddata, linestyle='--', color='r')
# add labels
plt.text(13, 20, 'Held money balances')
plt.text(13, 18, '(continuous line)')
plt.text(15, -18, 'Money demand')
plt.text(15, -20, '(dotted line)')
fig.text(0.1, -.05, caption);
###Output
_____no_output_____
###Markdown
Scenario: Model PC, Steady state with increase in interest rate
###Code
rate_shock = create_pc_model()
rate_shock.set_values({'Bcb': 21.576,
'Bh': 64.865,
'Bs': 86.441,
'Hh': 21.62,
'Hs': 21.62,
'V': 86.485,
'alpha1': 0.6,
'alpha2': 0.4,
'lambda0': 0.635,
'lambda1': 5.0,
'lambda2': 0.01,
'G': 20,
'Rbar': 0.025})
# solve until stable
for i in range(50):
rate_shock.solve(iterations=100, threshold=1e-5)
if is_close(rate_shock.solutions[-2], rate_shock.solutions[-1], atol=1e-4):
break
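# policy shock: raise the bill rate by 100 basis points (0.025 -> 0.035) and let the model adjust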
rate_shock.parameters['Rbar'].value = 0.035
for i in range(40):
rate_shock.solve(iterations=100, threshold=1e-5)
###Output
_____no_output_____
###Markdown
Figure 4.3
###Code
caption = '''
Figure 4.3 Evolution of the shares of bills and money balances in the portfolio of
households, following an increase of 100 points in the rate of interest on bills.'''
hhdata = [s['Hh']/s['V'] for s in rate_shock.solutions[15:]]
bhdata = [s['Bh']/s['V'] for s in rate_shock.solutions[15:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False)
axes.spines['top'].set_visible(False)
axes.set_ylim(0.19, 0.26)
axes.plot(hhdata, 'b')
axes2 = axes.twinx()
axes2.tick_params(top=False)
axes2.spines['top'].set_visible(False)
axes2.set_ylim(0.74, 0.81)
axes2.plot(bhdata, linestyle='--', color='r')
plt.text(1, 0.81, 'Share of')
plt.text(1, 0.807, 'money balances')
plt.text(45, 0.81, 'Share of')
plt.text(45, 0.807, 'bills')
plt.text(15, 0.795, 'Share of bills in')
plt.text(15, 0.792, 'household portfolios')
plt.text(15, 0.755, 'Share of money balances')
plt.text(15, 0.752, 'in household portfolios')
fig.text(0.1, -.05, caption);
###Output
_____no_output_____
###Markdown
Figure 4.4
###Code
caption = '''
Figure 4.4 Evolution of disposable income and household consumption following an
increase of 100 points in the rate of interest on bills. '''
yddata = [s['YD'] for s in rate_shock.solutions[20:]]
cdata = [s['C'] for s in rate_shock.solutions[20:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(86, 91)
axes.plot(yddata, 'b')
axes.plot(cdata, linestyle='--', color='r')
# add labels
plt.text(10, 90.2, 'Disposable')
plt.text(10, 90.0, 'Income')
plt.text(10, 88, 'Consumption')
fig.text(0.1, -0.05, caption);
###Output
_____no_output_____
###Markdown
Model PCEX1
###Code
def create_pcex1_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by the government')
model.var('C', desc='Consumption goods')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('R', 'Interest rate on government bills')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YD', desc='Disposable income of households')
model.var('YDe', desc='Expected disposable income of households')
model.set_param_default(0)
model.param('alpha1', desc='Propensity to consume out of income', default=0.6)
    model.param('alpha2', desc='Propensity to consume out of wealth', default=0.4)
model.param('lambda0', desc='Parameter in asset demand function', default=0.635)
model.param('lambda1', desc='Parameter in asset demand function', default=5.0)
model.param('lambda2', desc='Parameter in asset demand function', default=0.01)
model.param('theta', desc='Tax rate', default=0.2)
model.param('G', desc='Government goods', default=20.)
model.param('Rbar', desc='Interest rate as policy instrument', default=0.025)
model.add('Y = C + G') # 4.1
model.add('YD = Y - T + R(-1)*Bh(-1)') # 4.2
model.add('T = theta*(Y + R(-1)*Bh(-1))') #4.3, theta < 1
model.add('V = V(-1) + (YD - C)') # 4.4
model.add('C = alpha1*YDe + alpha2*V(-1)') # 4.5E
model.add('Bd = Ve*lambda0 + Ve*lambda1*R - lambda2*YDe') # 4.7E
model.add('Hd = Ve - Bd') # 4.13
model.add('Ve = V(-1) + (YDe - C)') # 4.14
model.add('Hh = V - Bh') # 4.6
model.add('Bh = Bd') # 4.15
model.add('Bs - Bs(-1) = (G + R(-1)*Bs(-1)) - (T + R(-1)*Bcb(-1))') # 4.8
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') # 4.9
model.add('Bcb = Bs - Bh') # 4.10
model.add('R = Rbar') # 4.11
model.add('YDe = YD(-1)') # 4.16A
return model
pcex1 = create_pcex1_model()
pcex1.set_values({'Bcb': 21.576,
'Bh': 64.865,
'Bs': 86.441,
'Hh': 21.62,
'Hs': 21.62,
'V': 86.485,
'YD': 90,
'alpha1': 0.6,
'alpha2': 0.4,
'lambda0': 0.635,
'lambda1': 5.0,
'lambda2': 0.01,
'G': 20,
'Rbar': 0.025})
for i in range(10):
pcex1.solve(iterations=100, threshold=1e-5)
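# Shock: raise the propensity to consume out of expected disposable income (alpha1) from 0.6 to 0.7.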
pcex1.parameters['alpha1'].value = 0.7
for i in range(40):
pcex1.solve(iterations=100, threshold=1e-5)
###Output
_____no_output_____
###Markdown
Figure 4.5
###Code
caption = '''
Figure 4.5 Rise and fall of national income (GDP) following an increase in the
propensity to consume out of expected disposable income ($\\alpha_1$) '''
ydata = [s['Y'] for s in pcex1.solutions[8:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(104, 123)
axes.plot(ydata, 'b')
# add labels
plt.text(10, 116, 'National Income (GDP)')
fig.text(0.1, -0.05, caption);
###Output
_____no_output_____
###Markdown
Figure 4.6
###Code
caption = '''
    Figure 4.6 Evolution of consumption, expected disposable income and lagged wealth,
following an increase in the propensity to consume out of expected disposable
income ($\\alpha_1$).'''
vdata = [s['V'] for s in pcex1.solutions[8:]]
ydedata = [s['YDe'] for s in pcex1.solutions[8:]]
cdata = [s['C'] for s in pcex1.solutions[8:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(60, 106)
axes.plot(cdata, linestyle=':', color='r')
axes.plot(ydedata, linestyle='--', color='b')
axes.plot(vdata, color='k')
# add labels
plt.text(5, 102, 'Consumption')
plt.text(5, 90, 'Expected')
plt.text(5, 88, 'disposable')
plt.text(5, 86, 'income')
plt.text(10, 70, 'Lagged wealth')
fig.text(0.1, -.1, caption);
###Output
_____no_output_____
###Markdown
Model PCEX2
###Code
def create_pcex2_model():
model = Model()
model.set_var_default(0)
model.var('Bcb', desc='Government bills held by the Central Bank')
model.var('Bd', desc='Demand for government bills')
model.var('Bh', desc='Government bills held by households')
model.var('Bs', desc='Government bills supplied by the government')
model.var('C', desc='Consumption goods')
model.var('Hd', desc='Demand for cash')
model.var('Hh', desc='Cash held by households')
model.var('Hs', desc='Cash supplied by the central bank')
model.var('R', 'Interest rate on government bills')
model.var('T', desc='Taxes')
model.var('V', desc='Household wealth')
model.var('Ve', desc='Expected household wealth')
model.var('Y', desc='Income = GDP')
model.var('YD', desc='Disposable income of households')
model.var('YDe', desc='Expected disposable income of households')
model.var('alpha1', desc='Propensity to consume out of income')
model.set_param_default(0)
model.param('alpha2', desc='Propensity to consume out of wealth', default=0.6)
model.param('alpha10', desc='Propensity to consume out of income - exogenous')
model.param('iota', desc='Impact of interest rate on the propensity to consume out of income')
model.param('lambda0', desc='Parameter in asset demand function', default=0.635)
model.param('lambda1', desc='Parameter in asset demand function', default=5.0)
model.param('lambda2', desc='Parameter in asset demand function', default=0.01)
model.param('theta', desc='Tax rate', default=0.2)
model.param('G', desc='Government goods')
model.param('Rbar', desc='Interest rate as policy instrument')
model.add('Y = C + G') # 4.1
model.add('YD = Y - T + R(-1)*Bh(-1)') # 4.2
model.add('T = theta*(Y + R(-1)*Bh(-1))') #4.3, theta < 1
model.add('V = V(-1) + (YD - C)') # 4.4
model.add('C = alpha1*YDe + alpha2*V(-1)') # 4.5E
model.add('Bd = Ve*lambda0 + Ve*lambda1*R - lambda2*YDe') # 4.7E
model.add('Hd = Ve - Bd') # 4.13
model.add('Ve = V(-1) + (YDe - C)') # 4.14
model.add('Hh = V - Bh') # 4.6
model.add('Bh = Bd') # 4.15
model.add('Bs - Bs(-1) = (G + R(-1)*Bs(-1)) - (T + R(-1)*Bcb(-1))') # 4.8
model.add('Hs - Hs(-1) = Bcb - Bcb(-1)') # 4.9
model.add('Bcb = Bs - Bh') # 4.10
model.add('R = Rbar') # 4.11
model.add('YDe = YD(-1)') # 4.16A
model.add('alpha1 = alpha10 - iota*R(-1)')
return model
pcex2 = create_pcex2_model()
pcex2.set_values({'Bcb': 21.576,
'Bh': 64.865,
'Bs': 86.441, # Bs = Bh + Bcb
'Hh': 21.62,
'Hs': 21.62, # Hs = Hh
'R': 0.025,
'V': 86.485, # V = Bh + Hh
'YD': 90,
'alpha1': 0.6,
'alpha2': 0.4,
'alpha10': 0.7,
'iota': 4,
'lambda0': 0.635,
'lambda1': 5,
'lambda2': 0.01,
'theta': 0.2,
'G': 20,
'Rbar': 0.025})
for i in range(15):
pcex2.solve(iterations=100, threshold=1e-5)
# Introduce the rate shock
pcex2.parameters['Rbar'].value += 0.01
for i in range(40):
pcex2.solve(iterations=100, threshold=1e-5)
###Output
_____no_output_____
###Markdown
Figure 4.9
###Code
caption = '''
    Figure 4.9 Evolution of GDP, disposable income, consumption and wealth,
following an increase of 100 points in the rate of interest on bills, in Model PCEX2
where the propensity to consume reacts negatively to higher interest rates'''
vdata = [s['V'] for s in pcex2.solutions[12:]]
ydata = [s['Y'] for s in pcex2.solutions[12:]]
yddata = [s['YD'] for s in pcex2.solutions[12:]]
cdata = [s['C'] for s in pcex2.solutions[12:]]
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(80, 116)
axes.plot(ydata, linestyle=':', color='b')
axes.plot(vdata, linestyle='-', color='r')
axes.plot(yddata, linestyle='-.', color='k')
axes.plot(cdata, linestyle='--', color='g')
# add labels
plt.text(15, 112, 'National income (GDP)')
plt.text(15, 101, 'Household wealth')
plt.text(8, 89, 'Disposable')
plt.text(8, 87.5, 'income')
plt.text(12, 84, 'Consumption')
fig.text(0.1, -0.1, caption);
###Output
_____no_output_____
###Markdown
Figure 4.10
###Code
caption = '''
Figure 4.10 Evolution of tax revenues and government expenditures including net
debt servicing, following an increase of 100 points in the rate of interest on bills,
in Model PCEX2 where the propensity to consume reacts negatively to higher
interest rates'''
tdata = list()
sumdata = list()
for i in range(12, len(pcex2.solutions)):
s = pcex2.solutions[i]
s_1 = pcex2.solutions[i-1]
sumdata.append( s['G'] + s_1['R']*s_1['Bh'])
tdata.append(s['T'])
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1.1, 1.1])
axes.tick_params(top=False, right=False)
axes.spines['top'].set_visible(False)
axes.spines['right'].set_visible(False)
axes.set_ylim(20.5, 23)
axes.plot(sumdata, linestyle='-', color='r')
axes.plot(tdata, linestyle='--', color='k')
# add labels
plt.text(6, 22.9, 'Government expenditures plus net debt service')
plt.text(15, 22, 'Tax revenues')
fig.text(0.1, -0.15, caption);
###Output
_____no_output_____ |
Text_similarity_with_Bart.ipynb | ###Markdown
**Finding unique messages from a list of feedbacks**

Our 9 Voice Analysts (VAs) leave a remark/comment/feedback on most of the consumer contacts they listen to and analyse. They use different phrases to state similar sentiments. In this notebook we will explore the categories of comments the VAs are leaving, find their frequencies, and try to make sense of them.
###Code
import pandas as pd
import numpy as np
va_data = pd.read_csv('drive/MyDrive/va-analysis.csv')
va_data.head()
###Output
_____no_output_____
###Markdown
**Data description**

We have all the VA-checked data from 05 May 2021 to 20 May 2021. This data contains the following columns.
###Code
for i in va_data.columns:
print(i)
###Output
contact_id
VA_ID
VA_Name
VASUP_ID
VASUP_Name
contact_no
region
area
distribution_house
territory
distribution_point
brcode
brtype
brname
supcode
supname
campaign_name
contact_date
1. Did the RA mention 140 years of Havana's Art Culture & Passion experiences?
2. Did the RA mention John Player Havana journey experience?
3. Did the RA mention that the JPGL Havana Edition is made using Premium Tobacco & has Authentic taste?
4. Did the RA mention that the same premium & flavorful tobacco is also used in cigars & pipes?
5. Did the RA mention the dark texture in the JPGL Havana Edition cigarette sticks?
6. Did the RA mention that this is a Limited time pack?
7. Did the RA adhere to the script & prescribed contact flow/sequence?
8. Did the RA show the Interactive AV to the Consumer?
9. Did the RA ask the consumer to complete the Survey?
10. Did the RA avoid local tone & speak clearly?
Total Score
VA_Remarks
speech_url
###Markdown
Let's take the scoped columns for this analysis.
###Code
va_data = va_data[['region', 'area', 'distribution_house', 'territory', 'distribution_point', 'brtype', 'VA_Remarks','Total Score']]
va_data.head()
###Output
_____no_output_____
###Markdown
**Note:** Let's quickly describe some statistics for the Total Score column. This column represents the final score given by the VAs to the checked contact.
###Code
va_data['Total Score'].describe()
###Output
_____no_output_____
###Markdown
We want to focus on the **VA_Remarks** column as this contains the remarks of the VAs. This column has some null values. Let's count how many remarks we have and how many are empty; we will drop the empty rows for this experiment.
###Code
print('Remarks count', va_data['VA_Remarks'].count())
print('Empty', va_data['VA_Remarks'].isna().sum() )
va_data = va_data.dropna()
###Output
Remarks count 1976
Empty 913
###Markdown
Find similar categories

Now we will measure sentence-meaning similarities and try to build clusters.
###Code
va_data['VA_Remarks'] = va_data['VA_Remarks'].str.lower()
# keep the raw remarks as a list of strings; they are tokenised below after stop-word removal
data = va_data['VA_Remarks'].values.tolist()
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
nltk.download('stopwords')
all_stopwords = stopwords.words('english')
all_stopwords.remove('no')
all_stopwords.remove('not')
sentences = []
for d in data:
tokens_without_sw = [word for word in d.split() if not word in all_stopwords]
sentences.append(tokens_without_sw)
print(sentences[:5])
from sklearn import cluster
from sklearn.metrics import silhouette_samples, silhouette_score
from nltk.cluster import KMeansClusterer
from gensim.models import Word2Vec
import matplotlib.pyplot as plt
import matplotlib.cm as cm
model = Word2Vec(sentences, min_count=1)
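# Each remark is represented below by the average of its words' Word2Vec vectors (see sent_vectorizer).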
def sent_vectorizer(sent, model):
sent_vec =[]
numw = 0
for w in sent:
try:
            if numw == 0:
                sent_vec = model.wv[w]
            else:
                sent_vec = np.add(sent_vec, model.wv[w])
numw+=1
except:
pass
return np.asarray(sent_vec) / numw
X=[]
for sentence in sentences:
X.append(sent_vectorizer(sentence, model))
range_n_clusters = [2, 3, 4, 5, 6]
NUM_CLUSTERS=5
for n_clusters in range_n_clusters:
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_size_inches(18, 7)
# The 1st subplot is the silhouette plot
# The silhouette coefficient can range from -1, 1 but in this example all
# lie within [-0.1, 1]
ax1.set_xlim([-0.1, 1])
# The (n_clusters+1)*10 is for inserting blank space between silhouette
# plots of individual clusters, to demarcate them clearly.
ax1.set_ylim([0, len(X) + (n_clusters + 1) * 10])
kclusterer = KMeansClusterer(n_clusters, distance=nltk.cluster.util.cosine_distance, avoid_empty_clusters=True, repeats=30)
assigned_clusters = kclusterer.cluster(X, assign_clusters=True)
for index, sentence in enumerate(sentences):
print (str(assigned_clusters[index]) + ":" + str(sentence))
kmeans = cluster.KMeans(n_clusters=n_clusters)
cluster_labels = kmeans.fit_predict(X)
# cluster_labels = kmeans.labels_
# centroids = kmeans.cluster_centers_
print ("Cluster id labels for inputted data")
    print(cluster_labels)
silhouette_avg = silhouette_score(X, cluster_labels)
print("For n_clusters =", n_clusters,
"The average silhouette_score is :", silhouette_avg)
# Compute the silhouette scores for each sample
sample_silhouette_values = silhouette_samples(X, cluster_labels)
y_lower = 10
for i in range(n_clusters):
# Aggregate the silhouette scores for samples belonging to
# cluster i, and sort them
ith_cluster_silhouette_values = \
sample_silhouette_values[cluster_labels == i]
ith_cluster_silhouette_values.sort()
size_cluster_i = ith_cluster_silhouette_values.shape[0]
y_upper = y_lower + size_cluster_i
color = cm.nipy_spectral(float(i) / n_clusters)
ax1.fill_betweenx(np.arange(y_lower, y_upper),
0, ith_cluster_silhouette_values,
facecolor=color, edgecolor=color, alpha=0.7)
# Label the silhouette plots with their cluster numbers at the middle
ax1.text(-0.05, y_lower + 0.5 * size_cluster_i, str(i))
# Compute the new y_lower for next plot
y_lower = y_upper + 10 # 10 for the 0 samples
ax1.set_title("The silhouette plot for the various clusters.")
ax1.set_xlabel("The silhouette coefficient values")
ax1.set_ylabel("Cluster label")
# The vertical line for average silhouette score of all the values
ax1.axvline(x=silhouette_avg, color="red", linestyle="--")
ax1.set_yticks([]) # Clear the yaxis labels / ticks
ax1.set_xticks([-0.1, 0, 0.2, 0.4, 0.6, 0.8, 1])
plt.suptitle(("Silhouette analysis for KMeans clustering on sample data "
"with n_clusters = %d" % n_clusters),
fontsize=14, fontweight='bold')
plt.show()
for i in range(len(cluster_labels)):
if cluster_labels[i] == 2:
print(va_data['VA_Remarks'].iloc[i])
###Output
_____no_output_____ |
cryptolytic/notebooks/trade_recommender_models.ipynb | ###Markdown
Requirements

- pandas==0.25.1
- ta==0.4.7
- scikit-learn==21.3

Background on Trade Recommender Models

Trade recommender models were created with the goal of predicting whether the price of a cryptocurrency will go up or down in the next time period (the period is determined by the specific model). If the time period for the model was 6hrs, and if the model predicted that the price will go up, that would mean that if you bought that cryptocurrency 6 hours after the prediction time (this time comes from the data point that the model is predicting off of), the price of the crypto should have gone up after 6 hours from the time that you bought it.

100s of iterations of models were generated in this notebook and the best ones were selected for each exchange/trading pair based on which iteration returned the highest net profit. When training the random forest classifier models, performance was highly varied with different periods and parameters, so there was no one-size-fits-all model, and that resulted in the models having unique periods and parameters.

The data was obtained from the respective exchanges via their API, and models were trained on 1 hour candlestick data from 2015 - Oct 2018. The test set contained data from Jan 2019 - Oct 2019, with a two month gap left between the train and test sets to prevent data leakage. The models' predictions output 0 (sell) and 1 (buy), and profit was calculated by backtesting on the 2019 test set. The profit calculation incorporated fees like in the real world and considered any consecutive "buy" prediction as a "hold" trade instead so that fees wouldn't have to be paid on those transactions. The final models were all profitable, with gains anywhere from 40% - 95% within the Jan 1, 2019 to Oct 30, 2019 time period. Visualizations for how these models performed given a $10K portfolio can be viewed at https://github.com/Lambda-School-Labs/cryptolytic-ds/blob/master/finalized_notebooks/visualization/tr_performance_visualization.ipynb

The separate models created for each exchange/trading pair combination were:

- Bitfinex BTC/USD
- Bitfinex ETH/USD
- Bitfinex LTC/USD
- Coinbase Pro BTC/USD
- Coinbase Pro ETH/USD
- Coinbase Pro LTC/USD
- HitBTC BTC/USD
- HitBTC ETH/USD
- HitBTC LTC/USD

Folder Structure:

    ├── trade_recommender/            <-- The top-level directory for all trade recommender work
    │   ├── trade_rec_models.ipynb    <-- Notebook for trade recommender models
    │   ├── data/                     <-- Directory for csv files of 1 hr candle data
    │   │   └── data.csv
    │   ├── pickles/                  <-- Directory for all trade rec models
    │   │   └── models.pkl
    │   └── tr_pickles/               <-- Directory for best trade rec models
    │       └── models.pkl

Get all csv filenames into a variable - 1 hr candles
###Code
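# Imports used throughout this notebook; they are not shown in the original cells, so the
# module list below is inferred from the calls that follow (glob, pandas, numpy, pickle, os,
# ta's add_all_ta_features, and scikit-learn's RandomForestClassifier / accuracy_score).
import glob
import os
import pickle
import numpy as np
import pandas as pd
from ta import add_all_ta_features
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score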
csv_filenames = glob.glob('data/*.csv') # modify to your filepath for data
print(len(csv_filenames))
csv_filenames
###Output
9
###Markdown
Functions OHLCV Data Resampling
###Code
def resample_ohlcv(df, period):
""" Changes the time period on cryptocurrency ohlcv data.
Period is a string denoted by '{time_in_minutes}T'(ex: '1T', '5T', '60T')."""
# Set date as the index. This is needed for the function to run
df = df.set_index(['date'])
# Aggregation function
ohlc_dict = {'open':'first',
'high':'max',
'low':'min',
'close': 'last',
'base_volume': 'sum'}
    # Apply resampling (the deprecated `how=` keyword is gone in recent pandas, so use .agg)
    df = df.resample(period, closed='left', label='left').agg(ohlc_dict)
return df
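# Example usage (hypothetical variable names): aggregate hourly candles into 6-hour candles
#   ohlcv_6h = resample_ohlcv(ohlcv_1h, '360T')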
###Output
_____no_output_____
###Markdown
Filling NaNs
###Code
# resample_ohlcv function will create NaNs in df where there were gaps in the data.
# The gaps could be caused by exchanges being down, errors from cryptowatch or the
# exchanges themselves
def fill_nan(df):
"""Iterates through a dataframe and fills NaNs with appropriate
open, high, low, close values."""
# Forward fill close column.
df['close'] = df['close'].ffill()
# Backward fill the open, high, low rows with the close value.
df = df.bfill(axis=1)
return df
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
def feature_engineering(df, period):
"""Takes in a dataframe of 1 hour cryptocurrency trading data
and returns a new dataframe with selected period, new technical analysis features,
and a target.
"""
# Add a datetime column to df
df['date'] = pd.to_datetime(df['closing_time'], unit='s')
# Convert df to selected period
df = resample_ohlcv(df, period)
# Add feature to indicate gaps in the data
df['nan_ohlc'] = df['close'].apply(lambda x: 1 if pd.isnull(x) else 0)
# Fill in missing values using fill function
df = fill_nan(df)
# Reset index
df = df.reset_index()
# Create additional date features
df['year'] = df['date'].dt.year
df['month'] = df['date'].dt.month
df['day'] = df['date'].dt.day
# Add technical analysis features
df = add_all_ta_features(df, "open", "high", "low", "close", "base_volume")
# Replace infinite values with NaNs
df = df.replace([np.inf, -np.inf], np.nan)
# Drop any features whose mean of missing values is greater than 20%
df = df[df.columns[df.isnull().mean() < .2]]
# Replace remaining NaN values with the mean of each respective column and reset index
df = df.apply(lambda x: x.fillna(x.mean()),axis=0)
# Create a feature for close price difference
df['close_diff'] = (df['close'] - df['close'].shift(1))/df['close'].shift(1)
# Function to create target
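    # Label True only when the close rose by more than 0.70%, roughly the round-trip fee (2 x 0.35%)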
def price_increase(x):
if (x-(.70/100)) > 0:
return True
else:
return False
# Create target
target = df['close_diff'].apply(price_increase)
# To make the prediction before it happens, put target on the next observation
target = target[1:].values
df = df[:-1]
# Create target column
df['target'] = target
# Remove first row of dataframe bc it has a null target
df = df[1:]
# Pick features
features = ['open', 'high', 'low', 'close', 'base_volume', 'nan_ohlc',
'year', 'month', 'day', 'volume_adi', 'volume_obv', 'volume_cmf',
'volume_fi', 'volume_em', 'volume_vpt', 'volume_nvi', 'volatility_atr',
'volatility_bbh', 'volatility_bbl', 'volatility_bbm', 'volatility_bbhi',
'volatility_bbli', 'volatility_kcc', 'volatility_kch', 'volatility_kcl',
'volatility_kchi', 'volatility_kcli', 'volatility_dch', 'volatility_dcl',
'volatility_dchi', 'volatility_dcli', 'trend_macd', 'trend_macd_signal',
'trend_macd_diff', 'trend_ema_fast', 'trend_ema_slow',
'trend_adx_pos', 'trend_adx_neg', 'trend_vortex_ind_pos',
'trend_vortex_ind_neg', 'trend_vortex_diff', 'trend_trix',
'trend_mass_index', 'trend_cci', 'trend_dpo', 'trend_kst',
'trend_kst_sig', 'trend_kst_diff', 'trend_ichimoku_a',
'trend_ichimoku_b', 'trend_visual_ichimoku_a', 'trend_visual_ichimoku_b',
'trend_aroon_up', 'trend_aroon_down', 'trend_aroon_ind', 'momentum_rsi',
'momentum_mfi', 'momentum_tsi', 'momentum_uo', 'momentum_stoch',
'momentum_stoch_signal', 'momentum_wr', 'momentum_ao',
'others_dr', 'others_dlr', 'others_cr', 'close_diff', 'date', 'target']
df = df[features]
return df
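# Example usage (sketch): build a 6-hour feature/target frame from one of the raw 1-hour csv files
#   df_6h = feature_engineering(pd.read_csv('data/hitbtc_eth_usdt_3600.csv', index_col=0), '360T')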
###Output
_____no_output_____
###Markdown
Profit and Loss function
###Code
def performance(X_test, y_preds):
""" Takes in a test dataset and a model's predictions, calculates and returns
the profit or loss. When the model generates consecutive buy predictions,
anything after the first one are considered a hold and fees are not added
for the hold trades. """
fee_rate = 0.35
# creates dataframe for features and predictions
df_preds = X_test
df_preds['y_preds'] = y_preds
# creates column with 0s for False predictions and 1s for True predictions
df_preds['binary_y_preds'] = df_preds['y_preds'].shift(1).apply(lambda x: 1 if x == True else 0)
# performance results from adding the closing difference percentage of the rows where trades were executed
performance = ((10000 * df_preds['binary_y_preds']*df_preds['close_diff']).sum())
# calculating fees and improve trading strategy
# creates a count list for when trades were triggered
df_preds['preds_count'] = df_preds['binary_y_preds'].cumsum()
# feature that determines the instance of whether the list increased
df_preds['increase_count'] = df_preds['preds_count'].diff(1)
# feature that creates signal of when to buy(1), hold(0), or sell(-1)
df_preds['trade_trig'] = df_preds['increase_count'].diff(1)
# number of total entries(1s)
number_of_entries = (df_preds.trade_trig.values==1).sum()
# performance takes into account fees given the rate at the beginning of this function
pct_performance = ((df_preds['binary_y_preds']*df_preds['close_diff']).sum())
# calculate the percentage paid in fees
fees_pct = number_of_entries * 2 * fee_rate/100
# calculate fees in USD
fees = number_of_entries * 2 * fee_rate / 100 * 10000
# calculate net profit in USD
performance_net = performance - fees
# calculate net profit percent
performance_net_pct = performance_net/10000
return pct_performance, performance, fees, performance_net, performance_net_pct
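# Example usage: unpack the five returned values, as the modeling pipelines below do
#   pct_gain, gain, fees, net_profit, net_profit_pct = performance(X_test, y_preds)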
###Output
_____no_output_____
###Markdown
Modeling Pipeline
###Code
def modeling_pipeline(csv_filenames, periods=['360T','720T','960T','1440T']):
"""Takes csv file paths of data for modeling, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles. The best models
are moved to a directory called tr_pickles at the end"""
line = '------------'
performance_list = []
for file in csv_filenames:
# define model name
name = file.split('/')[1][:-9]
# read csv
csv = pd.read_csv(file, index_col=0)
for period in periods:
max_depth_list = [17]
# max_depth_list = [17, 20, 25, 27]
for max_depth in max_depth_list:
max_features_list = [40]
# max_features_list = [40, 45, 50, 55, 60]
for max_features in max_features_list:
print(line + name + ' ' + period + ' ' + str(max_depth) + ' ' + str(max_features) + line)
# create a copy of the csv
df = csv.copy()
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_features=max_features,
max_depth=max_depth,
n_estimators=100,
n_jobs=-1,
random_state=42)
try:
# filter out datasets that are too small
if X_test.shape[0] > 500:
# fit model
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
(pickle.dump(model, open('pickles/{model}_{t}_{max_features}_{max_depth}.pkl'
.format(model=name, t=t,
max_features=str(max_features),
max_depth=str(max_depth)), 'wb')))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, max_features, max_depth, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
except:
print('error with model')
# create dataframe for model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'max_features',
'max_depth','pct_gain','gain', 'fees',
'net_profit', 'pct_net_profit'])
# sort by net profit descending and drop duplicates
df2 = df.sort_values(by='net_profit', ascending=False).drop_duplicates(subset='ex_tp')
# get the names, periods, max_features, max_depth for best models
models = df2['ex_tp'].values
periods = df2['period'].values
max_features = df2['max_features'].values
max_depth = df2['max_depth'].values
# save the best models in a new directory /tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1] + '_' + str(max_features[i]) + '_' + str(max_depth[i])
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
periods=['360T']
df, df2 = modeling_pipeline(csv_filenames, periods)
###Output
_____no_output_____
###Markdown
training models with specific parameters This part is not necessary if you do the above. It's for when you want to only train the best models if you know the parameters so you don't have to train 100s of models
###Code
def modeling_pipeline(csv_filenames, param_dict):
"""Takes csv file paths of data for modeling and parameters, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles."""
line = '------------'
performance_list = []
for file in csv_filenames:
# define model name
name = file.split('/')[1][:-9]
# read csv
df = pd.read_csv(file, index_col=0)
params = param_dict[name]
print(params)
period = params['period']
print(period)
max_features = params['max_features']
max_depth = params['max_depth']
print(line + name + ' ' + period + line)
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_features=max_features,
max_depth=max_depth,
n_estimators=100,
n_jobs=-1,
random_state=42)
# fit model
if X_train.shape[0] > 500:
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
pickle.dump(model, open('pickles/{model}_{t}.pkl'.format(model=name, t=t,), 'wb'))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
# create df of model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'pct_gain',
'gain', 'fees', 'net_profit', 'pct_net_profit'])
# sort performance by net_profit and drop duplicates
df2 = df.sort_values(by='net_profit', ascending=False).drop_duplicates(subset='ex_tp')
models = df2['ex_tp'].values
periods = df2['period'].values
# move models to new dir tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1]
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
param_dict = {'bitfinex_ltc_usd': {'period': '1440T', 'max_features': 50, 'max_depth': 20},
'hitbtc_ltc_usdt': {'period': '1440T', 'max_features': 45, 'max_depth': 27},
'coinbase_pro_ltc_usd': {'period': '960T', 'max_features': 50, 'max_depth': 17},
'hitbtc_btc_usdt': {'period': '360T', 'max_features': 40, 'max_depth': 17},
'coinbase_pro_btc_usd': {'period': '960T', 'max_features': 55, 'max_depth': 25},
'coinbase_pro_eth_usd': {'period': '960T', 'max_features': 50, 'max_depth': 27},
'bitfinex_btc_usd': {'period': '1200T', 'max_features': 55, 'max_depth': 25},
'bitfinex_eth_usd': {'period': '1200T', 'max_features': 60, 'max_depth': 20}
}
# 'hitbtc_eth_usdt': {'period': '1440T', 'max_depth': 50}
# ^ this cant go in param dict bc its trained differently
csv_paths = csv_filenames.copy()
del csv_paths[4]
print(csv_paths)
print(len(csv_paths))
len(csv_filenames)
df, df2 = modeling_pipeline(csv_paths, param_dict)
###Output
_____no_output_____
###Markdown
train hitbtc eth_usdt model separately - was a special case where it performed better with less parameters
###Code
# for the hitbtc eth usdt model
def modeling_pipeline(csv_filenames):
"""Takes csv file paths of data for modeling, performs feature engineering,
train/test split, creates a model, reports train/test score, and saves
a pickle file of the model in a directory called /pickles."""
line = '------------'
performance_list = []
for file in csv_filenames:
# define model name
name = file.split('/')[1][:-9]
# read csv
df = pd.read_csv(file, index_col=0)
period = '1440T'
print(period)
print(line + name + ' ' + period + line)
# engineer features
df = feature_engineering(df, period)
# train test split
train = df[df['date'] < '2018-10-30 23:00:00'] # cutoff oct 30 2018
test = df[df['date'] > '2019-01-01 23:00:00'] # cutoff jan 01 2019
print('train and test shape ({model}):'.format(model=name), train.shape, test.shape)
# features and target
features = df.drop(columns=['target', 'date']).columns.tolist()
target = 'target'
# define X, y vectors
X_train = train[features]
X_test = test[features]
y_train = train[target]
y_test = test[target]
# instantiate model
model = RandomForestClassifier(max_depth=50,
n_estimators=100,
n_jobs=-1,
random_state=42)
# filter out datasets that are too small
if X_train.shape[0] > 500:
# fit model
model.fit(X_train, y_train)
print('model fitted')
# train accuracy
train_score = model.score(X_train, y_train)
print('train accuracy:', train_score)
# make predictions
y_preds = model.predict(X_test)
print('predictions made')
# test accuracy
score = accuracy_score(y_test, y_preds)
print('test accuracy:', score)
# get profit and loss
a, b, c, d, e = performance(X_test, y_preds)
print(f'net profits: {str(round(d,2))}')
# formatting for filename
t = period[:-1]
# download pickle
pickle.dump(model, open('pickles/{model}_{t}.pkl'.format(model=name, t=t,), 'wb'))
print('{model} pickle saved!\n'.format(model=name))
# save net performance to list
performance_list.append([f'{name}', period, a, b, c , d, e])
else:
print('{model} does not have enough data!\n'.format(model=name))
# create df of model performance
df = pd.DataFrame(performance_list, columns = ['ex_tp', 'period', 'pct_gain',
'gain', 'fees', 'net_profit', 'pct_net_profit'])
    # df2 was never defined in this function; with a single model, the "best models" frame is just a copy of df
    df2 = df.copy()
    models = df2['ex_tp'].values
periods = df2['period'].values
# move model to new dir tr_pickles
for i in range(len(models)):
model_name = models[i] + '_' + periods[i][:-1]
os.rename(f'pickles/{model_name}.pkl', f'tr_pickles/{models[i]}.pkl')
# returning the dataframes for model performance
# df1 contains performance for all models trained
# df2 contains performance for best models
return df, df2
filepath = ['data/hitbtc_eth_usdt_3600.csv']
df, df2 = modeling_pipeline(filepath)
###Output
1440T
------------hitbtc_eth_usdt 1440T------------
train and test shape (hitbtc_eth_usdt): (543, 69) (272, 69)
model fitted
train accuracy: 1.0
predictions made
test accuracy: 0.46691176470588236
net profits: 8874.99
hitbtc_eth_usdt pickle saved!
mip-solvers/optim_analysis.ipynb | ###Markdown
Analysis of the Best Bounds

In this notebook, we now analyze the performance of the different experiment-wares (in this case, MIP solvers) in terms of the (intermediate) best bounds they found. More precisely, we compare the solvers based on the best values they can find, and how fast they find them.

Imports

As usual, we start by importing the needed classes and constants from *Metrics-Wallet*.
###Code
from metrics.wallet import BasicAnalysis, OptiAnalysis
from metrics.wallet.analysis import EXPERIMENT_INPUT
###Output
_____no_output_____
###Markdown
Loading the data of the experiments

In a [dedicated notebook](load_experiments.ipynb), we already read and preprocessed the data collected during our experiments. We can now simply reload the cached `BasicAnalysis` to retrieve it.
###Code
basic_analysis = BasicAnalysis.import_from_file('.cache')
###Output
_____no_output_____
###Markdown
For the purpose of an optimization analysis, we need to provide an additional *sampling* parameter. This sampling allows us to divide the runtime of the solvers into different steps, and to identify, for each solver, the best bound it has found at each step.
###Code
timeout = 1200
n_samples = 200
sampling = list(range(1, timeout, timeout // n_samples))
###Output
_____no_output_____
###Markdown
Since we now want to perform a more specific analysis, we need to create an `OptiAnalysis` from the `BasicAnalysis`, to get methods that are dedicated to the analysis of the bounds found by optimization solvers.
###Code
analysis = OptiAnalysis(basic_analysis=basic_analysis, samp=sampling)
###Output
_____no_output_____
###Markdown
Focus on *SCIP* As discussed in the notebook in which we loaded our experiment data, we do not have all the intermediate bounds that *CPLEX* found during its execution. As such, we cannot perform this analysis for *CPLEX*. We thus focus on the results of *SCIP* in the rest of this notebook.
###Code
analysis = analysis.keep_experiment_wares(['$SCIP_{default}$', '$SCIP_{barrier}$', '$SCIP_{barrier-crossover}$'])
###Output
_____no_output_____
###Markdown
Score computations We now need to compute various scores for the solvers we ran.We consider here the default scoring schemes provided by *Metrics*, namely:+ `optimality`, which is equal to 1 if the solver has found an optimal bound, and 0 otherwise,+ `dominance`, which is equal to 1 if the current bound is the best bound found so far for this input,+ `norm_bound`, which is the normalization of the current bound, based on the current minimum and maximum values found for this input, and+ `borda`, which is based on the Borda count method, and obtained by rating each solver for a given input. But before doing so, we observed that the instance `/home/cril/wwf/Benchmarks/MILP/enlight_hard.mps.gz` is always solved immediately by all the solvers, without any intermediate bound.We need to remove it from the analysis, as this causes failures in the scoring methods we use.
###Code
analysis = analysis.filter_analysis(lambda xp: xp[EXPERIMENT_INPUT] != '/home/cril/wwf/Benchmarks/MILP/enlight_hard.mps.gz')
###Output
_____no_output_____
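###Markdown
Before letting *Metrics* compute these scores, here is a minimal, illustrative sketch of the idea behind the `norm_bound` scheme (a toy example of ours, not the library's actual implementation): at a given time step, the current bound of each solver is rescaled linearly between the minimum and maximum bounds observed so far for that input.
###Code
# Toy illustration only: the bound values below are made up, and whether a higher or
# lower bound is better depends on the objective sense of the instance.
def norm_bound(bound, bounds_for_input):
    lo, hi = min(bounds_for_input), max(bounds_for_input)
    if hi == lo:  # degenerate case: every solver currently reports the same bound
        return 1.0
    return (bound - lo) / (hi - lo)

current_bounds = {'solver_a': 120.0, 'solver_b': 150.0, 'solver_c': 180.0}
{name: norm_bound(b, current_bounds.values()) for name, b in current_bounds.items()}
###Output
_____no_output_____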
###Markdown
Let us now compute the scores of the solvers.This computation is made **for each input**, by rating the intermediate bounds found by **all the solvers** considered in the analysis.
###Code
analysis.compute_scores()
###Output
_____no_output_____
###Markdown
Plots Now that we have computed the scores of the solvers for each input, we can draw the corresponding plots.They provide, for each solver, an aggregated view of the evolution of the quality of the bounds they found w.r.t. their runtime and the other solvers.
###Code
analysis.opti_line_plot(
col='optimality',
show_marker=False,
title='Evolution of optimality scores',
x_axis_name='Time (s)',
y_axis_name='Optimality score',
latex_writing=True,
)
analysis.opti_line_plot(
col='dominance',
show_marker=False,
title='Evolution of dominance scores',
x_axis_name='Time (s)',
y_axis_name='Dominance score',
latex_writing=True,
)
analysis.opti_line_plot(
col='norm_bound',
show_marker=False,
title='Evolution of normalized bounds',
x_axis_name='Time (s)',
y_axis_name='Normalized bound',
latex_writing=True,
)
analysis.opti_line_plot(
col='borda',
show_marker=False,
title='Evolution of Borda scores',
x_axis_name='Time (s)',
y_axis_name='Borda score',
latex_writing=True,
)
###Output
_____no_output_____ |
DAY 401 ~ 500/DAY461_[BaekJoon] 이번학기 평점은 몇점? (Python).ipynb | ###Markdown
Sunday, August 22, 2021 BaekJoon - What's My GPA This Semester? (Python) Problem: https://www.acmicpc.net/problem/2755 Blog: https://somjang.tistory.com/entry/BaekJoon-2755%EB%B2%88-%EC%9D%B4%EB%B2%88%ED%95%99%EA%B8%B0-%ED%8F%89%EC%A0%90%EC%9D%80-%EB%AA%87%EC%A0%90-Python Solution
###Code
def this_year_avg_score(grade_info):
credit_dict = {"A+": 4.3, "A0": 4.0, "A-": 3.7, "B+": 3.3, "B0": 3.0, "B-": 2.7,
"C+": 2.3, "C0": 2.0, "C-": 1.7, "D+": 1.3, "D0": 1.0, "D-": 0.7,
"F" : 0.0}
sum_score, div_num = 0, 0
for info in grade_info:
subject, grade, score = info.split()
sum_score += int(grade) * credit_dict[score]
div_num += int(grade)
avg_score = sum_score / div_num
result = avg_score * 1000
if result % 10 > 4:
result += (10 - result % 10)
result = result / 1000
return f"{result:.2f}"
if __name__ == "__main__":
report = []
for _ in range(int(input())):
grade_info = input()
report.append(grade_info)
print(this_year_avg_score(report))
###Output
_____no_output_____ |
Day_006_HW.ipynb | ###Markdown
[Assignment goal] - Apply One Hot Encoding to the specified data, following the example [Key point] - One-hot encode sub_train (In[4], Out[4])
###Code
import os
import numpy as np
import pandas as pd
# Set data_path and read app_train
dir_data = './data/'
f_app_train = os.path.join(dir_data, 'application_train.csv')
app_train = pd.read_csv(f_app_train)
import numpy as np
import pandas as pd
train = pd.read_csv('application_train.csv')
test = pd.read_csv('application_test.csv')
#train.head(5)
train.dtypes.value_counts()
train.select_dtypes('object').apply(pd.Series.nunique, axis = 0)
train_onehot = pd.get_dummies(train)
train_onehot.dtypes.value_counts() #obj 16 -> uint8 141
train_onehot.select_dtypes('uint8').head(5)
test.head(5)
## Assignment
Apply One Hot Encoding to the partial data fragment sub_train below, and observe how the number of columns (using shape) and the column names (using head) change before and after the transformation
sub_train = pd.DataFrame(train['WEEKDAY_APPR_PROCESS_START'])
print(sub_train.shape)
sub_train.apply(pd.Series.nunique, axis = 0)
sub_train_onehot = pd.get_dummies(sub_train)
print(sub_train_onehot.shape)
sub_train_onehot.head()
###Output
(147541, 7)
###Markdown
[Assignment goal] - Apply One Hot Encoding to the specified data, following the example [Key point] - One-hot encode sub_train (In[4], Out[4])
###Code
import os
import numpy as np
import pandas as pd
# Set data_path and read app_train
# dir_data = './data/'
# f_app_train = os.path.join(dir_data, 'application_train.csv')
app_train = pd.read_csv('application_train.csv')
## Assignment
Apply One Hot Encoding to the partial data fragment sub_train below, and observe how the number of columns (using shape) and the column names (using head) change before and after the transformation
sub_train = pd.DataFrame(app_train['WEEKDAY_APPR_PROCESS_START'])
print(sub_train.shape)
sub_train.head()
"""
Your Code Here
"""
sub_train = pd.get_dummies(sub_train)
print(sub_train.shape)
sub_train.head()
###Output
(21340, 7)
###Markdown
Examining and handling outliers Why do outliers occur? Common causes: * unknown values filled in arbitrarily (by convention), e.g. ages recorded as 0 or 999 * possible erroneous records / typos / systematic errors, e.g. a book with a sales volume of 1000 copies in a single order
###Code
# Import 需要的套件
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Set data_path
dir_data = './data'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
app_train.head()
###Output
Path of read in data: ./data\application_train.csv
###Markdown
Refer to the column descriptions in HomeCredit_columns_description.csv, then identify and list three columns that you think may contain outliers and explain the possible reasons
###Code
app_train.dtypes.value_counts()
# First, select the numeric columns
"""
YOUR CODE HERE, fill correct data types (for example str, float, int, ...)
"""
numeric_columns = app_train.select_dtypes(include=[np.number]).columns
# Then drop the columns that have only 2 distinct values (usually 0/1)
numeric_columns = list(app_train[numeric_columns].columns[list(app_train[numeric_columns].apply(lambda x:len(x.unique())!=2 ))])
print("Numbers of remain columns %i " % len(numeric_columns))
# Inspect the value ranges of these columns
for col in numeric_columns:
"""
Your CODE HERE, make the box plot
"""
app_train[col].plot.hist(title = col)
plt.show()
# Based on the plots above, at least these three columns look a bit suspicious
# AMT_INCOME_TOTAL
# REGION_POPULATION_RELATIVE
# OBS_60_CNT_SOCIAL_CIRCLE
select_cols = ['AMT_INCOME_TOTAL', 'REGION_POPULATION_RELATIVE', 'OBS_60_CNT_SOCIAL_CIRCLE']
for col in select_cols:
app_train[col].plot.hist(title = col)
plt.show()
###Output
_____no_output_____
###Markdown
Hints: Empirical Cumulative Density Plot, [ECDF](https://zh.wikipedia.org/wiki/%E7%BB%8F%E9%AA%8C%E5%88%86%E5%B8%83%E5%87%BD%E6%95%B0), [ECDF with Python](https://stackoverflow.com/questions/14006520/ecdf-in-python-without-step-function)
###Code
def ECDF(data):
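    # Empirical CDF: cumulative count of each sorted unique value, divided by the sample size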
F=data.value_counts().sort_index().cumsum()
m=len(data)
return F / m
# The maximum is far from the mean and the median
print(app_train['AMT_INCOME_TOTAL'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
"""
YOUR CODE HERE
"""
cdf = ECDF(app_train['AMT_INCOME_TOTAL'])
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min(), cdf.index.max() * 1.05]) # limit the displayed range
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
# Change the axis to a log scale so that we can view the ECDF properly
plt.plot(np.log(list(cdf.index)), cdf/cdf.max())
plt.xlabel('Value (log-scale)')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
###Output
count 3.075110e+05
mean 1.687979e+05
std 2.371231e+05
min 2.565000e+04
25% 1.125000e+05
50% 1.471500e+05
75% 2.025000e+05
max 1.170000e+08
Name: AMT_INCOME_TOTAL, dtype: float64
###Markdown
Supplement: the ECDF of a normal distribution
###Code
# The maximum lies outside the bulk of the distribution
print(app_train['REGION_POPULATION_RELATIVE'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
"""
Your Code Here
"""
cdf = ECDF(app_train['REGION_POPULATION_RELATIVE'])
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['REGION_POPULATION_RELATIVE'].hist()
plt.show()
app_train['REGION_POPULATION_RELATIVE'].value_counts()
# For this column, although some data falls outside the bulk of the distribution, it is not really abnormal; it just means the company has fewer branches in the somewhat busier areas,
# so region population relative is denser at the low end but sparser at the high end
# The maximum lies outside the bulk of the distribution
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
"""
Your Code Here
"""
cdf = ECDF(app_train['OBS_60_CNT_SOCIAL_CIRCLE'])
plt.plot(list(cdf.index), cdf/cdf.max())
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.xlim([cdf.index.min() * 0.95, cdf.index.max() * 1.05])
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['OBS_60_CNT_SOCIAL_CIRCLE'].hist()
plt.show()
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index(ascending = False))
###Output
count 306490.000000
mean 1.405292
std 2.379803
min 0.000000
25% 0.000000
50% 0.000000
75% 2.000000
max 344.000000
Name: OBS_60_CNT_SOCIAL_CIRCLE, dtype: float64
###Markdown
Note: when a histogram looks like the one above (only a single bar appears, but the x-axis extends so far that there is a large blank area on the right), it means there are values on the right but they are very rare. In that case, consider using value_counts to find those values
###Code
# Temporarily remove some extreme values and draw the histogram again
# Plot only the data points where OBS_60_CNT_SOCIAL_CIRCLE is less than 20
"""
Your Code Here
"""
loc_a = app_train['OBS_60_CNT_SOCIAL_CIRCLE'] < 20
loc_b = 'OBS_60_CNT_SOCIAL_CIRCLE'
app_train.loc[loc_a, loc_b].hist()
plt.show()
app_train.loc[:, loc_b].hist()
plt.show()
###Output
_____no_output_____
###Markdown
Examining and handling outliers Why do outliers occur? Common causes: * unknown values filled in arbitrarily (by convention), e.g. ages recorded as 0 or 999 * possible erroneous records / typos / systematic errors, e.g. a book with a sales volume of 1000 copies in a single order
###Code
# Import 需要的套件
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Set data_path
dir_data = './data'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
app_train.head()
###Output
Path of read in data: ./data/application_train.csv
###Markdown
Refer to the column descriptions in HomeCredit_columns_description.csv, then identify and list three columns that you think may contain outliers and explain the possible reasons
###Code
# First, select the numeric columns
"""
YOUR CODE HERE, fill correct data types (for example str, float, int, ...)
"""
num_df = app_train.select_dtypes(include=[np.number])
drop_list = list()
for colname in num_df:
unique_values = num_df[colname].unique()
if len(unique_values) == 2:
drop_list.append(colname)
#del num_df[colname] # It can also delete columns
numeric_df = num_df.drop(drop_list, axis=1)
print("Numbers of remain columns : " + str(numeric_df.shape[1]))
ncols = 4
nrows = np.ceil(numeric_df.shape[1]/4).astype(np.int)
fig, ax_arr = plt.subplots(nrows, ncols, figsize=(20, nrows*1.5))
for col_name, ax in zip(numeric_df, ax_arr.ravel()):
numeric_df.boxplot(column=col_name, ax=ax, vert=False)
ax.set_yticklabels(labels=[])
ax.set_title(label=col_name)
plt.tight_layout()
plt.show()
# Based on the plots above, at least these three columns look a bit suspicious
# AMT_INCOME_TOTAL
# REGION_POPULATION_RELATIVE
# OBS_60_CNT_SOCIAL_CIRCLE
def empirical_distribute_function(data):
'''
Computing empirical distribution
: data: 1-D numpy array
'''
sort_data = np.sort(data)
cumulate_prob = np.arange(1, len(data) + 1)/len(data)
return sort_data, cumulate_prob
###Output
_____no_output_____
###Markdown
Hints: Empirical Cumulative Density Plot, [ECDF](https://zh.wikipedia.org/wiki/%E7%BB%8F%E9%AA%8C%E5%88%86%E5%B8%83%E5%87%BD%E6%95%B0), [ECDF with Python](https://stackoverflow.com/questions/14006520/ecdf-in-python-without-step-function)
###Code
# The maximum is far from the mean and the median
print(app_train['AMT_INCOME_TOTAL'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
"""
YOUR CODE HERE
"""
sort_data, cum_prob = empirical_distribute_function(app_train['AMT_INCOME_TOTAL'])
plt.plot(sort_data, cum_prob)
plt.xlabel('Value')
plt.ylabel('ECDF')
#plt.xlim([sort_data.min(), sort_data.max()]) # limit the displayed range
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
# Change the axis to a log scale so that we can view the ECDF properly
plt.plot(np.log(sort_data), cum_prob)
plt.xlabel('Value (log-scale)')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
sub = app_train['AMT_INCOME_TOTAL']
###Output
_____no_output_____
###Markdown
Supplement: the ECDF of a normal distribution
###Code
# The maximum lies outside the bulk of the distribution
print(app_train['REGION_POPULATION_RELATIVE'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
#"""
#Your Code Here
#"""
sort_data, cum_prob = empirical_distribute_function(app_train['REGION_POPULATION_RELATIVE'])
plt.plot(sort_data, cum_prob)
plt.xlabel('Value')
plt.ylabel('ECDF')
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['REGION_POPULATION_RELATIVE'].hist()
plt.show()
app_train['REGION_POPULATION_RELATIVE'].value_counts()
# For this column, although some data falls outside the bulk of the distribution, it is not really abnormal; it just means the company has fewer branches in the somewhat busier areas,
# so region population relative is denser at the low end but sparser at the high end
# The maximum lies outside the bulk of the distribution
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].describe())
# Draw the Empirical Cumulative Density Plot (ECDF)
"""
Your Code Here
"""
sort_data, cum_prob = empirical_distribute_function(app_train['OBS_60_CNT_SOCIAL_CIRCLE'])
plt.plot(sort_data, cum_prob)
plt.xlabel('Value')
plt.ylabel('ECDF')
#plt.xlim([sort_data.min() * 0.95, sort_data.max() * 1.05])
plt.ylim([-0.05,1.05]) # limit the displayed range
plt.show()
app_train['OBS_60_CNT_SOCIAL_CIRCLE'].hist()
plt.show()
print(app_train['OBS_60_CNT_SOCIAL_CIRCLE'].value_counts().sort_index(ascending = False))
###Output
count 306490.000000
mean 1.405292
std 2.379803
min 0.000000
25% 0.000000
50% 0.000000
75% 2.000000
max 344.000000
Name: OBS_60_CNT_SOCIAL_CIRCLE, dtype: float64
###Markdown
Note: when a histogram looks like the one above (only a single bar appears, but the x-axis extends so far that there is a large blank area on the right), it means there are values on the right but they are very rare. In that case, consider using value_counts to find those values
###Code
# Temporarily remove some extreme values and draw the histogram again
# Plot only the data points where OBS_60_CNT_SOCIAL_CIRCLE is less than 20
"""
Your Code Here
"""
loc_a = app_train['OBS_60_CNT_SOCIAL_CIRCLE'] < 20
loc_b = 'OBS_60_CNT_SOCIAL_CIRCLE'
app_train.loc[loc_a, loc_b].hist()
plt.show()
###Output
_____no_output_____ |
Copy_of_nlu_1.ipynb | ###Markdown
Copyright 2018 Google LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Natural Language Understanding: WordNetPlease **make a copy** of this Colab notebook before starting this lab. To do so, choose **File**->**Save a copy in Drive**. Topics covered 1. Synsets 1. Lemmas and synonyms 1. Word hierarchies 1. Measuring similarities One of the earliest attempts to create useful representations of meaning for language is [WordNet](https://en.wikipedia.org/wiki/WordNet) -- a lexical database of words and their relationships.NLTK provides a [WordNet wrapper](http://www.nltk.org/howto/wordnet.html) that we'll use here.
###Code
import nltk
assert(nltk.download('wordnet')) # Make sure we have the wordnet data.
from nltk.corpus import wordnet as wn
###Output
_____no_output_____
###Markdown
SynsetsThe fundamental WordNet unit is a **synset**, specified by a word form, a part of speech, and an index. The synsets() function retrieves the synsets that match the given word. For example, there are 4 synsets for the word "surf", one of which is a noun (n) and three of which are verbs (v). WordNet provides a definition and sometimes glosses (examples) for each synset. **Polysemy**, by the way, means having multiple senses.
###Code
for s in wn.synsets('surf'):
print s
print '\t', s.definition()
print '\t', s.examples()
###Output
_____no_output_____
###Markdown
Lemmas and synonymsEach synset includes its corresponding **lemmas** (word forms).We can construct a set of synonyms by looking up all the lemmas for all the synsets for a word.
###Code
synonyms = set()
for s in wn.synsets('triumphant'):
for l in s.lemmas():
synonyms.add(l.name())
print 'synonyms:', ', '.join(synonyms)
###Output
_____no_output_____
###Markdown
Word hierarchiesWordNet organizes nouns and verbs into hierarchies according to **hypernym** or is-a relationships.Let's examine the path from "rutabaga" to its root in the tree, "entity".
###Code
s = wn.synsets('rutabaga')
while s:
print s[0].hypernyms()
s = s[0].hypernyms()
###Output
_____no_output_____
###Markdown
Actually, the proper way to do this is with a transitive closure, which repeatedly applies the specified function (in this case, hypernyms()).
###Code
hyper = lambda x: x.hypernyms()
s = wn.synset('rutabaga.n.01')
for i in list(s.closure(hyper)):
print i
print
ss = wn.synset('root_vegetable.n.01')
for i in list(ss.closure(hyper)):
print i
###Output
_____no_output_____
###Markdown
Measuring similarityWordNet's word hierarchies (for nouns and verbs) allow us to measure similarity in various ways.Path similarity is defined as:> $1 / (ShortestPathDistance(s_1, s_2) + 1)$where $ShortestPathDistance(s_1, s_2)$ is computed from the hypernym/hyponym graph.
###Code
s1 = wn.synset('dog.n.01')
s2 = wn.synset('cat.n.01')
s3 = wn.synset('potato.n.01')
print s1, '::', s1, s1.path_similarity(s1)
print s1, '::', s2, s1.path_similarity(s2)
print s1, '::', s3, s1.path_similarity(s3)
print s2, '::', s3, s2.path_similarity(s3)
print
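# A hedged sanity check of the formula above (this assumes NLTK's
# Synset.shortest_path_distance method): path_similarity should equal
# 1 / (shortest hypernym-graph path distance + 1)
manual = 1.0 / (s1.shortest_path_distance(s2) + 1)
print s1.path_similarity(s2), manual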
hyper = lambda x: x.hypernyms()
print(s1.hypernyms())
for i in list(s1.closure(hyper)):
print i
###Output
_____no_output_____ |
mlsec-workshop-ipinsights.ipynb | ###Markdown
Using ML with SageMaker and GuardDuty to Identify Anomalous Traffic Using IP Insights to score security findings-------[Return to the workshop instructions](https://ml-threat-detection.awssecworkshops.com/) Amazon SageMaker IP Insights is an unsupervised anomaly detection algorithm for suspicious IP addresses that uses statistical modeling and neural networks to capture associations between online resources (such as account IDs or hostnames) and IPv4 addresses. Under the hood, it learns vector representations for online resources and IP addresses. As a result, if the vectors representing an IP address and an online resource are close together, then it is likely (not surprising) for that IP address to access that online resource, even if it has never accessed it before. In this notebook, we use the Amazon SageMaker IP Insights algorithm to train a model using the tuples we generated from the CloudTrail log data, and then use the model to perform inference on the same type of tuples generated from GuardDuty findings to determine how unusual it is to see a particular IP address for a given principal involved with a finding. After running this notebook, you should be able to:- obtain, transform, and store data for use in Amazon SageMaker,- create an AWS SageMaker training job to produce an IP Insights model,- use the model to perform inference with an Amazon SageMaker endpoint. If you would like to know more, please check out the [SageMaker IP Insights Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html). Setup------*This notebook was created and tested on an ml.m4.xlarge notebook instance. We recommend using the same, but other instance types should still work.* The following is a cell that contains Python code. It can be run in two ways: 1. Selecting the cell (click anywhere inside it), and then clicking the button above labelled "Run". 2. Selecting the cell (click anywhere inside it), and typing Shift+Return on your keyboard. When a cell is running, you will see a star (\*) in the brackets to the left (e.g., `In [*]`), and when it has completed you will see a number in the brackets. Each click of "Run" will execute the next cell in the notebook. Go ahead and click **Run** now. You should see the text in the `print` statement get printed just beneath the cell. All of these cells share the same interpreter, so if a cell imports modules, like this one does, those modules will be available to every subsequent cell.
###Code
import boto3
import botocore
import os
import sagemaker
print("Welcome to IP Insights!")
###Output
_____no_output_____
###Markdown
ACTION: Configure Amazon S3 Bucket Before going further, we need to specify the S3 bucket that SageMaker will use for input and output data for the model, which will be the bucket where our training and inference tuples from CloudTrail logs and GuardDuty findings, respectively, are located. Edit the following cell to specify the name of the bucket and then run it; you do not need to change the prefix.
###Code
# Specify the full name of your "...tuplesbucket..." here (copy full bucketname from s3 console)
bucket = 'module-module-1-tuplesbucket-XXXXXXX'
prefix = ''
###Output
_____no_output_____
###Markdown
Finally, run the next cell to complete the setup.
###Code
execution_role = sagemaker.get_execution_role()
# Check if the bucket exists
try:
boto3.Session().client('s3').head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print('Hey! You either forgot to specify your S3 bucket'
' or you gave your bucket an invalid name!')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == '403':
print("Hey! You don't have permission to access the bucket, {}.".format(bucket))
elif e.response['Error']['Code'] == '404':
print("Hey! Your bucket, {}, doesn't exist!".format(bucket))
else:
raise
else:
print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix))
###Output
_____no_output_____
###Markdown
TrainingExecute the two cells below to start training. Training should take several minutes to complete, and some logging information will output to the display. (These logs are also available in CloudWatch.) You can look at various training metrics in the log as the model trains. When training is complete, you will see log output like this: >`2019-02-11 20:34:41 Completed - Training job completed` >`Billable seconds: 71`
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, 'ipinsights')
# Configure SageMaker IP Insights input channels
train_key = os.path.join(prefix, 'train', 'cloudtrail_tuples.csv')
s3_train_data = 's3://{}/{}'.format(bucket, train_key)
input_data = {
'train': sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated', content_type='text/csv')
}
# Set up the estimator with training job configuration
ip_insights = sagemaker.estimator.Estimator(
image,
execution_role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker.Session())
# Configure algorithm-specific hyperparameters
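# Rough meaning of the values below (see the SageMaker IP Insights documentation):
# num_entity_vectors ~ number of hash slots reserved for entity (principal) embeddings,
# vector_dim = size of the learned embedding vectors, and
# random_negative_sampling_rate = number of random negative samples generated per input tuple.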
ip_insights.set_hyperparameters(
num_entity_vectors='20000',
random_negative_sampling_rate='5',
vector_dim='128',
mini_batch_size='1000',
epochs='5',
learning_rate='0.01',
)
# Start the training job (should take 3-4 minutes to complete)
ip_insights.fit(input_data)
print('Training job name: {}'.format(ip_insights.latest_training_job.job_name))
###Output
_____no_output_____
###Markdown
Now Deploy ModelExecute the cell below to deploy the trained model on an endpoint for inference. It should take 5-7 minutes to spin up the instance and deploy the model (the horizontal dashed line represents progress, and it will print an exclamation point \[!\] when it is complete).
###Code
# NOW DEPLOY MODEL
predictor = ip_insights.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge'
)
# SHOW ENDPOINT NAME
print('Endpoint name: {}'.format(predictor.endpoint))
###Output
_____no_output_____
###Markdown
Inference Now that we have trained the model on known data, we can pass new data to it to generate scores. We want to see if our new data looks normal or anomalous. We can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data.
###Code
from sagemaker.predictor import csv_serializer, json_deserializer
predictor.content_type = 'text/csv'
predictor.serializer = csv_serializer
predictor.accept = 'application/json'
predictor.deserializer = json_deserializer
###Output
_____no_output_____
###Markdown
When queried by a principal and an IPAddress, the model returns a score (called 'dot_product') which indicates how expected that event is. In other words, *the higher the dot_product, the more normal the event is.* Let's first run the inference on the training (normal) data for sanity check.
###Code
import pandas as pd
# Run inference on training (normal) data for sanity check
s3_infer_data = 's3://{}/{}'.format(bucket, train_key)
inference_data = pd.read_csv(s3_infer_data)
inference_data.head()
train_dot_products = predictor.predict(inference_data.values)
# Prepare for plotting by collecting just the dot products
train_plot_data = [x['dot_product'] for x in train_dot_products['predictions']]
train_plot_data[:10]
# Plot the training data inference values as a histogram
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
n, bins, patches = plt.hist(train_plot_data, 10, facecolor='blue')
plt.xlabel('IP Insights Score')
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____
###Markdown
Notice (almost) all the values above are greater than zero.Now let's run inference on the GuardDuty findings. Since they are from GuardDuty alerts, we expect them to be generally more anomalous, so we would expect to see lower scores...
###Code
# Run inference on GuardDuty findings
infer_key = os.path.join(prefix, 'infer', 'guardduty_tuples.csv')
s3_infer_data = 's3://{}/{}'.format(bucket, infer_key)
inference_data = pd.read_csv(s3_infer_data)
inference_data.head()
GuardDuty_dot_products = predictor.predict(inference_data.values)
# Prepare GuardDuty data for plotting by collecting just the dot products
GuardDuty_plot_data = [x['dot_product'] for x in GuardDuty_dot_products['predictions']]
GuardDuty_plot_data[:10]
# Plot both the training data and the GuardDuty data together so we can compare
nT, binsT, patchesT = plt.hist(GuardDuty_plot_data, 10, facecolor='red')
nG, binsG, patchesG = plt.hist(train_plot_data, 10, facecolor='blue')
plt.legend(["GuardDuty", "Training"])
plt.xlabel('IP Insights Score')
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____
###Markdown
Using ML with SageMaker and GuardDuty to Identify Anomalous Traffic Using IP Insights to score security findings-------[Return to the workshop instructions](https://ml-threat-detection.awssecworkshops.com/) Amazon SageMaker IP Insights is an unsupervised anomaly detection algorithm for suspicious IP addresses that uses statistical modeling and neural networks to capture associations between online resources (such as account IDs or hostnames) and IPv4 addresses. Under the hood, it learns vector representations for online resources and IP addresses. As a result, if the vectors representing an IP address and an online resource are close together, then it is likely (not surprising) for that IP address to access that online resource, even if it has never accessed it before. In this notebook, we use the Amazon SageMaker IP Insights algorithm to train a model using the tuples we generated from the CloudTrail log data, and then use the model to perform inference on the same type of tuples generated from GuardDuty findings to determine how unusual it is to see a particular IP address for a given principal involved with a finding. After running this notebook, you should be able to:- obtain, transform, and store data for use in Amazon SageMaker,- create an AWS SageMaker training job to produce an IP Insights model,- use the model to perform inference with an Amazon SageMaker endpoint. If you would like to know more, please check out the [SageMaker IP Insights Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/ip-insights.html). Setup------*This notebook was created and tested on an ml.m4.xlarge notebook instance. We recommend using the same, but other instance types should still work.* The following is a cell that contains Python code. It can be run in two ways: 1. Selecting the cell (click anywhere inside it), and then clicking the button above labelled "Run". 2. Selecting the cell (click anywhere inside it), and typing Shift+Return on your keyboard. When a cell is running, you will see a star (\*) in the brackets to the left (e.g., `In [*]`), and when it has completed you will see a number in the brackets. Each click of "Run" will execute the next cell in the notebook. Go ahead and click **Run** now. You should see the text in the `print` statement get printed just beneath the cell. All of these cells share the same interpreter, so if a cell imports modules, like this one does, those modules will be available to every subsequent cell.
###Code
import boto3
import botocore
import os
import sagemaker
print("Welcome to IP Insights!")
###Output
_____no_output_____
###Markdown
ACTION: Configure Amazon S3 BucketBefore going further, we need to specify the S3 bucket that SageMaker will use for input and output data for the model, which will be the bucket where our training and inference tuples from CloudTrail logs and GuardDuty findings, respectively, are located. Edit the following cell to specify the name of the bucket and then run it; you do not need to change the prefix.
###Code
# Specify the full name of your "...tuplesbucket..." here (copy full bucketname from s3 console)
bucket = 'module-module-1-tuplesbucket-XXXXXXX'
prefix = ''
###Output
_____no_output_____
###Markdown
Finally, run the next cell to complete the setup.
###Code
execution_role = sagemaker.get_execution_role()
# Check if the bucket exists
try:
boto3.Session().client('s3').head_bucket(Bucket=bucket)
except botocore.exceptions.ParamValidationError as e:
print('Hey! You either forgot to specify your S3 bucket'
' or you gave your bucket an invalid name!')
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == '403':
print("Hey! You don't have permission to access the bucket, {}.".format(bucket))
elif e.response['Error']['Code'] == '404':
print("Hey! Your bucket, {}, doesn't exist!".format(bucket))
else:
raise
else:
print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix))
###Output
_____no_output_____
###Markdown
TrainingExecute the two cells below to start training. Training should take several minutes to complete, and some logging information will output to the display. (These logs are also available in CloudWatch.) You can look at various training metrics in the log as the model trains. When training is complete, you will see log output like this: >`2019-02-11 20:34:41 Completed - Training job completed` >`Billable seconds: 71`
###Code
from sagemaker.amazon.amazon_estimator import get_image_uri
image = get_image_uri(boto3.Session().region_name, 'ipinsights')
# Configure SageMaker IP Insights input channels
train_key = os.path.join(prefix, 'train', 'cloudtrail_tuples.csv')
s3_train_data = 's3://{}/{}'.format(bucket, train_key)
input_data = {
'train': sagemaker.session.s3_input(s3_train_data, distribution='FullyReplicated', content_type='text/csv')
}
# Set up the estimator with training job configuration
ip_insights = sagemaker.estimator.Estimator(
image,
execution_role,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
output_path='s3://{}/{}/output'.format(bucket, prefix),
sagemaker_session=sagemaker.Session())
# Configure algorithm-specific hyperparameters
ip_insights.set_hyperparameters(
num_entity_vectors='20000',
random_negative_sampling_rate='5',
vector_dim='128',
mini_batch_size='1000',
epochs='5',
learning_rate='0.01',
)
# Start the training job (should take 3-4 minutes to complete)
ip_insights.fit(input_data)
print('Training job name: {}'.format(ip_insights.latest_training_job.job_name))
###Output
_____no_output_____
###Markdown
Now Deploy ModelExecute the cell below to deploy the trained model on an endpoint for inference. It should take 5-7 minutes to spin up the instance and deploy the model (the horizontal dashed line represents progress, and it will print an exclamation point \[!\] when it is complete).
###Code
# NOW DEPLOY MODEL
predictor = ip_insights.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge'
)
# SHOW ENDPOINT NAME
print('Endpoint name: {}'.format(predictor.endpoint))
###Output
_____no_output_____
###Markdown
Inference Now that we have trained the model on known data, we can pass new data to it to generate scores. We want to see if our new data looks normal or anomalous. We can pass data in a variety of formats to our inference endpoint. In this example, we will pass CSV-formatted data.
###Code
from sagemaker.predictor import csv_serializer, json_deserializer
predictor.content_type = 'text/csv'
predictor.serializer = csv_serializer
predictor.accept = 'application/json'
predictor.deserializer = json_deserializer
###Output
_____no_output_____
###Markdown
When queried by a principal and an IPAddress, the model returns a score (called 'dot_product') which indicates how expected that event is. In other words, *the higher the dot_product, the more normal the event is.* Let's first run the inference on the training (normal) data for sanity check.
###Code
import pandas as pd
# Run inference on training (normal) data for sanity check
s3_infer_data = 's3://{}/{}'.format(bucket, train_key)
inference_data = pd.read_csv(s3_infer_data)
print(inference_data.head())
train_dot_products = predictor.predict(inference_data.values)
# Prepare for plotting by collecting just the dot products
train_plot_data = [x['dot_product'] for x in train_dot_products['predictions']]
train_plot_data[:10]
# Plot the training data inference values as a histogram
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
n, bins, patches = plt.hist(train_plot_data, 10, facecolor='blue')
plt.xlabel('IP Insights Score')
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____
###Markdown
Notice (almost) all the values above are greater than zero.Now let's run inference on the GuardDuty findings. Since they are from GuardDuty alerts, we expect them to be generally more anomalous, so we would expect to see lower scores...
###Code
# Run inference on GuardDuty findings
infer_key = os.path.join(prefix, 'infer', 'guardduty_tuples.csv')
s3_infer_data = 's3://{}/{}'.format(bucket, infer_key)
inference_data = pd.read_csv(s3_infer_data)
print(inference_data.head())
GuardDuty_dot_products = predictor.predict(inference_data.values)
# Prepare GuardDuty data for plotting by collecting just the dot products
GuardDuty_plot_data = [x['dot_product'] for x in GuardDuty_dot_products['predictions']]
GuardDuty_plot_data[:10]
# Plot both the training data and the GuardDuty data together so we can compare
nT, binsT, patchesT = plt.hist(GuardDuty_plot_data, 10, facecolor='red')
nG, binsG, patchesG = plt.hist(train_plot_data, 10, facecolor='blue')
plt.legend(["GuardDuty", "Training"])
plt.xlabel('IP Insights Score')
plt.ylabel('Frequency')
plt.show()
###Output
_____no_output_____ |
LS_DSPT3_Making_Data_backed_Assertions_Assignment_InProgress.ipynb | ###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
df = pd.read_csv("https://raw.githubusercontent.com/LambdaSchool/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv")
df.columns
persons_df = df.rename(columns={'Unnamed: 0': 'unique_id'})
persons_df.describe()
persons_df.dtypes
persons_df.plot.scatter(x='exercise_time', y='age')
persons_df.plot.scatter(x='exercise_time', y='weight')
persons_df.plot.scatter(x='weight', y='age')
# Tried these but didn't like the results
# persons_df.sort_values(by=['weight'])
# pd.crosstab(persons_df['weight'], persons_df['exercise_time'])
###Output
_____no_output_____
###Markdown
Assignment questionsAfter you've worked on some code, answer the following questions in this text block:1. What are the variable types in the data?2. What are the relationships between the variables?3. Which relationships are "real", and which spurious?
###Code
#Answers
#1. The variable types are all integers:
# age is between 18-80, weight 100-246, exercise_time 0-300
#2. There is a small negative relationship between age and exercise time,
# and a more significant one between weight and exercise time. There is also
# seemingly one between weight and age, but inconclusive from my graphs.
#3. I think the most real one is between weight and exercise time.
###Output
_____no_output_____ |
PS1/PS1_setup.ipynb | ###Markdown
Problem set 1: Financial Frictions, Liquidity and the Business Cycle. This notebook sums up relevant information on the setup for the exercises. It further suggests what to think about before going to exercise classes. The solution follows from "PS1.ipynb".
###Code
import numpy as np
import math
import itertools
from scipy import optimize
import scipy.stats as stats
import PS1 as func
# For plots:
import matplotlib.pyplot as plt
import matplotlib as mpl
%matplotlib inline
plt.style.use('seaborn-whitegrid')
mpl.style.use('seaborn')
prop_cycle = plt.rcParams["axes.prop_cycle"]
colors = prop_cycle.by_key()["color"]
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
###Output
_____no_output_____
###Markdown
Exercise 3.5 in JT (The Theory of Corporate Finance) Consider the continuous investment model with decreasing returns to scale outlined in chapter 3 of JT. The core of the model is as follows:- Let $I\in[0, \infty)$ be the level of investment in a given project.- Entrepreneur proposes a project. If successful the project generates income $R(I)$; if not the project generates $0$.- The probability of success depends on the behavior of the entrepreneur. If E behaves $(b)$ then probability of success is $p_H\in(0,1)$. If E does not behave $(nb)$ then the probability of success is $p_L$ where $0\leq p_L<p_H$. - The entrepreneur has an incentive to not behave $(nb)$, as he receives $BI$ in private benefits in this case.- The entrepreneur is endowed with A in assets. - No investment project is profitable if the entrepreneur chooses not to behave. - The entrepreneur's technology obeys the following conditions: 1. Positive, but decreasing returns to investments: $R'(I)>0, R''(I)<0$. 2. *Regularity condition 1:* Under perfect information a positive investment level would be optimal, i.e. $R'(0)>1/p_H$. 3. *Regularity condition 2:* Under perfect information a finite level of investment is optimal, i.e. $\lim_{I\rightarrow \infty} R'(I) < 1/p_H$. - Assume perfect competition between lenders.- We will consider loan agreements where the entrepreneur *pledges income* $R(I)-R_b(I)$ to the lender. This leaves the entrepreneur with $R_b(I)$ if the project is successful. Suggestions before exercise class:1. Write up the utility for an entrepreneur, in the case where he behaves $(u_b)$ and in the case where he does not $(u_{nb})$.2. Write up the *incentive compatibility constraint* (IC) stating what level of $R_b(I)$ is needed, for the entrepreneur to choose to behave.3. Write up the *individual rationality constraint* (IR) for the lender, ensuring that he will agree to the loan contract.4. What does the *perfect competition* amongst lenders imply, for the contract the entrepreneur will offer? 5. Given that lenders are profit maximizing, what does this imply for the level $R_b(I)$ in the loan contract? (Hint: Think about the (IC) constraint). *NB: You may initialize a version of the model by calling the class contInvest. This has two plotting features that you might find useful. See below*
###Code
Model_exercise35 = func.contInvest() # If you are interested in what resides in the model-instance "Model_exercise35" you can write 'Model_exercise35.__dict__' for an overview.
Model_exercise35.plot_eu() # This plots the function w. default parameter-values. For non-default parameters, see below:
###Output
_____no_output_____
###Markdown
*If you want to change parameters and run the model again, use the built-in 'upd_par' function as follows:*
###Code
par = {'pL': 0.1,
'pH': 0.9,
'B': 1} # New parameter-values. There are more parameters that you can change; this is only a selection of them.
Model_exercise35.upd_par(par)
Model_exercise35.plot_eu()
###Output
_____no_output_____
###Markdown
On a grid of assets the next plot shows:1. the 'unconstrained' solution (where there is no moral hazard), 2. the level of investment where the IC constraint (that induces 'good' behavior) is exactly binding, and 3. the equilibrium outcome.
###Code
Model_exercise35.plot_interactive_sol()
###Output
_____no_output_____
###Markdown
Exercise 6.1 in JT: Privately known private benefit and market breakdown Let us start with a brief outline of the setup (from section 6.2), compared to exercise 3.5 (a lot of it is the same and will not be repeated here):* Two types of entrepreneurs: Good and bad types with private benefits of not behaving $B_H>B_L$. (good type has $B_L$)* No equity (A=0),* Investment is not continuous, but either 0 or I.* Investment is either successful (return R) or not (return 0).* Capital markets put probability $\alpha\in(0,1)$ on type 'good' and $1-\alpha$ on type 'bad'. * Regularity conditions:$$ \begin{align} p_H\left(R-\dfrac{B_H}{\Delta p}\right)<I<p_H\left(R-\dfrac{B_L}{\Delta p}\right), && \text{and} && p_LR<I.\end{align} $$Recall that the IC condition for an entrepreneur was **behave if**: $\Delta p R_b \geq B$. The regularity conditions thus state that:1. It is not profitable for lenders to invest in a project with 'bad' entrepreneurs **and** making sure that they behave. (first inequality)2. It is profitable to invest in a 'good type' entrepreneur **and** making sure that he behaves (second inequality).3. It is not profitable to invest in **any** project where the entrepreneur mis-behaves (third inequality). Suggestions before exercise class:1. Interpret the three regularity conditions (the three inequalities). 2. If the first inequality holds and $R_b=B_H/\Delta p$, what will the two entrepreneurs do (behave/not behave)? When negotiating the contract with the lender, what will the two types have an incentive to reveal about their type?3. Same scenario for the second inequality? Exercise 6.2 in JT: More on pooling in credit markets. Compared to before, alter the setup as follows:* Continuum of types instead of good/bad. For entrepreneur $i$ the value $B_i$ is distributed according to the CDF function $H(B)$, with support on $[0, \bar{B}]$* Monopoly lender offers credit to borrowers (entrepreneurs). The lender offers $R_b$ for a successful investment, otherwise 0. Borrower $i$ then behaves **if** the IC constraint holds: $B_i\leq \Delta p R_b$. The expected profit from offering the contract is then defined by:$$\begin{align} \pi(R_b) = H\big(\Delta p R_b\big)p_H(R-R_b)+\left[1-H\big(\Delta p R_b\big)\right] p_L(R-R_b)-I, \tag{Profits}\end{align} $$where $H(\Delta p R_b)$ measures the share of borrowers that behave, i.e. with $B_i<\Delta p R_b$. From this note:* The share of high-quality borrowers increases with $R_b$ (bad types start to behave).* Same dynamics as before: Adverse selection reduces quality of lending, induces cross-subsidies between types. Suggestions before exercise class:Make sure you understand the profit function. To help you along, you can experiment with the *expected profits* function below.
###Code
par_62 = {'pL': 0.5,
'pH': 0.8,
'R': 10,
'I': 2, # investment cost
'Lower': 0, # lower bound on distribution of B
'Upper': 10} # upper bound on distribution of B
Model_exercise62 = func.poolingCredit(name='standard',**par_62)
Model_exercise62.plot_exp_profits()
###Output
_____no_output_____
###Markdown
You can change parameters by using the 'upd_par' function. Here, for instance, we lower $p_L$ and increase $p_H$:
###Code
par_update = {'pL': 0.2,
'pH': 0.9}
Model_exercise62.upd_par(par_update)
Model_exercise62.plot_exp_profits()
###Output
_____no_output_____
###Markdown
Problem set 1: Financial Frictions, Liquidity and the Business Cycle. This notebook sums up relevant information on the setup for the exercises. It further suggests what to think about before going to exercise classes. The solution follows from "PS1.ipynb". Exercise 3.5 in JT (The Theory of Corporate Finance) Consider the continuous investment model with decreasing returns to scale outlined in chapter 3 of JT. The core of the model is as follows:- Let $I\in[0, \infty)$ be the level of investment in a given project.- Entrepreneur proposes a project. If successful the project generates income $R(I)$; if not the project generates $0$.- The probability of success depends on the behavior of the entrepreneur. If E behaves $(b)$ then probability of success is $p_H\in(0,1)$. If E does not behave $(nb)$ then the probability of success is $p_L$ where $0\leq p_L<p_H$. - The entrepreneur has an incentive to not behave $(nb)$, as he receives $BI$ in private benefits in this case.- The entrepreneur is endowed with A in assets. - No investment project is profitable if the entrepreneur chooses not to behave. - The entrepreneur's technology obeys the following conditions: 1. Positive, but decreasing returns to investments: $R'(I)>0, R''(I)<0$. 2. *Regularity condition 1:* Under perfect information a positive investment level would be optimal, i.e. $R'(0)>1/p_H$. 3. *Regularity condition 2:* Under perfect information a finite level of investment is optimal, i.e. $\lim_{I\rightarrow \infty} R'(I) < 1/p_H$. - Assume perfect competition between lenders.- We will consider loan agreements where the entrepreneur *pledges income* $R(I)-R_b(I)$ to the lender. This leaves the entrepreneur with $R_b(I)$ if the project is successful. Suggestions before exercise class:1. Write up the utility for an entrepreneur, in the case where he behaves $(u_b)$ and in the case where he does not $(u_{nb})$.2. Write up the *incentive compatibility constraint* (IC) stating what level of $R_b(I)$ is needed, for the entrepreneur to choose to behave.3. Write up the *individual rationality constraint* (IR) for the lender, ensuring that he will agree to the loan contract.4. What does the *perfect competition* amongst lenders imply, for the contract the entrepreneur will offer? 5. Given that lenders are profit maximizing, what does this imply for the level $R_b(I)$ in the loan contract? (Hint: Think about the (IC) constraint). Exercise 6.1 in JT: Privately known private benefit and market breakdown Let us start with a brief outline of the setup (from section 6.2), compared to exercise 3.5 (a lot of it is the same and will not be repeated here):* Two types of entrepreneurs: Good and bad types with private benefits of not behaving $B_H>B_L$. (good type has $B_L$)* No equity (A=0),* Investment is not continuous, but either 0 or I.* Investment is either successful (return R) or not (return 0).* Capital markets put probability $\alpha\in(0,1)$ on type 'good' and $1-\alpha$ on type 'bad'. * Regularity conditions:$$ \begin{align} p_H\left(R-\dfrac{B_H}{\Delta p}\right)<I<p_H\left(R-\dfrac{B_L}{\Delta p}\right), && \text{and} && p_LR<I.\end{align} $$Recall that the IC condition for an entrepreneur was **behave if**: $\Delta p R_b \geq B$. The regularity conditions thus state that:1. It is not profitable for lenders to invest in a project with 'bad' entrepreneurs **and** making sure that they behave. (first inequality)2. It is profitable to invest in a 'good type' entrepreneur **and** making sure that he behaves (second inequality).3. 
It is not profitable to invest in **any** project where the entrepreneur mis-behaves (third inequality). Suggestions before exercise class:1. Interpret the three regularity conditions (the three inequalities). 2. If the first inequality holds and $R_b=B_H/\Delta p$, what will the two entrepreneurs do (behave/not behave)? When negotiating the contract with the lender, what will the two types have an incentive to reveal about their type?3. Same scenario for the second inequality? Exercise 6.2 in JT: More on pooling in credit markets. Compared to before, alter the setup as follows:* Continuum of types instead of good/bad. For entrepreneur $i$ the value $B_i$ is distributed according to the CDF function $H(B)$, with support on $[0, \bar{B}]$* Monopoly lender offers credit to borrowers (entrepreneurs). The lender offers $R_b$ for a successful investment, otherwise 0. Borrower $i$ then behaves **if** the IC constraint holds: $B_i\leq \Delta p R_b$. The expected profit from offering the contract is then defined by:$$\begin{align} \pi(R_b) = H\big(\Delta p R_b\big)p_H(R-R_b)+\left[1-H\big(\Delta p R_b\big)\right] p_L(R-R_b)-I, \tag{Profits}\end{align} $$where $H(\Delta p R_b)$ measures the share of borrowers that behave, i.e. with $B_i<\Delta p R_b$. From this note:* The share of high-quality borrowers increases with $R_b$ (bad types start to behave).* Same dynamics as before: Adverse selection reduces quality of lending, induces cross-subsidies between types. Suggestions before exercise class:Make sure you understand the profit function. You can experiment with the profit function by changing the parameters in the cell below and running (shift+Enter) that cell along with the next one with the graphs.
###Code
LowerBound = 0
UpperBound = 10
N = 100
x = np.linspace(LowerBound, UpperBound, N)
pH = 0.9
pL = 0.5
R = 10
I = 2
rv = stats.uniform(loc=LowerBound, scale=UpperBound-LowerBound)
Exp_profits = rv.cdf(x)*pH*(R-x)+(1-rv.cdf(x))*pL*(R-x)-I
def prof_func(y):
return rv.cdf(y)*pH*(R-y)+(1-rv.cdf(y))*pL*(R-y)-I
# For what value of B does the profits cross zero?
Zero_profits = optimize.newton(prof_func, 0)
print(Zero_profits)
fig = plt.figure(frameon=False, figsize=(8, 6), dpi=100)
ax = fig.add_subplot(1, 1, 1)
ax.plot(x, rv.cdf(x))
ax.plot(x, rv.pdf(x))
ax.set_xlim([LowerBound, UpperBound])
ax.set_ylim([0, 1])
# Labels:
ax.set_xlabel('$B$')
#ax.set_ylabel('Share of entrepreneurs with $B_i\leq B$')
plt.legend(('Share of entrepreneurs with $B_i\leq B$', '$dH/dB_i$'),
loc='upper left')
# Add a cool layout
fig.tight_layout()
fig2 = plt.figure(frameon=False, figsize=(8, 6), dpi=100)
ax2 = fig2.add_subplot(1, 1, 1)
ax2.plot(x,Exp_profits)
ax2.set_xlim([LowerBound, UpperBound])
ax2.set_ylim([math.floor(min(Exp_profits)), math.ceil(max(Exp_profits))])
ax2.set_xlabel('$B$')
ax2.set_ylabel('Expected profits')
plt.axvline(x=Zero_profits, color='k', linestyle='--')
plt.axhline(y=0, color='k')
plt.legend(('Expected profits' , 'Level of B implying zero profits' ),
loc='lower left')
plt.text(Zero_profits+0.25, (math.ceil(max(Exp_profits))-math.floor(min(Exp_profits)))/2, 'Zero profits at B='+str(Zero_profits), bbox=dict(facecolor='darkblue', alpha=0.5))
fig2.tight_layout()
###Output
_____no_output_____ |
Python-Standard-Library/FileSystem/filecomp.ipynb | ###Markdown
Example Data
###Code
import os
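# Helper: create a file whose contents default to its own filename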
def mkfile(filename, body=None):
with open(filename, 'w') as f:
f.write(body or filename)
return
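# Helper: build a small directory tree with files/dirs that are common, unique, and mismatched between dir1 and dir2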
def make_example_dir(top):
if not os.path.exists(top):
os.mkdir(top)
curdir = os.getcwd()
os.chdir(top)
os.mkdir('dir1')
os.mkdir('dir2')
mkfile('dir1/file_only_in_dir1')
mkfile('dir2/file_only_in_dir2')
os.mkdir('dir1/dir_only_in_dir1')
os.mkdir('dir2/dir_only_in_dir2')
os.mkdir('dir1/common_dir')
os.mkdir('dir2/common_dir')
mkfile('dir1/common_file', 'this file is the same')
mkfile('dir2/common_file', 'this file is the same')
mkfile('dir1/not_the_same')
mkfile('dir2/not_the_same')
mkfile('dir1/file_in_dir1', 'This is a file in dir1')
os.mkdir('dir2/file_in_dir1')
os.chdir(curdir)
return
os.chdir(os.path.dirname('filecomp.ipynb') or os.getcwd())
make_example_dir('example')
make_example_dir('example/dir1/common_dir')
make_example_dir('example/dir2/common_dir')
###Output
_____no_output_____
###Markdown
Comparing Files
###Code
import filecmp
print('common_file :', end=' ')
print(filecmp.cmp('example/dir1/common_file',
'example/dir2/common_file'),
end=' ')
print(filecmp.cmp('example/dir1/common_file',
'example/dir2/common_file',
shallow=False))
print('not_the_same:', end=' ')
print(filecmp.cmp('example/dir1/not_the_same',
'example/dir2/not_the_same'),
end=' ')
print(filecmp.cmp('example/dir1/not_the_same',
'example/dir2/not_the_same',
shallow=False))
print('identical :', end=' ')
print(filecmp.cmp('example/dir1/file_only_in_dir1',
'example/dir1/file_only_in_dir1'),
end=' ')
print(filecmp.cmp('example/dir1/file_only_in_dir1',
'example/dir1/file_only_in_dir1',
shallow=False))
import filecmp
import os
# Determine the items that exist in both directories
d1_contents = set(os.listdir('example/dir1'))
d2_contents = set(os.listdir('example/dir2'))
common = list(d1_contents & d2_contents)
common_files = [
f
for f in common
if os.path.isfile(os.path.join('example/dir1', f))
]
print('Common files:', common_files)
# Compare the directories
match, mismatch, errors = filecmp.cmpfiles(
'example/dir1',
'example/dir2',
common_files,
)
print('Match :', match)
print('Mismatch :', mismatch)
print('Errors :', errors)
###Output
Common files: ['file_in_dir1', 'common_file', 'not_the_same']
Match : ['common_file', 'not_the_same']
Mismatch : ['file_in_dir1']
Errors : []
###Markdown
Comparing Directories
###Code
import filecmp
dc = filecmp.dircmp('example/dir1', 'example/dir2')
dc.report()
import filecmp
dc = filecmp.dircmp('example/dir1', 'example/dir2')
dc.report_full_closure()
###Output
diff example/dir1 example/dir2
Only in example/dir1 : ['dir_only_in_dir1', 'file_only_in_dir1']
Only in example/dir2 : ['dir_only_in_dir2', 'file_only_in_dir2']
Identical files : ['common_file', 'not_the_same']
Common subdirectories : ['common_dir']
Common funny cases : ['file_in_dir1']
diff example/dir1/common_dir example/dir2/common_dir
Common subdirectories : ['dir1', 'dir2']
diff example/dir1/common_dir/dir1 example/dir2/common_dir/dir1
Identical files : ['common_file', 'file_in_dir1', 'file_only_in_dir1', 'not_the_same']
Common subdirectories : ['common_dir', 'dir_only_in_dir1']
diff example/dir1/common_dir/dir1/common_dir example/dir2/common_dir/dir1/common_dir
diff example/dir1/common_dir/dir1/dir_only_in_dir1 example/dir2/common_dir/dir1/dir_only_in_dir1
diff example/dir1/common_dir/dir2 example/dir2/common_dir/dir2
Identical files : ['common_file', 'file_only_in_dir2', 'not_the_same']
Common subdirectories : ['common_dir', 'dir_only_in_dir2', 'file_in_dir1']
diff example/dir1/common_dir/dir2/common_dir example/dir2/common_dir/dir2/common_dir
diff example/dir1/common_dir/dir2/dir_only_in_dir2 example/dir2/common_dir/dir2/dir_only_in_dir2
diff example/dir1/common_dir/dir2/file_in_dir1 example/dir2/common_dir/dir2/file_in_dir1
###Markdown
Using Differences in a Program
###Code
import filecmp
import pprint
dc = filecmp.dircmp('example/dir1', 'example/dir2')
print('Left:')
pprint.pprint(dc.left_list)
print('\nRight:')
pprint.pprint(dc.right_list)
import filecmp
import pprint
dc = filecmp.dircmp('example/dir1', 'example/dir2',
ignore=['common_file'])
print('Left:')
pprint.pprint(dc.left_list)
print('\nRight:')
pprint.pprint(dc.right_list)
import filecmp
import pprint
dc = filecmp.dircmp('example/dir1', 'example/dir2')
print('Common:')
pprint.pprint(dc.common)
print('\nLeft:')
pprint.pprint(dc.left_only)
print('\nRight:')
pprint.pprint(dc.right_only)
import filecmp
import pprint
dc = filecmp.dircmp('example/dir1', 'example/dir2')
print('Common:')
pprint.pprint(dc.common)
print('\nDirectories:')
pprint.pprint(dc.common_dirs)
print('\nFiles:')
pprint.pprint(dc.common_files)
print('\nFunny:')
pprint.pprint(dc.common_funny)
###Output
Common:
['common_dir', 'common_file', 'file_in_dir1', 'not_the_same']
Directories:
['common_dir']
Files:
['common_file', 'not_the_same']
Funny:
['file_in_dir1']
|
Neural_Model.ipynb | ###Markdown
SI 670: Applied Machine Learning Final Project
Music Genre Classification
Matt Whitehead (mwwhite)
Neural Modeling
###Code
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from keras.utils import to_categorical
from keras import Model
from keras.layers import Dense, Flatten, Input, Dropout
from keras.applications.vgg16 import VGG16
from keras import regularizers
X = np.load('X.npy')
y = np.load('y.npy')
X = np.stack((X,) * 3, -1)
y = pd.factorize(y)[0]
y = to_categorical(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1026, test_size=0.3)
base_model = VGG16(include_top=False, weights='imagenet', input_shape=X_train[0].shape)
flatten = Flatten()(base_model.output)
dense = Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.001))(flatten)
drop = Dropout(0.3)(dense)
out = Dense(10, activation='softmax')(drop)
model = Model(inputs=base_model.input, outputs=out)
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.summary()
hist = model.fit(X_train, y_train,
batch_size=256,
epochs=30,
verbose=1,
validation_data=(X_test, y_test))
loss, accuracy = model.evaluate(X_train, y_train, verbose=False)
print("Training Accuracy: {:.4f}".format(accuracy))
loss, accuracy = model.evaluate(X_test, y_test, verbose=False)
print("Testing Accuracy: {:.4f}".format(accuracy))
# I recycled this code from a previous deep learning project of mine
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def plot_history(history):
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
x = range(1, len(acc) + 1)
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(x, acc, 'b', label='Training acc')
plt.plot(x, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(x, loss, 'b', label='Training loss')
plt.plot(x, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plot_history(hist)
###Output
_____no_output_____ |
ipython_examples/Performance tests.ipynb | ###Markdown
Models to use in performance test
###Code
class egfngf_model:
def __init__(self):
self.name = 'egfngf'
self.ts = linspace(0, 120, 121, dtype=float)
self.has_userdata = True
self.has_userdata_odes = True
self.k = [
2.18503E-5,
0.0121008,
1.38209E-7,
0.00723811,
694.731,
6086070.0,
389.428,
2112.66,
1611.97,
896896.0,
32.344,
35954.3,
1509.36,
1432410.0,
0.884096,
62464.6,
185.759,
4768350.0,
125.089,
157948.0,
2.83243,
518753.0,
9.85367,
1007340.0,
8.8912,
3496490.0,
0.0213697,
763523.0,
10.6737,
184912.0,
0.0771067,
272056.0,
0.0566279,
653951.0,
15.1212,
119355.0,
146.912,
12876.2,
1.40145,
10965.6,
27.265,
295990.0,
2.20995,
1025460.0,
0.126329,
1061.71,
441.287,
1.08795E7
]
self.userdata = self.k
self.y0 = [
1000,
4560,
80000.0,
0.0,
10000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
600000.0,
0.0,
600000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
120000.0,
120000.0,
120000.0
]
def f(self, t, y, k):
return [
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((-1.0 * k[2] * y[1] * y[4])) + (1.0 * k[3] * y[5]),
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((1.0 * k[0] * y[0] * y[2]) + (-1.0 * k[1] * y[3])),
((-1.0 * k[2] * y[1] * y[4]) + (1.0 * k[3] * y[5])),
((1.0 * k[2] * y[1] * y[4]) + (-1.0 * k[3] * y[5])),
((-1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (-1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
-1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((-1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((-1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (-1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((-1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (-1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
-1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((-1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (-1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((-1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (-1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
-1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((-1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (-1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((-1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (-1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((-1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((-1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((-1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
((1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (-1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
0,
0,
0,
0
]
def f_odes(self, t, y, yout, k):
yout[:] = [
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((-1.0 * k[2] * y[1] * y[4])) + (1.0 * k[3] * y[5]),
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((1.0 * k[0] * y[0] * y[2]) + (-1.0 * k[1] * y[3])),
((-1.0 * k[2] * y[1] * y[4]) + (1.0 * k[3] * y[5])),
((1.0 * k[2] * y[1] * y[4]) + (-1.0 * k[3] * y[5])),
((-1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (-1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
-1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((-1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((-1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (-1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((-1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (-1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
-1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((-1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (-1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((-1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (-1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
-1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((-1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (-1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((-1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (-1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((-1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((-1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((-1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
((1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (-1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
0,
0,
0,
0
]
return 0
%load_ext Cython
%%cython -I /home/benny/git/odes/scikits/odes/sundials/ -I /usr/local/lib/python3.5/dist-packages/scikits.odes-2.3.0.dev0-py3.5-linux-x86_64.egg/scikits/odes/sundials/
## update include flag -I to point to odes/sundials directory!
import numpy as np
from cpython cimport bool
cimport numpy as np
from scikits.odes.sundials.cvode cimport CV_RhsFunction
#scikits.odes allows cython functions only if derived from correct class
cdef class egfngf_cython_model(CV_RhsFunction):
cdef public ts, k, y0, userdata
cdef public object name
cdef public CV_RhsFunction f_odes
cdef public bool has_userdata, has_userdata_odes
def __cinit__(self):
self.name = 'egfngf_cython'
self.ts = np.linspace(0, 120, 121, dtype=float)
self.has_userdata = True
self.has_userdata_odes = True
self.k = np.array([
2.18503E-5,
0.0121008,
1.38209E-7,
0.00723811,
694.731,
6086070.0,
389.428,
2112.66,
1611.97,
896896.0,
32.344,
35954.3,
1509.36,
1432410.0,
0.884096,
62464.6,
185.759,
4768350.0,
125.089,
157948.0,
2.83243,
518753.0,
9.85367,
1007340.0,
8.8912,
3496490.0,
0.0213697,
763523.0,
10.6737,
184912.0,
0.0771067,
272056.0,
0.0566279,
653951.0,
15.1212,
119355.0,
146.912,
12876.2,
1.40145,
10965.6,
27.265,
295990.0,
2.20995,
1025460.0,
0.126329,
1061.71,
441.287,
1.08795E7
], float)
self.userdata = self.k
self.y0 = np.array([
1000,
4560,
80000.0,
0.0,
10000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
600000.0,
0.0,
600000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
0.0,
120000.0,
120000.0,
120000.0,
120000.0
], float)
cpdef np.ndarray[double, ndim=1] f(self, double t, np.ndarray[double, ndim=1] y,
np.ndarray[double, ndim=1] k):
return np.array([
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((-1.0 * k[2] * y[1] * y[4])) + (1.0 * k[3] * y[5]),
((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3]),
((1.0 * k[0] * y[0] * y[2]) + (-1.0 * k[1] * y[3])),
((-1.0 * k[2] * y[1] * y[4]) + (1.0 * k[3] * y[5])),
((1.0 * k[2] * y[1] * y[4]) + (-1.0 * k[3] * y[5])),
((-1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (-1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
-1.0 * k[8] * y[9] * y[7] / (y[7] + k[9]))),
((-1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((1.0 * k[26] * y[19] * y[8] / (y[8] + k[27]))),
((-1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (-1.0 * k[12] * y[28] * y[11] / (y[11] + k[13]))),
((-1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (-1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
-1.0 * k[34] * y[23] * y[13] / (y[13] + k[35]))),
((-1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (-1.0 * k[46] * y[31] * y[15] / (y[15] + k[47]))),
((-1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (-1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
-1.0 * k[20] * y[30] * y[17] / (y[17] + k[21]))),
((-1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (-1.0 * k[24] * y[30] * y[19] / (y[19] + k[25]))),
((-1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (-1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (1.0 * k[30] * y[11] * y[20] / (y[20] + k[31]))),
((-1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((1.0 * k[32] * y[21] * y[22] / (y[22] + k[33]))),
((-1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((1.0 * k[36] * y[5] * y[24] / (y[24] + k[37]))),
((-1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
((1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (-1.0 * k[40] * y[29] * y[27] / (y[27] + k[41]))),
0,
0,
0,
0], float)
cpdef int evaluate(self, double t,
np.ndarray[double, ndim=1] y,
np.ndarray[double, ndim=1] yout,
object userdata = None) except? -1:
        #cdef np.ndarray[double, ndim=1] k = self.k  # using a typed local k instead of self.k gives quite a speedup!
cdef np.ndarray[double, ndim=1] k = userdata
# avoiding creation of temporary arrays gives quite some speedup!
yout[0] = ((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3])
yout[1] = ((-1.0 * k[2] * y[1] * y[4])) + (1.0 * k[3] * y[5])
yout[2] = ((-1.0 * k[0] * y[0] * y[2])) + (1.0 * k[1] * y[3])
yout[3] = ((1.0 * k[0] * y[0] * y[2]) + (-1.0 * k[1] * y[3]))
yout[4] = ((-1.0 * k[2] * y[1] * y[4]) + (1.0 * k[3] * y[5]))
yout[5] = ((1.0 * k[2] * y[1] * y[4]) + (-1.0 * k[3] * y[5]))
yout[6] = ((-1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (-1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
1.0 * k[8] * y[9] * y[7] / (y[7] + k[9])))
yout[7] = ((1.0 * k[4] * y[3] * y[6] / (y[6] + k[5])) + (1.0 * k[6] * y[5] * y[6] / (y[6] + k[7])) + (
-1.0 * k[8] * y[9] * y[7] / (y[7] + k[9])))
yout[8] = ((-1.0 * k[26] * y[19] * y[8] / (y[8] + k[27])))
yout[9] = ((1.0 * k[26] * y[19] * y[8] / (y[8] + k[27])))
yout[10] = ((-1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (1.0 * k[12] * y[28] * y[11] / (y[11] + k[13])))
yout[11] = ((1.0 * k[10] * y[7] * y[10] / (y[10] + k[11])) + (-1.0 * k[12] * y[28] * y[11] / (y[11] + k[13])))
yout[12] = ((-1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
1.0 * k[34] * y[23] * y[13] / (y[13] + k[35])))
yout[13] = ((1.0 * k[14] * y[11] * y[12] / (y[12] + k[15])) + (-1.0 * k[44] * y[31] * y[13] / (y[13] + k[45])) + (
-1.0 * k[34] * y[23] * y[13] / (y[13] + k[35])))
yout[14] = ((-1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (1.0 * k[46] * y[31] * y[15] / (y[15] + k[47])))
yout[15] = ((1.0 * k[42] * y[27] * y[14] / (y[14] + k[43])) + (-1.0 * k[46] * y[31] * y[15] / (y[15] + k[47])))
yout[16] = ((-1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (-1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
1.0 * k[20] * y[30] * y[17] / (y[17] + k[21])))
yout[17] = ((1.0 * k[16] * y[13] * y[16] / (y[16] + k[17])) + (1.0 * k[18] * y[15] * y[16] / (y[16] + k[19])) + (
-1.0 * k[20] * y[30] * y[17] / (y[17] + k[21])))
yout[18] = ((-1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (1.0 * k[24] * y[30] * y[19] / (y[19] + k[25])))
yout[19] = ((1.0 * k[22] * y[17] * y[18] / (y[18] + k[23])) + (-1.0 * k[24] * y[30] * y[19] / (y[19] + k[25])))
yout[20] = ((-1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (-1.0 * k[30] * y[11] * y[20] / (y[20] + k[31])))
yout[21] = ((1.0 * k[28] * y[3] * y[20] / (y[20] + k[29])) + (1.0 * k[30] * y[11] * y[20] / (y[20] + k[31])))
yout[22] = ((-1.0 * k[32] * y[21] * y[22] / (y[22] + k[33])))
yout[23] = ((1.0 * k[32] * y[21] * y[22] / (y[22] + k[33])))
yout[24] = ((-1.0 * k[36] * y[5] * y[24] / (y[24] + k[37])))
yout[25] = ((1.0 * k[36] * y[5] * y[24] / (y[24] + k[37])))
yout[26] = ((-1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (1.0 * k[40] * y[29] * y[27] / (y[27] + k[41])))
yout[27] = ((1.0 * k[38] * y[25] * y[26] / (y[26] + k[39])) + (-1.0 * k[40] * y[29] * y[27] / (y[27] + k[41])))
yout[28] = 0
yout[29] = 0
yout[30] = 0
yout[31] = 0
return 0
model2 = egfngf_cython_model()
# for the performance comparator, f_odes is the right hand side.
# For cython odes, it must be CV_RhsFunction, so we make a circular link:
model2.f_odes = model2
models = [egfngf_model(), model2]
###Output
_____no_output_____
###Markdown
Methods to use to solve the models
###Code
class scipy_ode_int:
name = 'odeint'
def __call__(self, model, rtol):
def reordered_ode_userdata(t, y):
return model.f(y, t, model.userdata)
def reordered_ode(t, y):
return model.f(y, t)
if model.has_userdata:
result = odeint(reordered_ode_userdata, model.y0, model.ts, rtol=rtol)
else:
result = odeint(reordered_ode, model.y0, model.ts, rtol=rtol)
return result
class scipy_ode_class:
def __init__(self, name):
self.name = name
space_pos = name.find(" ")
if space_pos > -1:
self.solver = name[0:space_pos]
self.method = name[space_pos+1:]
else:
self.solver = name
self.method = None
def __call__(self, model, rtol):
solver = ode(model.f)
solver.set_integrator(self.solver, method=self.method, rtol=rtol,
nsteps=10000)
solver.set_initial_value(model.y0, 0.0)
if model.has_userdata:
solver.set_f_params(model.userdata)
result = np.empty((len(model.ts), len(model.y0)))
        for i, t in enumerate(model.ts):  # t == 0.0 is not integrated; the initial state is copied below
if t == 0:
result[i, :] = model.y0
continue
result[i, :] = solver.integrate(t)
return result
class scipy_odes_class(scipy_ode_class):
def __call__(self, model, rtol):
userdata = None
if model.has_userdata_odes:
userdata = model.userdata
solver = odes_ode(self.solver, model.f_odes, old_api=False,
lmm_type=self.method, rtol=rtol,
user_data = userdata)
        solution = solver.solve(model.ts, model.y0)
        result = np.empty((len(model.ts), len(model.y0)))
for i, t in enumerate(model.ts):
try:
result[i, :] = solution.values.y[i]
except:
# no valid solution anymore
result[i, :] = 0
return result
class scipy_solver_class:
def __init__(self, name):
self.name = name
def __call__(self, model, rtol):
def collected_ode_userdata(t, y):
return model.f(t, y, model.userdata)
def collected_ode(t, y):
return model.f(t, y)
if model.has_userdata:
sol = solve_ivp(collected_ode_userdata, [0.0, np.max(model.ts)], model.y0, method=self.name, rtol=rtol, t_eval=model.ts)
else:
sol = solve_ivp(collected_ode, [0.0, np.max(model.ts)], model.y0, method=self.name, rtol=rtol, t_eval=model.ts)
return sol.y.transpose()
methods = [
scipy_ode_int(),
scipy_ode_class("vode bdf"),
scipy_ode_class("vode adams"),
scipy_ode_class("lsoda"),
scipy_ode_class("dopri5"),
scipy_ode_class("dop853"),
]
if HAS_SOLVEIVP:
methods += [scipy_solver_class("RK45"),
scipy_solver_class("RK23"),
scipy_solver_class("Radau"),
scipy_solver_class("BDF"),
]
if HAS_ODES:
methods += [scipy_odes_class("cvode BDF"),
scipy_odes_class("cvode ADAMS"),
]
###Output
_____no_output_____
###Markdown
Compare the methods with the gold standard
###Code
rtols = 10 ** np.arange(-9.0, 0.0)
GoldStandard = namedtuple('GoldStandard', ['name', 'values', 'max'])
gold_standards = []
for model in models:
print('Gold standard for {}'.format(model.name))
result = methods[0](model, 1e-12)
gold_standards.append((model.name, GoldStandard(model.name, result, np.max(result))))
gold_standards = OrderedDict(gold_standards)
data = []
for method in methods:
for model in models:
for rtol in rtols:
print('method: {} model: {} rtol: {}'.format(method.name, model.name, rtol), end='')
# Run
tic = time.time()
result = method(model, rtol)
toc = time.time() - tic
# Compare to gold standard
standard = gold_standards[model.name]
diff = result - standard.values
max_rel_diff = np.max(diff/standard.max)
# Append to table
record = (method.name, model.name, rtol, max_rel_diff, toc)
print(' err: {} toc: {}'.format(max_rel_diff, toc))
data.append(record)
data = DataFrame(data, columns=['method', 'model', 'rtol', 'err', 'time'])
###Output
Gold standard for egfngf
Gold standard for egfngf_cython
method: odeint model: egfngf rtol: 1e-09 err: 7.494591409340501e-10 toc: 0.35076093673706055
method: odeint model: egfngf rtol: 1e-08 err: 3.5497856151778252e-09 toc: 0.29200196266174316
method: odeint model: egfngf rtol: 1e-07 err: 3.1049782798315086e-08 toc: 0.1904611587524414
method: odeint model: egfngf rtol: 1e-06 err: 2.503034344408661e-07 toc: 0.17496275901794434
method: odeint model: egfngf rtol: 1e-05 err: 2.5149287212601244e-06 toc: 0.1682443618774414
method: odeint model: egfngf rtol: 0.0001 err: 8.5922166040109e-06 toc: 0.16126561164855957
method: odeint model: egfngf rtol: 0.001 err: 0.00038126556983120586 toc: 0.07489609718322754
method: odeint model: egfngf rtol: 0.01 err: 0.0012045689673627687 toc: 0.07285642623901367
method: odeint model: egfngf rtol: 0.1 err: 0.008254680445357905 toc: 0.07691764831542969
method: odeint model: egfngf_cython rtol: 1e-09 err: 7.494591409340501e-10 toc: 0.05883359909057617
method: odeint model: egfngf_cython rtol: 1e-08 err: 3.5497856151778252e-09 toc: 0.04954171180725098
method: odeint model: egfngf_cython rtol: 1e-07 err: 3.1049782798315086e-08 toc: 0.03227996826171875
method: odeint model: egfngf_cython rtol: 1e-06 err: 2.503034344408661e-07 toc: 0.029372215270996094
method: odeint model: egfngf_cython rtol: 1e-05 err: 2.5149287212601244e-06 toc: 0.02083730697631836
method: odeint model: egfngf_cython rtol: 0.0001 err: 8.5922166040109e-06 toc: 0.054701805114746094
method: odeint model: egfngf_cython rtol: 0.001 err: 0.00038126556983120586 toc: 0.015133380889892578
method: odeint model: egfngf_cython rtol: 0.01 err: 0.0012045689673627687 toc: 0.02105879783630371
method: odeint model: egfngf_cython rtol: 0.1 err: 0.008254680445357905 toc: 0.01078653335571289
method: vode bdf model: egfngf rtol: 1e-09 err: 1.582705740778086e-08 toc: 1.3630039691925049
method: vode bdf model: egfngf rtol: 1e-08 err: 7.414737328266103e-08 toc: 1.3709888458251953
method: vode bdf model: egfngf rtol: 1e-07 err: 2.6406813733046877e-07 toc: 1.329686164855957
method: vode bdf model: egfngf rtol: 1e-06 err: 2.173654429070666e-06 toc: 1.4558334350585938
method: vode bdf model: egfngf rtol: 1e-05 err: 4.7244020594807805e-05 toc: 1.1136410236358643
method: vode bdf model: egfngf rtol: 0.0001 err: 0.0003194977287663884 toc: 1.0986061096191406
method: vode bdf model: egfngf rtol: 0.001 err: 0.0005875247020684765 toc: 1.0350258350372314
method: vode bdf model: egfngf rtol: 0.01 err: 0.0006779951775676454 toc: 1.0145373344421387
method: vode bdf model: egfngf rtol: 0.1 err: 0.0007534157881480254 toc: 1.005566120147705
method: vode bdf model: egfngf_cython rtol: 1e-09 err: 1.582705740778086e-08 toc: 0.2337808609008789
method: vode bdf model: egfngf_cython rtol: 1e-08 err: 7.414737328266103e-08 toc: 0.23300743103027344
method: vode bdf model: egfngf_cython rtol: 1e-07 err: 2.6406813733046877e-07 toc: 0.2381274700164795
method: vode bdf model: egfngf_cython rtol: 1e-06 err: 2.173654429070666e-06 toc: 0.2519502639770508
method: vode bdf model: egfngf_cython rtol: 1e-05 err: 4.7244020594807805e-05 toc: 0.19704341888427734
method: vode bdf model: egfngf_cython rtol: 0.0001 err: 0.0003194977287663884 toc: 0.19471335411071777
method: vode bdf model: egfngf_cython rtol: 0.001 err: 0.0005875247020684765 toc: 0.181898832321167
method: vode bdf model: egfngf_cython rtol: 0.01 err: 0.0006779951775676454 toc: 0.1848597526550293
method: vode bdf model: egfngf_cython rtol: 0.1 err: 0.0007534157881480254 toc: 0.17755579948425293
method: vode adams model: egfngf rtol: 1e-09 err: 1.1494973053534825e-09 toc: 0.8964440822601318
method: vode adams model: egfngf rtol: 1e-08 err: 2.755623655199694e-08 toc: 0.9457309246063232
method: vode adams model: egfngf rtol: 1e-07 err: 9.030623788324496e-08 toc: 3.319200277328491
method: vode adams model: egfngf rtol: 1e-06 err: 2.1279329442101396e-06 toc: 1.3783111572265625
method: vode adams model: egfngf rtol: 1e-05 err: 2.7317312334683568e-05 toc: 1.0065898895263672
method: vode adams model: egfngf rtol: 0.0001 err: 0.00015762923472056477 toc: 1.0114195346832275
method: vode adams model: egfngf rtol: 0.001 err: 0.00043566122939189274 toc: 1.0354232788085938
method: vode adams model: egfngf rtol: 0.01 err: 0.0007342446711998006 toc: 1.0209317207336426
method: vode adams model: egfngf rtol: 0.1 err: 0.0007754136760953892 toc: 1.012150526046753
method: vode adams model: egfngf_cython rtol: 1e-09 err: 1.1494973053534825e-09 toc: 0.16318130493164062
method: vode adams model: egfngf_cython rtol: 1e-08 err: 2.755623655199694e-08 toc: 0.1711866855621338
method: vode adams model: egfngf_cython rtol: 1e-07 err: 9.030623788324496e-08 toc: 0.6225242614746094
method: vode adams model: egfngf_cython rtol: 1e-06 err: 2.1279329442101396e-06 toc: 0.2435927391052246
method: vode adams model: egfngf_cython rtol: 1e-05 err: 2.7317312334683568e-05 toc: 0.17838668823242188
method: vode adams model: egfngf_cython rtol: 0.0001 err: 0.00015762923472056477 toc: 0.17899847030639648
method: vode adams model: egfngf_cython rtol: 0.001 err: 0.00043566122939189274 toc: 0.18297147750854492
method: vode adams model: egfngf_cython rtol: 0.01 err: 0.0007342446711998006 toc: 0.18081426620483398
method: vode adams model: egfngf_cython rtol: 0.1 err: 0.0007754136760953892 toc: 0.18032503128051758
method: lsoda model: egfngf rtol: 1e-09 err: 7.9299759818241e-10 toc: 0.30083489418029785
method: lsoda model: egfngf rtol: 1e-08 err: 2.9678468126803637e-09 toc: 0.2960207462310791
method: lsoda model: egfngf rtol: 1e-07 err: 4.500059920246713e-08 toc: 0.27411675453186035
method: lsoda model: egfngf rtol: 1e-06 err: 1.0189701929145183e-07 toc: 0.15819025039672852
method: lsoda model: egfngf rtol: 1e-05 err: 1.3192638509887426e-06 toc: 0.20678973197937012
method: lsoda model: egfngf rtol: 0.0001 err: 1.3745593086108177e-05 toc: 0.129103422164917
method: lsoda model: egfngf rtol: 0.001 err: 0.00025077333410454837 toc: 0.15668439865112305
method: lsoda model: egfngf rtol: 0.01 err: 0.0017300367523133658 toc: 0.11679291725158691
method: lsoda model: egfngf rtol: 0.1 err: 0.001518197812709744 toc: 0.26616454124450684
method: lsoda model: egfngf_cython rtol: 1e-09 err: 7.9299759818241e-10 toc: 0.1106269359588623
method: lsoda model: egfngf_cython rtol: 1e-08 err: 2.9678468126803637e-09 toc: 0.05040550231933594
method: lsoda model: egfngf_cython rtol: 1e-07 err: 4.500059920246713e-08 toc: 0.08156490325927734
method: lsoda model: egfngf_cython rtol: 1e-06 err: 1.0189701929145183e-07 toc: 0.02878880500793457
method: lsoda model: egfngf_cython rtol: 1e-05 err: 1.3192638509887426e-06 toc: 0.03278160095214844
method: lsoda model: egfngf_cython rtol: 0.0001 err: 1.3745593086108177e-05 toc: 0.02436351776123047
method: lsoda model: egfngf_cython rtol: 0.001 err: 0.00025077333410454837 toc: 0.021378040313720703
method: lsoda model: egfngf_cython rtol: 0.01 err: 0.0017300367523133658 toc: 0.014645576477050781
method: lsoda model: egfngf_cython rtol: 0.1 err: 0.001518197812709744 toc: 0.013463258743286133
method: dopri5 model: egfngf rtol: 1e-09 err: 3.487213689368218e-11 toc: 1.1242473125457764
method: dopri5 model: egfngf rtol: 1e-08 err: 3.0333498822680366e-10 toc: 0.8492116928100586
method: dopri5 model: egfngf rtol: 1e-07 err: 1.4577740997386476e-09 toc: 0.7640457153320312
method: dopri5 model: egfngf rtol: 1e-06 err: 3.179589143352738e-08 toc: 0.7960407733917236
method: dopri5 model: egfngf rtol: 1e-05 err: 1.7986466224707935e-07 toc: 0.84712815284729
method: dopri5 model: egfngf rtol: 0.0001 err: 2.3582500277264748e-06 toc: 0.7310514450073242
method: dopri5 model: egfngf rtol: 0.001 err: 2.2133255557008247e-05 toc: 0.7199337482452393
method: dopri5 model: egfngf rtol: 0.01 err: 0.0002418223835189565 toc: 0.7273342609405518
method: dopri5 model: egfngf rtol: 0.1 err: 0.003087030920236963 toc: 0.7233633995056152
method: dopri5 model: egfngf_cython rtol: 1e-09 err: 3.487213689368218e-11 toc: 0.16988801956176758
method: dopri5 model: egfngf_cython rtol: 1e-08 err: 3.0333498822680366e-10 toc: 0.13481640815734863
method: dopri5 model: egfngf_cython rtol: 1e-07 err: 1.4577740997386476e-09 toc: 0.12070393562316895
method: dopri5 model: egfngf_cython rtol: 1e-06 err: 3.179589143352738e-08 toc: 0.12108230590820312
method: dopri5 model: egfngf_cython rtol: 1e-05 err: 1.7986466224707935e-07 toc: 0.12436628341674805
method: dopri5 model: egfngf_cython rtol: 0.0001 err: 2.3582500277264748e-06 toc: 0.11689949035644531
method: dopri5 model: egfngf_cython rtol: 0.001 err: 2.2133255557008247e-05 toc: 0.12184667587280273
method: dopri5 model: egfngf_cython rtol: 0.01 err: 0.0002418223835189565 toc: 0.11848235130310059
method: dopri5 model: egfngf_cython rtol: 0.1 err: 0.003087030920236963 toc: 0.11866450309753418
method: dop853 model: egfngf rtol: 1e-09 err: 1.25751830637455e-11 toc: 1.3260033130645752
method: dop853 model: egfngf rtol: 1e-08 err: 6.61048970111248e-11 toc: 1.14265775680542
method: dop853 model: egfngf rtol: 1e-07 err: 8.888388742889219e-10 toc: 1.0359838008880615
method: dop853 model: egfngf rtol: 1e-06 err: 4.66497442327333e-09 toc: 0.960273027420044
method: dop853 model: egfngf rtol: 1e-05 err: 1.2721465764722477e-07 toc: 0.9472723007202148
method: dop853 model: egfngf rtol: 0.0001 err: 1.1252778214596523e-06 toc: 0.9492876529693604
method: dop853 model: egfngf rtol: 0.001 err: 2.291165946985075e-05 toc: 0.9411609172821045
method: dop853 model: egfngf rtol: 0.01 err: 9.271624730080172e-05 toc: 0.8935632705688477
method: dop853 model: egfngf rtol: 0.1 err: 0.0016197589501836288 toc: 0.8565003871917725
method: dop853 model: egfngf_cython rtol: 1e-09 err: 1.25751830637455e-11 toc: 0.2147822380065918
method: dop853 model: egfngf_cython rtol: 1e-08 err: 6.61048970111248e-11 toc: 0.18518447875976562
method: dop853 model: egfngf_cython rtol: 1e-07 err: 8.888388742889219e-10 toc: 0.17031097412109375
method: dop853 model: egfngf_cython rtol: 1e-06 err: 4.66497442327333e-09 toc: 0.14084482192993164
method: dop853 model: egfngf_cython rtol: 1e-05 err: 1.2721465764722477e-07 toc: 0.1377122402191162
method: dop853 model: egfngf_cython rtol: 0.0001 err: 1.1252778214596523e-06 toc: 0.13886094093322754
method: dop853 model: egfngf_cython rtol: 0.001 err: 2.291165946985075e-05 toc: 0.13952231407165527
method: dop853 model: egfngf_cython rtol: 0.01 err: 9.271624730080172e-05 toc: 0.1342453956604004
method: dop853 model: egfngf_cython rtol: 0.1 err: 0.0016197589501836288 toc: 0.1288437843322754
method: RK45 model: egfngf rtol: 1e-09 err: 3.6439984493578474e-11 toc: 1.4232659339904785
method: RK45 model: egfngf rtol: 1e-08 err: 6.912577130909388e-10 toc: 1.2879347801208496
method: RK45 model: egfngf rtol: 1e-07 err: 6.694102315426183e-09 toc: 1.196589469909668
method: RK45 model: egfngf rtol: 1e-06 err: 8.223089846372508e-08 toc: 1.3194856643676758
method: RK45 model: egfngf rtol: 1e-05 err: 2.2819689641437435e-07 toc: 1.329404354095459
method: RK45 model: egfngf rtol: 0.0001 err: 3.917911855702793e-06 toc: 1.3202934265136719
method: RK45 model: egfngf rtol: 0.001 err: 1.639140348158738e-05 toc: 1.3200581073760986
method: RK45 model: egfngf rtol: 0.01 err: 0.00034920415170319453 toc: 1.3547723293304443
method: RK45 model: egfngf rtol: 0.1 err: 0.0036178664214251087 toc: 1.3747754096984863
method: RK45 model: egfngf_cython rtol: 1e-09 err: 3.6439984493578474e-11 toc: 0.5883312225341797
method: RK45 model: egfngf_cython rtol: 1e-08 err: 6.912577130909388e-10 toc: 0.5303034782409668
method: RK45 model: egfngf_cython rtol: 1e-07 err: 6.694102315426183e-09 toc: 0.48837947845458984
method: RK45 model: egfngf_cython rtol: 1e-06 err: 8.223089846372508e-08 toc: 0.536247730255127
method: RK45 model: egfngf_cython rtol: 1e-05 err: 2.2819689641437435e-07 toc: 0.5374157428741455
method: RK45 model: egfngf_cython rtol: 0.0001 err: 3.917911855702793e-06 toc: 0.5388848781585693
method: RK45 model: egfngf_cython rtol: 0.001 err: 1.639140348158738e-05 toc: 0.5385806560516357
method: RK45 model: egfngf_cython rtol: 0.01 err: 0.00034920415170319453 toc: 0.5404636859893799
method: RK45 model: egfngf_cython rtol: 0.1 err: 0.0036178664214251087 toc: 0.5524845123291016
method: RK23 model: egfngf rtol: 1e-09 err: 6.681408073442678e-10 toc: 1.2001097202301025
method: RK23 model: egfngf rtol: 1e-08 err: 4.8487383658842495e-09 toc: 1.1402056217193604
method: RK23 model: egfngf rtol: 1e-07 err: 1.279591483277424e-08 toc: 1.0618393421173096
method: RK23 model: egfngf rtol: 1e-06 err: 1.1492905757525781e-07 toc: 1.0311594009399414
method: RK23 model: egfngf rtol: 1e-05 err: 1.6930524190926614e-06 toc: 1.027500867843628
method: RK23 model: egfngf rtol: 0.0001 err: 1.2193154355724498e-05 toc: 1.0268914699554443
method: RK23 model: egfngf rtol: 0.001 err: 0.0001278426051552712 toc: 1.0209898948669434
method: RK23 model: egfngf rtol: 0.01 err: 0.002285091844560026 toc: 0.9941794872283936
method: RK23 model: egfngf rtol: 0.1 err: 0.0527168081924734 toc: 0.8237001895904541
method: RK23 model: egfngf_cython rtol: 1e-09 err: 6.681408073442678e-10 toc: 0.5616023540496826
method: RK23 model: egfngf_cython rtol: 1e-08 err: 4.8487383658842495e-09 toc: 0.5162637233734131
method: RK23 model: egfngf_cython rtol: 1e-07 err: 1.279591483277424e-08 toc: 0.49440717697143555
method: RK23 model: egfngf_cython rtol: 1e-06 err: 1.1492905757525781e-07 toc: 0.4908134937286377
method: RK23 model: egfngf_cython rtol: 1e-05 err: 1.6930524190926614e-06 toc: 0.48294854164123535
method: RK23 model: egfngf_cython rtol: 0.0001 err: 1.2193154355724498e-05 toc: 0.4840507507324219
method: RK23 model: egfngf_cython rtol: 0.001 err: 0.0001278426051552712 toc: 0.48421287536621094
method: RK23 model: egfngf_cython rtol: 0.01 err: 0.002285091844560026 toc: 0.47213268280029297
method: RK23 model: egfngf_cython rtol: 0.1 err: 0.0527168081924734 toc: 0.3859896659851074
method: Radau model: egfngf rtol: 1e-09 err: 6.652087904512882e-11 toc: 1.3415403366088867
method: Radau model: egfngf rtol: 1e-08 err: 5.170814498948554e-10 toc: 1.1088793277740479
method: Radau model: egfngf rtol: 1e-07 err: 4.258043093917271e-09 toc: 0.8848395347595215
method: Radau model: egfngf rtol: 1e-06 err: 3.060537681449205e-08 toc: 0.5140721797943115
method: Radau model: egfngf rtol: 1e-05 err: 3.409584565088153e-07 toc: 0.41588926315307617
method: Radau model: egfngf rtol: 0.0001 err: 4.311698737486343e-06 toc: 0.28522610664367676
method: Radau model: egfngf rtol: 0.001 err: 2.4120030381309335e-05 toc: 0.13965725898742676
method: Radau model: egfngf rtol: 0.01 err: 0.0004926898438409504 toc: 0.16023492813110352
method: Radau model: egfngf rtol: 0.1 err: 0.0024953513627397478 toc: 0.0989832878112793
method: Radau model: egfngf_cython rtol: 1e-09 err: 6.652087904512882e-11 toc: 0.7576034069061279
method: Radau model: egfngf_cython rtol: 1e-08 err: 5.170814498948554e-10 toc: 0.5407040119171143
method: Radau model: egfngf_cython rtol: 1e-07 err: 4.258043093917271e-09 toc: 0.41968345642089844
method: Radau model: egfngf_cython rtol: 1e-06 err: 3.060537681449205e-08 toc: 0.28287482261657715
method: Radau model: egfngf_cython rtol: 1e-05 err: 3.409584565088153e-07 toc: 0.1735553741455078
method: Radau model: egfngf_cython rtol: 0.0001 err: 4.311698737486343e-06 toc: 0.1346416473388672
method: Radau model: egfngf_cython rtol: 0.001 err: 2.4120030381309335e-05 toc: 0.08444905281066895
method: Radau model: egfngf_cython rtol: 0.01 err: 0.0004926898438409504 toc: 0.05977010726928711
method: Radau model: egfngf_cython rtol: 0.1 err: 0.0024953513627397478 toc: 0.05173158645629883
method: BDF model: egfngf rtol: 1e-09 err: 3.1390972920538235e-09 toc: 0.5705592632293701
method: BDF model: egfngf rtol: 1e-08 err: 1.2343679627520033e-08 toc: 0.5837991237640381
method: BDF model: egfngf rtol: 1e-07 err: 8.442730720465382e-08 toc: 0.4266855716705322
method: BDF model: egfngf rtol: 1e-06 err: 5.070871530430547e-07 toc: 0.327714204788208
method: BDF model: egfngf rtol: 1e-05 err: 3.766178151369483e-06 toc: 0.32501935958862305
method: BDF model: egfngf rtol: 0.0001 err: 3.5510810666213124e-05 toc: 0.3260772228240967
method: BDF model: egfngf rtol: 0.001 err: 0.00017855276159510443 toc: 0.22342371940612793
method: BDF model: egfngf rtol: 0.01 err: 0.0032314630305983884 toc: 0.13854002952575684
method: BDF model: egfngf rtol: 0.1 err: 0.009260776487288725 toc: 0.17409443855285645
method: BDF model: egfngf_cython rtol: 1e-09 err: 3.1390972920538235e-09 toc: 0.3882126808166504
method: BDF model: egfngf_cython rtol: 1e-08 err: 1.2343679627520033e-08 toc: 0.36484789848327637
method: BDF model: egfngf_cython rtol: 1e-07 err: 8.442730720465382e-08 toc: 0.2005314826965332
method: BDF model: egfngf_cython rtol: 1e-06 err: 5.070871530430547e-07 toc: 0.18695449829101562
method: BDF model: egfngf_cython rtol: 1e-05 err: 3.766178151369483e-06 toc: 0.1921083927154541
method: BDF model: egfngf_cython rtol: 0.0001 err: 3.5510810666213124e-05 toc: 0.1263880729675293
method: BDF model: egfngf_cython rtol: 0.001 err: 0.00017855276159510443 toc: 0.12924456596374512
method: BDF model: egfngf_cython rtol: 0.01 err: 0.0032314630305983884 toc: 0.06199479103088379
method: BDF model: egfngf_cython rtol: 0.1 err: 0.009260776487288725 toc: 0.04790949821472168
method: cvode BDF model: egfngf rtol: 1e-09 err: 1.832806059004118e-09 toc: 0.11520576477050781
method: cvode BDF model: egfngf rtol: 1e-08 err: 1.2058996459624419e-08 toc: 0.06452536582946777
method: cvode BDF model: egfngf rtol: 1e-07 err: 1.2075101544420856e-07 toc: 0.05853128433227539
method: cvode BDF model: egfngf rtol: 1e-06 err: 5.660151867535509e-07 toc: 0.04850339889526367
method: cvode BDF model: egfngf rtol: 1e-05 err: 2.1248663150133023e-06 toc: 0.03807353973388672
method: cvode BDF model: egfngf rtol: 0.0001 err: 8.0369563686448e-06 toc: 0.03027200698852539
method: cvode BDF model: egfngf rtol: 0.001 err: 0.0006223608369100838 toc: 0.02919483184814453
method: cvode BDF model: egfngf rtol: 0.01 err: 0.0019098949502052468 toc: 0.01840806007385254
method: cvode BDF model: egfngf rtol: 0.1 err: 1532623.092336226 toc: 0.037409305572509766
method: cvode BDF model: egfngf_cython rtol: 1e-09 err: 1.832806059004118e-09 toc: 0.01679062843322754
method: cvode BDF model: egfngf_cython rtol: 1e-08 err: 1.2058996459624419e-08 toc: 0.013664960861206055
method: cvode BDF model: egfngf_cython rtol: 1e-07 err: 1.2075101544420856e-07 toc: 0.015189647674560547
method: cvode BDF model: egfngf_cython rtol: 1e-06 err: 5.660151867535509e-07 toc: 0.012308835983276367
method: cvode BDF model: egfngf_cython rtol: 1e-05 err: 2.1248663150133023e-06 toc: 0.008781194686889648
method: cvode BDF model: egfngf_cython rtol: 0.0001 err: 8.0369563686448e-06 toc: 0.007406711578369141
method: cvode BDF model: egfngf_cython rtol: 0.001 err: 0.0006223608369100838 toc: 0.004948139190673828
method: cvode BDF model: egfngf_cython rtol: 0.01 err: 0.0019098949502052468 toc: 0.004040956497192383
method: cvode BDF model: egfngf_cython rtol: 0.1 err: 1532623.092336226 toc: 0.006505489349365234
method: cvode ADAMS model: egfngf rtol: 1e-09 err: 1.9099610896470648e-09 toc: 0.24017071723937988
method: cvode ADAMS model: egfngf rtol: 1e-08 err: 7.36613825817282e-09 toc: 0.21523308753967285
method: cvode ADAMS model: egfngf rtol: 1e-07 err: 8.801648077981857e-08 toc: 0.13567042350769043
method: cvode ADAMS model: egfngf rtol: 1e-06 err: 1.1504714326777807e-06 toc: 0.08283209800720215
method: cvode ADAMS model: egfngf rtol: 1e-05 err: 4.671290605377484e-06 toc: 0.07387781143188477
method: cvode ADAMS model: egfngf rtol: 0.0001 err: 1.0608413538041836e-05 toc: 0.058342933654785156
method: cvode ADAMS model: egfngf rtol: 0.001 err: 0.00012178687288246389 toc: 0.052404165267944336
method: cvode ADAMS model: egfngf rtol: 0.01 err: 0.0011556670401549067 toc: 0.04270744323730469
method: cvode ADAMS model: egfngf rtol: 0.1 err: 0.005974432620540139 toc: 0.03546285629272461
method: cvode ADAMS model: egfngf_cython rtol: 1e-09 err: 1.9099610896470648e-09 toc: 0.05116581916809082
method: cvode ADAMS model: egfngf_cython rtol: 1e-08 err: 7.36613825817282e-09 toc: 0.036460161209106445
method: cvode ADAMS model: egfngf_cython rtol: 1e-07 err: 8.801648077981857e-08 toc: 0.029713153839111328
method: cvode ADAMS model: egfngf_cython rtol: 1e-06 err: 1.1504714326777807e-06 toc: 0.01849055290222168
method: cvode ADAMS model: egfngf_cython rtol: 1e-05 err: 4.671290605377484e-06 toc: 0.015970945358276367
method: cvode ADAMS model: egfngf_cython rtol: 0.0001 err: 1.0608413538041836e-05 toc: 0.014380693435668945
method: cvode ADAMS model: egfngf_cython rtol: 0.001 err: 0.00012178687288246389 toc: 0.011634111404418945
method: cvode ADAMS model: egfngf_cython rtol: 0.01 err: 0.0011556670401549067 toc: 0.00951528549194336
method: cvode ADAMS model: egfngf_cython rtol: 0.1 err: 0.005974432620540139 toc: 0.007592439651489258
###Markdown
Plot the performance
###Code
for model in models:
print(gg.ggplot(data[data.model == model.name], gg.aes(x='err', y='time', color='method'))
+ gg.geom_point(size=60.0)
+ gg.geom_line()
+ gg.scale_x_log()
+ gg.scale_y_log()
+ gg.xlim(1e-10, 1e-2)
+ gg.ggtitle('Model ' + model.name)
)
###Output
_____no_output_____ |
Germeval2019-Task1.ipynb | ###Markdown
Germeval 2019 - Task 1 (hierarchical classification)
This code is for Germeval 2019 Task 1. In this task, blurbs of German text that describe a book are provided, and the challenge is to predict classifications of different genres for the books from these blurbs.
In subtask a, only the highest-level classification (8 classes) is used, and in subtask b, the entire hierarchy is used (343 classes total).
Note that this is a multiclass (each book can have >1 class) and multilabel problem (there are 8, 93 and 242 labels on each level of the hierarchy). It's also noteworthy that each book can have different leaves on the last level of the hierarchy, for example a/b/c1, a/b/c2.
The approach uses a combination of logistic regression and Naive Bayes.
Best results on the dev-set (checked with the provided Python script and the gold file blurbs_dev_participants.txt):
* F1-Score Task A: 0.826775214835
* F1-Score Task B: 0.618365180467
Tokenization Parameters:
* spaCy was used as the tokenizer
* unicode accents were stripped
* casing was kept
* no lemmatization
* no stopwords
Vectorization Parameters:
* Only words that appeared in at least 4 documents were used
* Words that appeared in more than 40% of documents were ignored
* Inverse document-frequency reweighting
* Sublinear term frequency scaling
* Smoothing
* N-grams of 1,1 and 1,2 were used for the two submitted entries
Logistic Regression Parameters:
* Liblinear solver with a maximum number of 1000 iterations, automatic multiclass fitting and balanced class weights
* L2 regularization with C=40.0
* No dual formulation
(A sketch of how these parameters map onto scikit-learn objects is included in the Tokenize & Vectorize section below.)
Final competition results:
* Entry 1: 0.82 / 0.62
* Entry 2: 0.82 / 0.61
Link to the competition: https://www.inf.uni-hamburg.de/en/inst/ab/lt/resources/data/germeval-2019-hmc.html
Note: This notebook is inspired by Jeremy Howard's post about a strong baseline system: https://www.kaggle.com/jhoward/nb-svm-strong-linear-baseline
0) Imports and setup
###Code
# Time the entire notebook
import timeit
start_time = timeit.default_timer()
###Output
_____no_output_____
###Markdown
Imports
###Code
import pandas as pd, numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
import re
import sys, sklearn
print(sys.version)
print(sklearn.__version__)
print(pd.__version__)
print(np.__version__)
import spacy
print(spacy.__version__)
!anaconda --version
!ipython --version
!jupyter --version
###Output
anaconda Command line client (version 1.1.0)
4.0.0
4.0.6
###Markdown
Files
###Code
hierarchy_file = 'blurbs/hierarchy.txt'
# Use this for testing (testset with goldfile)
train_file = 'blurbs/blurbs_train.txt'
test_file = 'blurbs/blurbs_dev_participants.txt'
# Use this for the submission (bigger training set, correct testset)
# train_file = 'blurbs/blurbs_train_and_dev.txt'
# test_file = 'blurbs/blurbs_test_nolabel.txt'
# A dictionary of all labels. Taken from utilities.py from the organizers of Germeval 2019
all_labels = {0: [u"Ratgeber", u"Kinderbuch & Jugendbuch", u"Literatur & Unterhaltung", u"Sachbuch", u"Ganzheitliches Bewusstsein", u"Architektur & Garten", u"Glaube & Ethik", u"Künste"],
1: [u"Eltern & Familie", u"Echtes Leben, Realistischer Roman", u"Abenteuer", u"Märchen, Sagen", u"Lyrik, Anthologien, Jahrbücher", u"Frauenunterhaltung", u"Fantasy", u"Kommunikation & Beruf", u"Lebenshilfe & Psychologie", u"Krimi & Thriller", u"Freizeit & Hobby", u"Liebe, Beziehung und Freundschaft", u"Familie", u"Natur, Wissenschaft, Technik", u"Fantasy und Science Fiction", u"Geister- und Gruselgeschichten", u"Schicksalsberichte", u"Romane & Erzählungen", u"Science Fiction", u"Politik & Gesellschaft", u"Ganzheitliche Psychologie", u"Natur, Tiere, Umwelt, Mensch", u"Psychologie", u"Lifestyle", u"Sport", u"Lebensgestaltung", u"Essen & Trinken", u"Gesundheit & Ernährung", u"Kunst, Musik", u"Architektur", u"Biographien & Autobiographien", u"Romance", u"Briefe, Essays, Gespräche", u"Kabarett & Satire", u"Krimis und Thriller", u"Erotik", u"Historische Romane", u"Theologie", u"Beschäftigung, Malen, Rätseln", u"Schulgeschichten", u"Biographien", u"Kunst", u"(Zeit-) Geschichte", u"Ganzheitlich Leben", u"Garten & Landschaftsarchitektur", u"Körper & Seele", u"Energieheilung", u"Abenteuer, Reisen, fremde Kulturen", u"Historische Romane, Zeitgeschichte", u"Klassiker & Lyrik", u"Fotografie", u"Design", u"Beauty & Wellness", u"Kunst & Kultur", u"Mystery", u"Ratgeber Partnerschaft & Sexualität", u"Detektivgeschichten", u"Spiritualität & Religion", u"Sachbuch Philosophie", u"Tiergeschichten", u"Horror", u"Literatur & Unterhaltung Satire", u"Infotainment & erzählendes Sachbuch", u"Fitness & Sport", u"Übernatürliches", u"Psychologie & Spiritualität", u"Handwerk Farbe", u"Weisheiten der Welt", u"Naturheilweisen", u"Lustige Geschichten, Witze", u"Wissen & Nachschlagewerke", u"Sterben, Tod und Trauer", u"Romantasy", u"Wirtschaft & Recht", u"Comic & Cartoon", u"Schullektüre", u"Glaube und Grenzerfahrungen", u"Mode & Lifestyle", u"Mondkräfte", u"Musik", u"Geschichte, Politik", u"Gemeindearbeit", u"Wohnen & Innenarchitektur", u"Esoterische Romane", u"Schicksalsdeutung", u"Religionsunterricht", u"Religiöse Literatur", u"Geld & Investment", u"Sportgeschichten", u"Religion, Glaube, Ethik, Philosophie", u"Recht & Steuern", u"Handwerk Holz", u"Regionalia"],
2: [u"Vornamen", u"Heroische Fantasy", u"Joballtag & Karriere", u"Psychothriller", u"Große Gefühle", u"Feiern & Feste", u"Medizin & Forensik", u"Phantastik", u"Ökologie / Umweltschutz", u"Aktuelle Debatten", u"Ganzheitliche Psychologie Lebenshilfe", u"Nordamerikanische Literatur", u"Babys & Kleinkinder", u"Schwangerschaft & Geburt", u"Tod & Trauer", u"Nordische Krimis", u"Gesunde Ernährung", u"Junge Literatur", u"Kreatives", u"Einfamilienhausbau", u"Künstler, Dichter, Denker", u"Themenkochbuch", u"Abenteuer & Action", u"Science Thriller", u"Justizthriller", u"Besser leben", u"Starke Frauen", u"Gesellschaftskritik", u"Psychologie Partnerschaft & Sexualität", u"Krankheit", u"Abenteuer-Fantasy", u"Kirchen- und Theologiegeschichte", u"Biblische Theologie AT", u"Biblische Theologie NT", u"Politik & Gesellschaft Andere Länder & Kulturen", u"Hard Science Fiction", u"All Age Fantasy", u"Trauma", u"Krisen & Ängste", u"Space Opera", u"19./20. Jahrhundert", u"Agenten-/Spionage-Thriller", u"Französische Literatur", u"Selbstcoaching", u"Kopftraining", u"Erzählungen & Kurzgeschichten", u"Gartengestaltung", u"Weltpolitik & Globalisierung", u"Internet", u"Geschenkbuch & Briefkarten", u"Reiseberichte", u"Literatur aus Spanien und Lateinamerika", u"Romantische Komödien", u"Märchen, Legenden und Sagen", u"Humorvolle Unterhaltung", u"Natur, Wissenschaft, Technik Tiere", u"Familiensaga", u"Wellness", u"Romanbiographien", u"Patientenratgeber", u"Politische Theorien", u"Erotik & Sex", u"Rätsel & Spiele", u"Politiker", u"Future-History", u"Gerichtsmedizin / Pathologie", u"Spirituelles Leben", u"Nationalsozialismus", u"Musterbriefe & Rhetorik", u"Einzelthemen der Theologie", u"Dystopie", u"Lyrik", u"Literatur aus Russland und Osteuropa", u"Regionalkrimis", u"Starköche", u"Yoga, Pilates & Stretching", u"Pflanzen & Garten", u"Jenseits & Wiedergeburt", u"Fitnesstraining", u"Problemzonen", u"Italienische Literatur", u"Christlicher Glauben", u"Handwerk Farbe Praxis", u"Handwerk Farbe Grundlagenwissen", u"Östliche Weisheit", u"Ernährung", u"Magen & Darm", u"Nahrungsmittelintoleranz", u"Deutschsprachige Literatur", u"Mittelalter", u"Historische Krimis", u"Kindererziehung", u"Körpertherapien", u"High Fantasy", u"Science Fiction Sachbuch", u"Pubertät", u"Länderküche", u"Styling", u"Schönheitspflege", u"Getränke", u"Lady-Thriller", u"Abschied, Trauer, Neubeginn", u"Laufen & Nordic Walking", u"Neue Wirtschaftsmodelle", u"Utopie", u"Afrikanische Literatur", u"Science Fiction Science Fantasy", u"Englische Literatur", u"Steampunk", u"Alternativwelten", u"Geschichte nach '45", u"Spiritualität & Religion Weltreligionen", u"Theologie Religionspädagogik", u"Raucherentwöhnung", u"Funny Fantasy", u"Skandinavische Literatur", u"Film & Musik", u"Westliche Wege", u"Entspannung & Meditation", u"Kindergarten & Pädagogik", u"Schule & Lernen", u"Spiele & Beschäftigung", u"Psychologie Lebenshilfe", u"Persönlichkeitsentwicklung", u"Mystery-Thriller", u"Homöopathie & Bachblüten", u"Liebe & Beziehung", u"Literaturgeschichte / -kritik", u"Ernährung & Kochen", u"Wandern & Bergsteigen", u"Sucht & Abhängigkeit", u"Politthriller", u"Sterbebegleitung & Hospizarbeit", u"50 plus", u"Job & Karriere", u"Konfirmation", u"Gemeindearbeit Religionspädagogik", u"Kasualien und Sakramente", u"Schauspieler, Regisseure", u"Praktische Anleitungen", u"Rücken & Gelenke", u"Unternehmen & Manager", u"Landschaftsgestaltung", u"Krimikomödien", u"Musiker, Sänger", u"Freizeit & Hobby Tiere", u"Gebete und Andachten", u"Glauben mit Kindern", u"Dark Fantasy", u"Lesen & 
Kochen", u"Kunst & Kunstgeschichte", u"Flirt & Partnersuche", u"Partnerschaft & Sex", u"Kommunikation", u"Wissen der Naturvölker", u"Urban Fantasy", u"Andere Länder", u"21. Jahrhundert", u"Engel & Schutzgeister", u"Chakren & Aura", u"Science Fiction Satire", u"Bauherrenratgeber", u"Bautechnik", u"Systematische Theologie", u"Praktische Theologie", u"Kosmologie", u"Literatur aus Fernost", u"Bibeln & Katechismus", u"Humoristische Nachschlagewerke", u"Wohnen", u"Länder, Städte & Regionen", u"Spirituelle Entwicklung", u"Indische Literatur", u"Cyberpunk", u"Wissenschaftler", u"Dying Earth", u"Monographien", u"Gesang- und Liederbücher", u"Innenarchitektur", u"Baumaterialien", u"Antike und neulateinische Literatur", u"Gemeindearbeit mit Kindern & Jugendlichen", u"Wissenschaftsthriller", u"Ökothriller", u"Fantasy Science Fantasy", u"Psychotherapie", u"Farbratgeber", u"Hausmittel", u"Schicksalsberichte Andere Länder & Kulturen", u"Design / Lifestyle", u"Diakonie und Seelsorge", u"Gemeindearbeit Sachbuch", u"Gottesdienst und Predigt", u"Sprache & Sprechen", u"(Zeit-) Geschichte Andere Länder & Kulturen", u"Arbeitstechniken", u"Mantras & Mudras", u"NS-Zeit & Nachkriegszeit", u"Kinderschicksal", u"Altbausanierung / Denkmalpflege", u"Neuere Geschichte", u"Umgangsformen", u"Geschichte und Theorie", u"Familie & Religion", u"Niederländische Literatur", u"Handwerk Farbe Gestaltung", u"Historische Fantasy", u"Alte Geschichte", u"Fantasy-/SF-Thriller", u"Bewerbung", u"Wirtschaftsthriller", u"Bibel in gerechter Sprache", u"Fahrzeuge / Technik", u"Handwerk Holz Gestaltung", u"Handwerk Holz Grundlagenwissen", u"Anthologien", u"Handwerk Holz Praxis", u"Bibeln & Bibelarbeit", u"Theologie Weltreligionen", u"Dialog der Traditionen", u"Magie & Hexerei", u"Tierkrimis", u"Medizinthriller", u"Literatur des Nahen Ostens", u"Kirchenthriller", u"Spielewelten", u"Astrologie & Sternzeichen", u"Stadtplanung", u"Feministische Theologie", u"Entwurfs- und Detailplanung", u"Street Art", u"Trennung", u"Philosophie", u"Tarot", u"Systemische Therapie & Familienaufstellung", u"Bauaufgaben", u"Griechische Literatur", u"Gartendesigner", u"Urgeschichte", u"Reden & Glückwünsche", u"Antiquitäten", u"Theater / Ballett"]}
###Output
_____no_output_____
###Markdown
Constants
###Code
TEXT_COLUMN = 'body'
LEMMATIZE = False
###Output
_____no_output_____
###Markdown
Set seed for reproducible results
###Code
import random
seed = 23 # tried 23, 5, 42, 82
np.random.seed(seed)
random.seed(seed)
###Output
_____no_output_____
###Markdown
Tokenizer
###Code
nlp = spacy.load('de')
def tokenize_spacy(corpus, lemma=LEMMATIZE):
doc = nlp(corpus)
if lemma:
return list(str(x.lemma_) for x in doc) # lemma_ to get string instead of hash
else:
return list(str(x) for x in doc)
###Output
_____no_output_____
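###Markdown
A quick, purely illustrative sanity check of the tokenizer defined above (the sentence is a made-up example, not part of the dataset):
###Code
tokenize_spacy('Das ist ein kurzer Beispielsatz über Bücher.')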
###Markdown
Helpers
Data extraction
###Code
# Turn a provided XML-file into a string with an added root element
def data_from_xml(xml_file):
# Read in the file
with open(xml_file, 'r') as file :
data = file.read()
# Replace "&" with "und" to avoid parsing problems
data = data.replace("&", "und")
# Add a root node
data = '<root>\n' + data + '</root>\n'
return data
# Get the root element from a data-string
def root_from_data(data, encoding='utf-8'):
import xml.etree.ElementTree as ET
xml_parser = ET.XMLParser(encoding=encoding)
xml_root = ET.fromstring(data)
return xml_root
###Output
_____no_output_____
###Markdown
Labels
###Code
# Turn a list of labels with all sorts of special characters into one that is usable for data frames
# Examples:
# 'Kinderbuch & Jugendbuch' -> 'kinderbuch_jugendbuch'
# '(Zeit-) Geschichte' -> 'zeit geschichte'
def labels_to_ids(labels):
ids = []
for label in labels:
label = label.replace(' & ', '_')
label = label.replace(' und ', '_')
label = label.lower()
# Stuff for depth-level 2 and 3
label = label.replace(" / ", "_")
label = label.replace("/", "_")
label = label.replace(", ", "_")
label = label.replace("-", "")
label = label.replace("(", "")
label = label.replace(")", "")
label = label.replace(".", "")
label = label.replace("'", "")
ids.append(label)
return ids
# Get the labels from the previous level
# Examples:
# u"Romane & Erzählungen" -> ['Literatur & Unterhaltung']
# u"Joballtag & Karriere" -> ['Kommunikation & Beruf', 'Ratgeber']
# TODO: refactor
def find_previous_labels(label):
with open(hierarchy_file, 'r') as file :
data = file.read().split('\n')
extra_labels = []
level_two_label = ''
for row in data:
# Note: this assumes, there's always two items per row
# Also assumes that every highest level item is only found once in the file
items = re.split(r'\t+', row.rstrip('\t')) # split on tab
if(( len(items) == 2 ) and items[1] == label):
extra_labels.append(items[0])
level_two_label = items[0]
for row in data:
# Note: this assumes, there's always two items per row
# Also assumes that every highest level item is only found once in the file
items = re.split(r'\t+', row.rstrip('\t')) # split on tab
if(( len(items) == 2 ) and items[1] == level_two_label):
extra_labels.append(items[0])
return extra_labels
# Get the labels from the next level
# Example: u"Ratgeber"
# ['Essen & Trinken', 'Gesundheit & Ernährung', 'Lebenshilfe & Psychologie', 'Eltern & Familie',
# 'Ratgeber Partnerschaft & Sexualität', 'Beauty & Wellness', 'Fitness & Sport', 'Kommunikation & Beruf',
# 'Geld & Investment', 'Recht & Steuern', 'Freizeit & Hobby', 'Wissen & Nachschlagewerke']
def find_next_level(label):
extra_labels = []
with open(hierarchy_file, 'r') as file :
data = file.read().split('\n')
for row in data:
# Note: this assumes, there's always two items per row
# Also assumes that every highest level item is only found once in the file
items = re.split(r'\t+', row.rstrip('\t')) # split on tab
if(( len(items) == 2 ) and items[0] == label):
extra_labels.append(items[1])
return extra_labels
###Output
_____no_output_____
###Markdown
Data frame construction
###Code
# Construct an empty data frame from a provided list of santized label_ids
def dataframe_from_labels(label_ids=[]):
base_columns = ['isbn', 'title', 'body', 'copyright', 'authors', 'published']
# The testfile has no url and no labels (just the base columns)
if(label_ids==[]):
columns = base_columns
else:
base_columns.append('url')
columns = base_columns + label_ids
return pd.DataFrame(columns = columns)
# Write a 1 for every label that matches the ones passed and a 0 for every other label
def entries_from_labels(matching_labels, label_ids):
entries = [0 for x in label_ids]
for item in labels_to_ids(matching_labels):
entries[label_ids.index(item)] = 1
return entries
# Build a dataframe from a given root-element. Works for both training and test data.
# d is the depth of the labels to consider: 0 = level 1, 1=level 2, 2=level 3
def dataframe_from_root(root, label_ids=[], d=0):
if(label_ids==[]):
is_test = True
else:
is_test = False
# Empty dataframe for the given label_ids
df = dataframe_from_labels(label_ids)
for node in root:
matching_labels = []
isbn = node.find("isbn").text
title = node.find("title").text
body = node.find("body").text
copyright = node.find("copyright").text
authors = node.find("authors").text
published = node.find("published").text
# Training-set
if(is_test == False):
url = node.find("url").text
categories = node.find("categories").findall("category")
for c in categories:
topics = c.findall("topic")
# Use all top-level matching_labels
for t in topics:
if t.attrib.get("d") == str(d):
matching_labels.append(t.text)
df = df.append(pd.Series([isbn, title, body, copyright, authors, published, url]+entries_from_labels(matching_labels, label_ids),
index = df.columns), ignore_index = True)
# Test-set
else:
df = df.append(pd.Series([isbn, title, body, copyright, authors, published],
index = df.columns), ignore_index = True)
return df
# Create dataframes for training and test from given labels and a level depth
# depth=0 -> level 1 only (suitable for subtask a)
def get_train_test(label_ids, depth=0):
train_df = dataframe_from_root(train_root, label_ids, d=depth)
test_df = dataframe_from_root(test_root)
# Add 0s for all labels for the test_df
test_df = test_df.reindex(columns=[*test_df.columns.tolist(), *label_ids], fill_value=0)
return train_df, test_df
###Output
_____no_output_____
###Markdown
Predictions to expected answer-format
###Code
def write_answerfile(answers_taskA, answers_taskB, filename='ORGNAME__MODEL.txt'):
# Add the required subtask-headers
final_answers = 'subtask_a\n' + answers_taskA + '\nsubtask_b\n' + answers_taskB
out = open(filename, 'w')
out.write(final_answers)
out.close()
###Output
_____no_output_____
###Markdown
1) Data loading and labels
###Code
# XML->data-string
train_data = data_from_xml(train_file)
test_data = data_from_xml(test_file)
# Root element from data-string
train_root = root_from_data(train_data)
test_root = root_from_data(test_data)
# Level 1 labels
labels = all_labels[0]
label_ids = labels_to_ids(labels)
# labels, label_ids
len(labels) # 8
# Level 2 labels
labels_level2 = all_labels[1]
label_ids_level2 = labels_to_ids(labels_level2)
# labels_level2, label_ids_level2
len(labels_level2) # 93
# Level 3 labels
labels_level3 = all_labels[2]
label_ids_level3 = labels_to_ids(labels_level3)
# labels_level3, label_ids_level3
len(labels_level3) # 242
###Output
_____no_output_____
###Markdown
2) Subtask A (level 1) Load and sanitize the training and test data
###Code
train_df, test_df = get_train_test(label_ids, depth=0)
train_df.head()
train_df[200:250]
train = train_df.copy()
test = test_df.copy()
len(train), len(test)
###Output
_____no_output_____
###Markdown
Fix empty text in the train and test set
###Code
np.where(pd.isnull(test['body']))
np.where(pd.isnull(train['body']))
train[623:624]
# Fill empty body with just the title (better results than author+title)
train.body = np.where(train.body.isnull(), train.title , train.body)
#Fill empty body with the title+authors
#train.body = np.where(train.body.isnull(), train.title + ' ' + train.authors, train.body)
train[623:624]
###Output
_____no_output_____
###Markdown
Tokenize & Vectorize
###Code
# https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html
# analyzer='char': character level; 'char_wb': only in words; 'word' (default): word based
# lowercase defaults to True
# norm = 'l1'/'l2' (defaults to l2)
# smooth_idf: Add "fake document" with all 1s (don't use for char?!)
# use_idf: inverse-document-frequency reweighting
# sublinear_tf: Apply sublinear tf scaling, i.e. replace tf with 1 + log(tf).
# binary (defaults to False)
# Note: Set use_idf to False and norm to None to get 0/1 outputs.
# TF-IDF (Term Frequency - Inverse Document Frequency) Vectorizer
# Normalize term counts by taking into account how often they appear in a document,
# how long the document is and how common/rare a term is
vec = TfidfVectorizer(analyzer='word', ngram_range=(1,1), tokenizer=tokenize_spacy,
min_df=4, max_df=0.4, strip_accents='unicode', use_idf=True,
smooth_idf=True, sublinear_tf=True, lowercase=False, binary=False)
# Best: ngram 1,1; 4/0.4; T/T/T; lowercase=False; spacy-word tokenization
# C=40.0; dual=False; class_weight='balanced'
# For multi-label: limit=0.04
trn_term_doc = vec.fit_transform(train[TEXT_COLUMN])
test_term_doc = vec.transform(test[TEXT_COLUMN])
trn_term_doc, test_term_doc
# SAVE
scipy.sparse.save_npz('trn_term_doc_level1.npz', trn_term_doc)
scipy.sparse.save_npz('test_term_doc_level1.npz', test_term_doc)
# LOAD - SKIP TO HERE
trn_term_doc = scipy.sparse.load_npz('trn_term_doc_level1.npz')
test_term_doc = scipy.sparse.load_npz('test_term_doc_level1.npz')
###Output
_____no_output_____
###Markdown
Naive Bayes Logistic Regression Build the model
###Code
def pr(y_i, y, trn_term_doc):
p = trn_term_doc[y==y_i].sum(0)
return (p+1) / ((y==y_i).sum()+1)
def get_model(label_values, trn_term_doc):
y = label_values.astype('int') # convert objects to ints
r = np.log(pr(1,y, trn_term_doc) / pr(0,y, trn_term_doc))
model = LogisticRegression(C=40.0, dual=False, solver='liblinear', multi_class='auto', max_iter=1000,
penalty='l2', class_weight='balanced', verbose=1)
x_nb = trn_term_doc.multiply(r)
return model.fit(x_nb, y), r # x_nb=training; y=targets
###Output
_____no_output_____
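###Markdown
Note on the model above: `r` is the element-wise Naive Bayes log-count ratio,$$r_j = \log\frac{\left(1 + \sum_{i:\,y_i=1} x_{ij}\right) / \left(1 + \left|\{i : y_i = 1\}\right|\right)}{\left(1 + \sum_{i:\,y_i=0} x_{ij}\right) / \left(1 + \left|\{i : y_i = 0\}\right|\right)},$$and the logistic regression is then fit on the re-weighted features $x_{ij} \cdot r_j$, which is what `trn_term_doc.multiply(r)` computes.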
###Markdown
Predictions from the model
###Code
preds = np.zeros((len(test), len(label_ids)))
for index, label_id in enumerate(label_ids):
label_values = train[label_id].values
print('Fitting: ', label_id)
m,r = get_model(label_values, trn_term_doc)
    preds[:,index] = m.predict_proba(test_term_doc.multiply(r))[:,1] # column 1 of predict_proba = probability of the positive class, for every test row
preds[0,:]
###Output
_____no_output_____
###Markdown
Write Submission File Get answers
###Code
# Given preds and corresponding labels, produce an answer-file of format isbn <TAB> label
# The best default value for the limit has been found empirically (0.08)
# The default max_labels is 2, that is find up to two different labels
def answers_from_preds_multi(preds, labels, limit=0.08, max_labels=2):
label_column_strings = labels
test_isbn_df = test['isbn']
isbn_list = list(test_isbn_df)
answers_list = []
for index, item in enumerate(preds):
#max_index = np.argmax(item)
        # Sort probabilities from highest to lowest; since there were no examples with more than
        # three categories in the provided data, we stop there
sorted_indexes = (-item).argsort()
index_first = sorted_indexes[0]
index_second = sorted_indexes[1]
index_third = sorted_indexes[2]
max_index = index_first
# TODO: refactor
# Multi-label
if(max_labels > 1 and (item[index_first] - item[index_second]) < limit):
label_first = label_column_strings[index_first]
label_first = [label_first] + find_previous_labels(label_first)
label_second = label_column_strings[index_second]
label_second = [label_second] + find_previous_labels(label_second)
ls = label_first + label_second
if(max_labels > 2 and (item[index_second] - item[index_third]) < 0.005):
label_third = label_column_strings[index_third]
ls.append(label_third)
else:
label_first = label_column_strings[max_index]
label_first = [label_first] + find_previous_labels(label_first)
ls = label_first
isbn = isbn_list[index]
answers_list += [[isbn, ls]]
    return answers_list # a list of lists to keep the order; in Python 3.7+ a dict could be used, since dicts preserve insertion order
answers_list = answers_from_preds_multi(preds, labels)
# TODO: refactor
def answers_list_to_file(answers_list):
answers = ''
for item in answers_list:
isbn = item[0]
labels_level1 = item[1]
if(len(item) == 3):
labels_level2 = item[2]
answers += isbn + '\t' + '\t'.join(labels_level1) + '\t' + '\t'.join(labels_level2) + '\n'
elif(len(item) == 4):
labels_level2 = item[2]
labels_level3_nested = item[3] # can be a list of lists or a list
# flatten if list of lists
if(isinstance(labels_level3_nested[0], list)):
labels_level3 = [item for sublist in labels_level3_nested for item in sublist] # flatten the list
answers += isbn + '\t' + '\t'.join(labels_level1) + '\t' + '\t'.join(labels_level2) + '\t' + '\t'.join(labels_level3) + '\n'
else:
answers += isbn + '\t' + '\t'.join(labels_level1) + '\n'
return answers[:-1] # Remove trailing \n
answers_taskA = answers_list_to_file(answers_list)
answers_taskB = answers_list_to_file(answers_list) # dummy that just takes subtask a results for subtask b
write_answerfile(answers_taskA, answers_taskB)
elapsed_first = timeit.default_timer() - start_time
###Output
_____no_output_____
###Markdown
Subtask B (level 2) Level 2 Setup
###Code
train_df_level2, test_df_level2 = get_train_test(label_ids_level2, depth=1)
train_df_level2.head()
train2 = train_df_level2.copy()
test2 = test_df_level2.copy()
len(train2), len(test2)
np.where(pd.isnull(test2['body']))
np.where(pd.isnull(train2['body']))
# Fill empty body with just the title (better results than author+title)
train2.body = np.where(train2.body.isnull(), train2.title , train2.body)
trn_term_doc_level2 = vec.fit_transform(train2[TEXT_COLUMN])
test_term_doc_level2 = vec.transform(test2[TEXT_COLUMN])
trn_term_doc_level2, test_term_doc_level2
# SAVE
scipy.sparse.save_npz('trn_term_doc_level2.npz', trn_term_doc_level2)
scipy.sparse.save_npz('test_term_doc_level2.npz', test_term_doc_level2)
# LOAD - SKIP TO HERE
trn_term_doc_level2 = scipy.sparse.load_npz('trn_term_doc_level2.npz')
test_term_doc_level2 = scipy.sparse.load_npz('test_term_doc_level2.npz')
preds_level2 = np.zeros((len(test), len(label_ids_level2)))
for index, label_id in enumerate(label_ids_level2):
label_values = train2[label_id].values
print('Fitting: ', label_id)
m,r = get_model(label_values, trn_term_doc_level2)
preds_level2[:,index] = m.predict_proba(test_term_doc_level2.multiply(r))[:,1] # why all rows, first column?!
len(preds_level2), len(preds_level2[0,:]) # 2079 elements, 93 classes
###Output
_____no_output_____
###Markdown
Level 2 predictions
###Code
# Takes predictions and corresponding labels (of same length and order)
# and a target list and returns a list of lists, each item has the format [label, probability]
# The labels have to be in a specified target_list
def get_nextlevel_preds(preds, labels, target_list):
if(len(preds) != len(labels)):
raise Exception('The length of the predictions and the corresponding labels should be the same!')
new_list = []
for index, item in enumerate(labels):
if item in target_list:
new_list.append([labels[index], preds[index]])
return new_list
# Takes a list of lists, each item has the format [label, probability]
# Returns a (label, probability) tuple for the label with the highest probability,
# or an empty tuple if no probability exceeds the cutoff
def get_max_from_list(label_probs, cutoff=0.0):
    max_prob = 0
    max_label = ''
    for item in label_probs:
        label = item[0]
        probability = item[1]
        if probability > max_prob:
            max_prob = probability
            max_label = label
    if(max_prob > cutoff):
        return max_label, max_prob
    else:
        return ()
# Takes a list of lists, each item has the format [label, probability]
# Returns a list of all labels whose probability exceeds the cutoff
def get_max_from_list_multi(label_probs, cutoff=1.0):
    items = []
    for item in label_probs:
        label = item[0]
        probability = item[1]
        if probability > cutoff:
            items.append(label)
    return items
# Iterate over all level 1 classifications and get the level 2 label with
# the highest probability, but only if it is in the hierarchy that corresponds to the label from level 1
def new_answers(answers_list, cutoff=0.0):
    new_answers_list = []
for index, item in enumerate(answers_list):
new_label_strings = []
isbn = item[0]
labels_level1 = item[1] # can contain one or two labels
labels_next = find_next_level(labels_level1[0])
new_labels = get_nextlevel_preds(preds_level2[index],labels_level2,labels_next)
# Get the label with the highest probability
# Format: [label, probability]
max_label = get_max_from_list(new_labels, cutoff=cutoff)
if(max_label != ()):
new_label_strings.append(max_label[0])
# Add extra level 2 labels if level 1 had two labels
# TODO: refactor
if(len(labels_level1) == 2):
labels_next2 = find_next_level(labels_level1[1])
new_labels2 = get_nextlevel_preds(preds_level2[index],labels_level2,labels_next2)
max_label2 = get_max_from_list(new_labels2, cutoff=cutoff)
if(max_label2 != ()):
new_label_strings.append(max_label2[0])
new_entry = [isbn, labels_level1, new_label_strings]
        new_answers_list.append(new_entry)
    return new_answers_list
###Output
_____no_output_____
###Markdown
Write output file
###Code
answers_list2 = new_answers(answers_list, cutoff=0.09)
answers_taskB = answers_list_to_file(answers_list2)
write_answerfile(answers_taskA, answers_taskB)
elapsed_second = timeit.default_timer() - start_time
###Output
_____no_output_____
###Markdown
Subtask B (level 3) Level 3 Setup
###Code
# Depth level 3
train_df_level3, test_df_level3 = get_train_test(label_ids_level3, depth=2)
train_df_level3.head()
train3 = train_df_level3.copy()
test3 = test_df_level3.copy()
len(train3), len(test3)
np.where(pd.isnull(test3['body']))
np.where(pd.isnull(train3['body']))
# Fill empty body with just the title (better results than author+title)
train3.body = np.where(train3.body.isnull(), train3.title , train3.body)
trn_term_doc_level3 = vec.fit_transform(train3[TEXT_COLUMN])
test_term_doc_level3 = vec.transform(test3[TEXT_COLUMN])
trn_term_doc_level3, test_term_doc_level3
# SAVE
scipy.sparse.save_npz('trn_term_doc_level3.npz', trn_term_doc_level3)
scipy.sparse.save_npz('test_term_doc_level3.npz', test_term_doc_level3)
# LOAD - SKIP TO HERE
trn_term_doc_level3 = scipy.sparse.load_npz('trn_term_doc_level3.npz')
test_term_doc_level3 = scipy.sparse.load_npz('test_term_doc_level3.npz')
preds_level3 = np.zeros((len(test), len(label_ids_level3)))
for index, label_id in enumerate(label_ids_level3):
label_values = train3[label_id].values
print('Fitting: ', label_id)
m,r = get_model(label_values, trn_term_doc_level3)
preds_level3[:,index] = m.predict_proba(test_term_doc_level3.multiply(r))[:,1] # why all rows, first column?!
len(preds_level3), len(preds_level3[0,:]) # 2079 elements, 242 classes
###Output
_____no_output_____
###Markdown
Level 3 Predictions
###Code
# NOTE: level 3 allows multiple leaf labels per entry
def new_answers2(answers_list2, cutoff=0.7, multi_label_cutoff=1.0):
    new_answers_list = []
for index, item in enumerate(answers_list2):
new_label_strings = []
isbn = item[0]
labels_level1 = item[1] # can contain one or two labels
labels_level2 = item[2] # can contain one or two labels
# Find next labels if 2nd level is not empty
if(labels_level2 != []):
labels_next = find_next_level(labels_level2[0])
new_labels = get_nextlevel_preds(preds_level3[index],labels_level3,labels_next)
max_labels = get_max_from_list_multi(new_labels, cutoff=multi_label_cutoff)
new_label_strings.append(max_labels)
# For the cases where there are two level 2 labels
if(len(labels_level2) > 1):
labels_next2 = find_next_level(labels_level2[1])
new_labels2 = get_nextlevel_preds(preds_level3[index],labels_level3,labels_next2)
max_labels2 = get_max_from_list(new_labels2, cutoff=cutoff)
if(max_labels2 != ()):
new_label_strings.append([max_labels2[0]])
new_entry = [isbn, labels_level1, labels_level2, new_label_strings]
else:
new_entry = [isbn, labels_level1, labels_level2]
        new_answers_list.append(new_entry)
    return new_answers_list
###Output
_____no_output_____
###Markdown
Write Final Output File
###Code
answers_list3 = new_answers2(answers_list2, cutoff=0.7, multi_label_cutoff=0.15)
answers_taskB = answers_list_to_file(answers_list3)
write_answerfile(answers_taskA, answers_taskB)
###Output
_____no_output_____
###Markdown
Runtime
###Code
# Print the total time the notebook ran
elapsed_final = timeit.default_timer() - start_time
mins1, secs1 = divmod(elapsed_first, 60)
hours1, mins1 = divmod(mins1, 60)
print("Running time level 1: %d:%d:%d.\n" % (hours1, mins1, secs1))
mins2, secs2 = divmod(elapsed_second, 60)
hours2, mins2 = divmod(mins2, 60)
print("Running time level 2: %d:%d:%d.\n" % (hours2, mins2, secs2))
mins3, secs3 = divmod(elapsed_final, 60)
hours3, mins3 = divmod(mins3, 60)
print("Total running time: %d:%d:%d.\n" % (hours3, mins3, secs3))
###Output
Running time level 1: 0:8:24.
Running time level 2: 0:21:58.
Total running time: 0:46:55.
###Markdown
---- Model Search ---- Imports and data loading The provided training-data from phase one of the competition is split 80/20 into train and test.
###Code
# Imports
import scipy as scipy
# Data-loading
X_tmp = train['body'].copy()
y_tmp = train[label_ids].copy()
len(X_tmp), len(y_tmp)
# Setup
test_percentage=0.2
###Output
_____no_output_____
###Markdown
Vectorize
###Code
vec = TfidfVectorizer(analyzer='word', ngram_range=(1,1), tokenizer=tokenize_spacy,
min_df=4, max_df=0.4, strip_accents='unicode', use_idf=True,
smooth_idf=True, sublinear_tf=True, lowercase=False, binary=False)
X_vec = vec.fit_transform(X_tmp)
X_vec[0]
X_train, X_test = train_test_split(X_vec, test_size=test_percentage, shuffle=False)
X_train, X_test
y_train_tmp, y_test_tmp = train_test_split(y_tmp, test_size=test_percentage, shuffle=False)
len(y_train_tmp), len(y_test_tmp)
# .values replaces the deprecated .as_matrix(); the label matrix is converted to a sparse COO matrix
y_train = scipy.sparse.coo_matrix(y_train_tmp.values, dtype=int)
y_train
y_test = scipy.sparse.coo_matrix(y_test_tmp.values, dtype=int)
y_test
feature_names = [('text', 'TEXT')]
feature_names
label_names = list(zip(label_ids, [['0', '1'] for x in label_ids]))
label_names
###Output
_____no_output_____
###Markdown
Test of different models
###Code
import sklearn.metrics as metrics
from sklearn.ensemble import VotingClassifier
def prediction_from_model(model, X_train, y_train, y_test):
classifier = OneVsRestClassifier(model)
classifier.fit(X_train, y_train)
prediction = classifier.predict(X_test)
return metrics.f1_score(y_test, prediction, average='micro')
###Output
_____no_output_____
###Markdown
Decision Trees
###Code
from sklearn.tree import DecisionTreeClassifier
###Output
_____no_output_____
###Markdown
Vanilla
###Code
# criterion='gini' (alternative: 'entropy'), splitter='best' (alternative: 'random'),
# max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0,
# class_weight=None, max_features=None (int, float, 'auto', 'sqrt', 'log2')
model_dt_vanilla = DecisionTreeClassifier(random_state=seed)
prediction_from_model(model_dt_vanilla, X_train, y_train, y_test) # 0.60491432266408007
###Output
_____no_output_____
###Markdown
Optimized
###Code
model_dt = DecisionTreeClassifier(random_state=seed, splitter='random', min_samples_split=15)
prediction_from_model(model_dt, X_train, y_train, y_test) # 0.61245110821382009
###Output
_____no_output_____
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestClassifier
###Output
_____no_output_____
###Markdown
Vanilla
###Code
model_rf_vanilla = RandomForestClassifier(random_state=seed)
prediction_from_model(model_rf_vanilla, X_train, y_train, y_test) # 0.61246504194966034
###Output
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/ensemble/forest.py:248: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
Optimized
###Code
model_rf = RandomForestClassifier(random_state=seed, n_estimators=200, min_samples_split=5,
bootstrap=False, class_weight='balanced_subsample', n_jobs=-1)
prediction_from_model(model_rf, X_train, y_train, y_test) # 0.6667
###Output
_____no_output_____
###Markdown
K-Nearest Neighbors
###Code
from sklearn.neighbors import KNeighborsClassifier
# https://scikit-learn.org/stable/modules/neighbors.html#classification
###Output
_____no_output_____
###Markdown
Vanilla
###Code
# n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2
# metric='minkowski', metric_params=None, n_jobs=None, **kwargs
model_knn_vanilla = KNeighborsClassifier()
prediction_from_model(model_knn_vanilla, X_train, y_train, y_test) # 0.71636228102869925
###Output
_____no_output_____
###Markdown
Optimized
###Code
# p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2
model_knn = KNeighborsClassifier(weights='distance', n_neighbors=9, n_jobs=-1) # euclidean distance
prediction_from_model(model_knn, X_train, y_train, y_test) # 0.73769585253456216
###Output
_____no_output_____
###Markdown
Logistic Regression Vanilla
###Code
# random_state=None: If None, the random number generator is the RandomState instance used by np.random
model_lr_vanilla = LogisticRegression(random_state=seed)
prediction_from_model(model_lr_vanilla, X_train, y_train, y_test) # 0.69192876089427802
###Output
//anaconda/envs/python3/lib/python3.5/site-packages/sklearn/linear_model/logistic.py:432: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.
FutureWarning)
###Markdown
Optimized
###Code
model_lr = LogisticRegression(C=40.0, dual=False, solver='liblinear', multi_class='auto', max_iter=1000,
penalty='l2', class_weight='balanced', verbose=1)
prediction_from_model(model_lr, X_train, y_train, y_test) # 0.78828099708643573
###Output
[LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear]
###Markdown
Multinomial Naive Bayes
###Code
from sklearn.naive_bayes import MultinomialNB
###Output
_____no_output_____
###Markdown
Vanilla
###Code
# alpha=1.0, fit_prior=True, class_prior=None
model_mnb_vanilla = MultinomialNB()
prediction_from_model(model_mnb_vanilla, X_train, y_train, y_test) # 0.62527190033616764
###Output
_____no_output_____
###Markdown
Optimized
###Code
model_mnb = MultinomialNB(alpha=0.08)
prediction_from_model(model_mnb, X_train, y_train, y_test) # 0.77026346702466875
###Output
_____no_output_____
###Markdown
Linear SVC
###Code
from sklearn.svm import LinearSVC, SVC
###Output
_____no_output_____
###Markdown
Vanilla
###Code
# model_svm_vanilla = LinearSVC()
# Needed due to lack of predict_proba() for LinearSVC -> very slow
model_svm_vanilla = SVC(random_state=seed, kernel='linear', probability=True)
%%time
prediction_from_model(model_svm_vanilla, X_train, y_train, y_test) # 0.77313276193807945
###Output
CPU times: user 15min 32s, sys: 4.31 s, total: 15min 36s
Wall time: 15min 40s
###Markdown
Optimized
###Code
#model_svm = LinearSVC(C=1.0, class_weight='balanced') # 0.78895527208138228
# Needed due to lack of predict_proba() for LinearSVC -> very slow
# 0.78816793893129766
model_svm = SVC(kernel='linear', probability=True, C=1.0, class_weight='balanced')
%%time
prediction_from_model(model_svm, X_train, y_train, y_test) # 0.78816793893129766
###Output
CPU times: user 17min 46s, sys: 3.04 s, total: 17min 49s
Wall time: 17min 51s
###Markdown
SVC Vanilla
###Code
model_svc_vanilla = SVC()
%%time
prediction_from_model(model_svc_vanilla, X_train, y_train, y_test) # 0.51270131163871824
###Output
CPU times: user 3min 8s, sys: 961 ms, total: 3min 9s
Wall time: 3min 10s
###Markdown
Optimized
###Code
model_svc = SVC(C=15900.0, class_weight='balanced', cache_size=500)
%%time
prediction_from_model(model_svc, X_train, y_train, y_test) # 0.78778083077420391
###Output
CPU times: user 3min 40s, sys: 996 ms, total: 3min 41s
Wall time: 3min 42s
###Markdown
Ensemble
###Code
ensemble = VotingClassifier(estimators=[('Logistic Regression', model_lr),
('KNN', model_knn),
('Naive Bayes', model_mnb)],
voting='soft')
#model_dt
#model_rf
#model_mnb
#model_knn
#model_svc
#model_svm
#model_lr
top2_estimators = [('4', model_knn), ('7', model_lr)]
top2b_estimators = [('3', model_mnb), ('7', model_lr)]
top3_estimators = [('3', model_mnb), ('4', model_knn), ('7', model_lr)]
all_estimators = [('1', model_dt),('2', model_rf), ('3', model_mnb), ('4', model_knn), ('7', model_lr)]
ensemble_top2 = VotingClassifier(estimators=top2_estimators,
voting='soft')
ensemble_top2b = VotingClassifier(estimators=top2b_estimators,
voting='soft')
ensemble_top3 = VotingClassifier(estimators=top3_estimators,
voting='soft')
ensemble_all = VotingClassifier(estimators=all_estimators,
voting='soft')
# Soft (cannot use hard because predict_proba is not defined)
# model_lr, model_knn, model_mnb: 0.79495052882975081
%%time
prediction_from_model(ensemble_top2, X_train, y_train, y_test) # 0.80006788866259326
%%time
prediction_from_model(ensemble_top2b, X_train, y_train, y_test) # 0.79670239076669425
%%time
prediction_from_model(ensemble_top3, X_train, y_train, y_test) # 0.79495052882975081
%%time
prediction_from_model(ensemble_all, X_train, y_train, y_test) # 0.77105907025515563
###Output
[LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear][LibLinear]CPU times: user 2min 48s, sys: 41.4 s, total: 3min 29s
Wall time: 3min 23s
_notebooks/2020-07-03-Statistical Thinking in Python (Part 1).ipynb | ###Markdown
"Statistical Thinking in Python (Part 1)"> "Building the foundation you need to think statistically, speak the language of your data, and understand what your data is telling you."- toc: true- comments: true- author: Victor Omondi- categories: [statistical-thinking, eda, data-science]- image: images/statistical-thinking-1.png Graphical exploratory data analysisBefore diving into sophisticated statistical inference techniques, we should first explore our data by plotting them and computing simple summary statistics. This process, called **exploratory data analysis**, is a crucial first step in statistical analysis of data. Introduction to Exploratory Data AnalysisExploratory Data Analysis is the process of organizing, plo!ing, and summarizing a data set>“Exploratory data analysis can never be thewhole story, but nothing else can serve as thefoundation stone. ” > ~ John Tukey Tukey's comments on EDA* Exploratory data analysis is detective work.* There is no excuse for failing to plot and look.* The greatest value of a picture is that it forces us to notice what we never expected to see.* It is important to understand what you can do before you learn how to measure how well you seem to have done it.> If you don't have time to do EDA, you really don't have time to do hypothesis tests. And you should always do EDA first. Advantages of graphical EDA* It often involves converting tabular data into graphical form.* If done well, graphical representations can allow for more rapid interpretation of data.* There is no excuse for neglecting to do graphical EDA.> While a good, informative plot can sometimes be the end point of an analysis, it is more like a beginning: it helps guide you in the quantitative statistical analyses that come next. Plotting a histogram Plotting a histogram of iris dataWe will use a classic data set collected by botanist Edward Anderson and made famous by Ronald Fisher, one of the most prolific statisticians in history. Anderson carefully measured the anatomical properties of samples of three different species of iris, Iris setosa, Iris versicolor, and Iris virginica. The full data set is [available as part of scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html). Here, you will work with his measurements of petal length.We will plot a histogram of the petal lengths of his 50 samples of Iris versicolor using matplotlib/seaborn's default settings. The subset of the data set containing the Iris versicolor petal lengths in units of centimeters (cm) is stored in the NumPy array `versicolor_petal_length`. Libraries
###Code
# Import plotting modules
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
# Set default Seaborn style
sns.set()
%matplotlib inline
versicolor_petal_length = np.array([4.7, 4.5, 4.9, 4. , 4.6, 4.5, 4.7, 3.3, 4.6, 3.9, 3.5, 4.2, 4. ,
4.7, 3.6, 4.4, 4.5, 4.1, 4.5, 3.9, 4.8, 4. , 4.9, 4.7, 4.3, 4.4,
4.8, 5. , 4.5, 3.5, 3.8, 3.7, 3.9, 5.1, 4.5, 4.5, 4.7, 4.4, 4.1,
4. , 4.4, 4.6, 4. , 3.3, 4.2, 4.2, 4.2, 4.3, 3. , 4.1])
# Plot histogram of versicolor petal lengths
plt.hist(versicolor_petal_length)
plt.ylabel("count")
plt.xlabel("petal length (cm)")
plt.show()
###Output
_____no_output_____
###Markdown
Adjusting the number of bins in a histogramThe histogram we just made had ten bins. This is the default of matplotlib. > Tip: The "square root rule" is a commonly-used rule of thumb for choosing the number of bins: choose the number of bins to be the square root of the number of samples. We will plot the histogram of _Iris versicolor petal lengths_ again, this time using the square root rule for the number of bins. We specify the number of bins using the `bins` keyword argument of `plt.hist()`.
###Code
# Compute number of data points: n_data
n_data = len(versicolor_petal_length)
# Number of bins is the square root of number of data points: n_bins
n_bins = np.sqrt(n_data)
# Convert number of bins to integer: n_bins
n_bins = int(n_bins)
# Plot the histogram
_ = plt.hist(versicolor_petal_length, bins=n_bins)
# Label axes
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('count')
# Show histogram
plt.show()
###Output
_____no_output_____
###Markdown
Plot all data: Bee swarm plots Bee swarm plotWe will make a bee swarm plot of the iris petal lengths. The x-axis will contain each of the three species, and the y-axis the petal lengths.
###Code
iris_petal_lengths = pd.read_csv("../datasets/iris_petal_lengths.csv")
iris_petal_lengths.head()
iris_petal_lengths.shape
iris_petal_lengths.tail()
# Create bee swarm plot with Seaborn's default settings
_ = sns.swarmplot(data=iris_petal_lengths, x="species", y="petal length (cm)")
# Label the axes
_ = plt.xlabel("species")
_ = plt.ylabel("petal length (cm)")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Interpreting a bee swarm plot* _I. virginica_ petals tend to be the longest, and _I. setosa_ petals tend to be the shortest of the three species.> Note: Notice that we said **"tend to be."** Some individual _I. virginica_ flowers may be shorter than individual _I. versicolor_ flowers. It is also possible that an individual _I. setosa_ flower may have longer petals than in individual _I. versicolor_ flower, though this is highly unlikely, and was not observed by Anderson. Plot all data: ECDFs> Note: Empirical cumulative distribution function (ECDF) Computing the ECDFWe will write a function that takes as input a 1D array of data and then returns the `x` and `y` values of the ECDF.> Important: ECDFs are among the most important plots in statistical analysis.
###Code
def ecdf(data):
"""Compute ECDF for a one-dimensional array of measurements."""
# Number of data points: n
n = len(data)
# x-data for the ECDF: x
x = np.sort(data)
# y-data for the ECDF: y
y = np.arange(1, n+1) / n
return x, y
###Output
_____no_output_____
###Markdown
Plotting the ECDFWe will now use `ecdf()` function to compute the ECDF for the petal lengths of Anderson's _Iris versicolor_ flowers. We will then plot the ECDF.> Warning: `ecdf()` function returns two arrays so we will need to unpack them. An example of such unpacking is `x, y = foo(data)`, for some function `foo()`.
###Code
# Compute ECDF for versicolor data: x_vers, y_vers
x_vers, y_vers = ecdf(versicolor_petal_length)
# Generate plot
_ = plt.plot(x_vers, y_vers, marker=".", linestyle="none")
# Label the axes
_ = plt.xlabel("versicolor petal length, (cm)")
_ = plt.ylabel("ECDF")
# Display the plot
plt.show()
###Output
_____no_output_____
###Markdown
Comparison of ECDFsECDFs also allow us to compare two or more distributions ***(though plots get cluttered if you have too many)***. Here, we will plot ECDFs for the petal lengths of all three iris species. > Important: we already wrote a function to generate ECDFs so we can put it to good use!
###Code
setosa_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "setosa"]
versicolor_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "versicolor"]
virginica_petal_length = iris_petal_lengths["petal length (cm)"][iris_petal_lengths.species == "virginica"]
setosa_petal_length.head()
# Compute ECDFs
x_set, y_set = ecdf(setosa_petal_length)
x_vers, y_vers = ecdf(versicolor_petal_length)
x_virg, y_virg = ecdf(virginica_petal_length)
# Plot all ECDFs on the same plot
_ = plt.plot(x_set, y_set, marker=".", linestyle="none")
_ = plt.plot(x_vers, y_vers, marker=".", linestyle="none")
_ = plt.plot(x_virg, y_virg, marker=".", linestyle="none")
# Annotate the plot
plt.legend(('setosa', 'versicolor', 'virginica'), loc='lower right')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Display the plot
plt.show()
###Output
_____no_output_____
###Markdown
> Note: The ECDFs expose clear differences among the species. Setosa is much shorter, also with less absolute variability in petal length than versicolor and virginica. Onward toward the whole story!> Important: “Exploratory data analysis can never be the whole story, but nothing else can serve as the foundation stone.” —John Tukey Quantitative exploratory data analysisWe will compute useful summary statistics, which serve to concisely describe salient features of a dataset with a few numbers. Introduction to summary statistics: The sample mean and median$$mean = \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$> Outliers● Data points whose value is far greater or less than most of the rest of the data> The median● The middle value of a data set> Note: An outlier can significantly affect the value of the mean, but not the median Computing meansThe mean of all measurements gives an indication of the typical magnitude of a measurement. It is computed using `np.mean()`.
###Code
# Compute the mean: mean_length_vers
mean_length_vers = np.mean(versicolor_petal_length)
# Print the result with some nice formatting
print('I. versicolor:', mean_length_vers, 'cm')
###Output
I. versicolor: 4.26 cm
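###Markdown
> Note: As a quick sketch (not one of the original exercises), appending one absurdly large value to the versicolor petal lengths shows how much a single outlier shifts the mean while the median barely moves; the value `100` below is an arbitrary, hypothetical outlier.
###Code
# Sketch: one extreme value moves the mean far more than the median
with_outlier = np.append(versicolor_petal_length, 100)  # hypothetical 100 cm "petal"
print('mean without/with outlier:  ', np.mean(versicolor_petal_length), np.mean(with_outlier))
print('median without/with outlier:', np.median(versicolor_petal_length), np.median(with_outlier))
###Output
_____no_output_____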
###Markdown
Percentiles, outliers, and box plots Computing percentiles We will compute the percentiles of petal length of _Iris versicolor_.
###Code
# Specify array of percentiles: percentiles
percentiles = np.array([2.5, 25, 50, 75, 97.5])
# Compute percentiles: ptiles_vers
ptiles_vers = np.percentile(versicolor_petal_length, percentiles)
# Print the result
ptiles_vers
###Output
_____no_output_____
###Markdown
Comparing percentiles to ECDFTo see how the percentiles relate to the ECDF, we will plot the percentiles of _Iris versicolor_ petal lengths on the ECDF plot.
###Code
# Plot the ECDF
_ = plt.plot(x_vers, y_vers, '.')
_ = plt.xlabel('petal length (cm)')
_ = plt.ylabel('ECDF')
# Overlay percentiles as red diamonds.
_ = plt.plot(ptiles_vers, percentiles/100, marker='D', color='red',
linestyle="none")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Box-and-whisker plot> Warning: Making a box plot for the petal lengths is unnecessary because the iris data set is not too large and the bee swarm plot works fine. We will make a box plot of the iris petal lengths.
###Code
# Create box plot with Seaborn's default settings
_ = sns.boxplot(data=iris_petal_lengths, x="species", y="petal length (cm)")
# Label the axes
_ = plt.xlabel("species")
_ = plt.ylabel("petal length (cm)")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Variance and standard deviation> Variance● The mean squared distance of the data from their mean> Tip: Variance: informally, a measure of the spread of data> $$variance = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2$$> Standard deviation$$std = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})^2}$$ Computing the varianceWe will explicitly compute the variance of the petal length of _Iris versicolor_, and then use `np.var()` to compute it.
###Code
# Array of differences to mean: differences
differences = versicolor_petal_length-np.mean(versicolor_petal_length)
# Square the differences: diff_sq
diff_sq = differences**2
# Compute the mean square difference: variance_explicit
variance_explicit = np.mean(diff_sq)
# Compute the variance using NumPy: variance_np
variance_np = np.var(versicolor_petal_length)
# Print the results
print(variance_explicit, variance_np)
###Output
0.21640000000000004 0.21640000000000004
###Markdown
The standard deviation and the variance The standard deviation is the square root of the variance.
###Code
# Compute the variance: variance
variance = np.var(versicolor_petal_length)
# Print the square root of the variance
print(np.sqrt(variance))
# Print the standard deviation
print(np.std(versicolor_petal_length))
###Output
0.4651881339845203
0.4651881339845203
###Markdown
Covariance and the Pearson correlation coefficient> Covariance● A measure of how two quantities vary together> $$covariance = \frac{1}{n} \sum_{i=1}^{n} (x_i - \bar{x})\ (y_i - \bar{y})$$> Pearson correlation coefficient> $$\rho = Pearson\ correlation = \frac{covariance}{(std\ of\ x)\ (std\ of\ y)} = \frac{variability\ due\ to\ codependence}{independent\ variability}$$ Scatter plotsWhen we made bee swarm plots, box plots, and ECDF plots in previous exercises, we compared the petal lengths of different species of _iris_. But what if we want to compare two properties of a single species? This is exactly what we will do: we will make a **scatter plot** of the petal length and width measurements of Anderson's _Iris versicolor_ flowers. > Important: If the flower scales (that is, it preserves its proportion as it grows), we would expect the length and width to be correlated.
###Code
versicolor_petal_width = np.array([1.4, 1.5, 1.5, 1.3, 1.5, 1.3, 1.6, 1. , 1.3, 1.4, 1. , 1.5, 1. ,
1.4, 1.3, 1.4, 1.5, 1. , 1.5, 1.1, 1.8, 1.3, 1.5, 1.2, 1.3, 1.4,
1.4, 1.7, 1.5, 1. , 1.1, 1. , 1.2, 1.6, 1.5, 1.6, 1.5, 1.3, 1.3,
1.3, 1.2, 1.4, 1.2, 1. , 1.3, 1.2, 1.3, 1.3, 1.1, 1.3])
# Make a scatter plot
_ = plt.plot(versicolor_petal_length, versicolor_petal_width, marker=".", linestyle="none")
# Label the axes
_ = plt.xlabel("petal length (cm)")
_ = plt.ylabel("petal width (cm)")
# Show the result
plt.show()
###Output
_____no_output_____
###Markdown
> Tip: We see some correlation. Longer petals also tend to be wider. Computing the covarianceThe covariance may be computed using the NumPy function `np.cov()`. For example, if we have two sets of data $x$ and $y$, `np.cov(x, y)` returns a 2D array where entries `[0,1]` and `[1,0]` are the covariances. Entry `[0,0]` is the variance of the data in `x`, and entry `[1,1]` is the variance of the data in `y`. This 2D output array is called the **covariance matrix**, since it organizes the self- and covariance.
###Code
# Compute the covariance matrix: covariance_matrix
covariance_matrix = np.cov(versicolor_petal_length, versicolor_petal_width)
# Print covariance matrix
print(covariance_matrix)
# Extract covariance of length and width of petals: petal_cov
petal_cov = covariance_matrix[0,1]
# Print the length/width covariance
print(petal_cov)
###Output
[[0.22081633 0.07310204]
[0.07310204 0.03910612]]
0.07310204081632653
###Markdown
Computing the Pearson correlation coefficientThe Pearson correlation coefficient, also called the **Pearson r**, is often easier to interpret than the covariance. It is computed using the `np.corrcoef()` function. Like `np.cov()`, it takes two arrays as arguments and returns a 2D array. Entries `[0,0]` and `[1,1]` are necessarily equal to `1`, and the value we are after is entry `[0,1]`.We will write a function, `pearson_r(x, y)`, that takes in two arrays and returns the Pearson correlation coefficient. We will then use this function to compute it for the petal lengths and widths of $I.\ versicolor$.
###Code
def pearson_r(x, y):
"""Compute Pearson correlation coefficient between two arrays."""
# Compute correlation matrix: corr_mat
corr_mat = np.corrcoef(x,y)
# Return entry [0,1]
return corr_mat[0,1]
# Compute Pearson correlation coefficient for I. versicolor: r
r = pearson_r(versicolor_petal_length, versicolor_petal_width)
# Print the result
print(r)
###Output
0.7866680885228169
###Markdown
Thinking probabilistically-- Discrete variablesStatistical inference rests upon probability. Because we can very rarely say anything meaningful with absolute certainty from data, we use probabilistic language to make quantitative statements about data. We will think probabilistically about discrete quantities: those that can only take certain values, like integers. Probabilistic logic and statistical inference the goal of statistical inference* To draw probabilistic conclusions about what we might expect if we collected the same data again.* To draw actionable conclusions from data.* To draw more general conclusions from relatively few data or observations.> Note: Statistical inference involves taking your data to probabilistic conclusions about what you would expect if you took even more data, and you can make decisions based on these conclusions. Why we use the probabilistic language in statistical inference* Probability provides a measure of uncertainty and this is crucial because we can quantify what we might expect if the data were acquired again.* Data are almost never exactly the same when acquired again, and probability allows us to say how much we expect them to vary. We need probability to say how data might vary if acquired again.> Note: Probabilistic language is in fact very precise. It precisely describes uncertainty. Random number generators and hacker statistics> Hacker statistics- Uses simulated repeated measurements to computeprobabilities.> The np.random module- Suite of functions based on random number generation- `np.random.random()`: draw a number between $0$ and $1$ > Bernoulli trial● An experiment that has two options,"success" (True) and "failure" (False).> Random number seed- Integer fed into random number generating algorithm- Manually seed random number generator if you need reproducibility- Specified using `np.random.seed()`> Hacker stats probabilities- Determine how to simulate data- Simulate many many times- Probability is approximately fraction of trials with the outcome of interest Generating random numbers using the np.random modulewe'll generate lots of random numbers between zero and one, and then plot a histogram of the results. If the numbers are truly random, all bars in the histogram should be of (close to) equal height.
###Code
# Seed the random number generator
np.random.seed(42)
# Initialize random numbers: random_numbers
random_numbers = np.empty(100000)
# Generate random numbers by looping over range(100000)
for i in range(100000):
random_numbers[i] = np.random.random()
# Plot a histogram
_ = plt.hist(random_numbers, bins=316, histtype="step", density=True)
_ = plt.xlabel("random numbers")
_ = plt.ylabel("counts")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
> Note: The histogram is almost exactly flat across the top, indicating that there is equal chance that a randomly-generated number is in any of the bins of the histogram. The np.random module and Bernoulli trials> Tip: You can think of a Bernoulli trial as a flip of a possibly biased coin. Each coin flip has a probability $p$ of landing heads (success) and probability $1−p$ of landing tails (failure). We will write a function to perform `n` Bernoulli trials, `perform_bernoulli_trials(n, p)`, which returns the number of successes out of `n` Bernoulli trials, each of which has probability $p$ of success. To perform each Bernoulli trial, we will use the `np.random.random()` function, which returns a random number between zero and one.
###Code
def perform_bernoulli_trials(n, p):
"""Perform n Bernoulli trials with success probability p
and return number of successes."""
# Initialize number of successes: n_success
    n_success = 0
# Perform trials
for i in range(n):
# Choose random number between zero and one: random_number
random_number = np.random.random()
# If less than p, it's a success so add one to n_success
if random_number < p:
n_success += 1
return n_success
###Output
_____no_output_____
###Markdown
How many defaults might we expect?Let's say a bank made 100 mortgage loans. It is possible that anywhere between $0$ and $100$ of the loans will be defaulted upon. We would like to know the probability of getting a given number of defaults, given that the probability of a default is $p = 0.05$. To investigate this, we will do a simulation. We will perform 100 Bernoulli trials using the `perform_bernoulli_trials()` function and record how many defaults we get. Here, a success is a default. > Important: Remember that the word "success" just means that the Bernoulli trial evaluates to True, i.e., did the loan recipient default? You will do this for another $100$ Bernoulli trials. And again and again until we have tried it $1000$ times. Then, we will plot a histogram describing the probability of the number of defaults.
###Code
# Seed random number generator
np.random.seed(42)
# Initialize the number of defaults: n_defaults
n_defaults = np.empty(1000)
# Compute the number of defaults
for i in range(1000):
n_defaults[i] = perform_bernoulli_trials(100, 0.05)
# Plot the histogram with default number of bins; label your axes
_ = plt.hist(n_defaults, density=True)
_ = plt.xlabel('number of defaults out of 100 loans')
_ = plt.ylabel('probability')
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
> Warning: This is actually not an optimal way to plot a histogram when the results are known to be integers. We will revisit this. Will the bank fail?If interest rates are such that the bank will lose money if 10 or more of its loans are defaulted upon, what is the probability that the bank will lose money?
###Code
# Compute ECDF: x, y
x,y = ecdf(n_defaults)
# Plot the ECDF with labeled axes
_ = plt.plot(x,y, marker=".", linestyle="none")
_ = plt.xlabel("number of defaults")
_ = plt.ylabel("ECDF")
# Show the plot
plt.show()
# Compute the number of 100-loan simulations with 10 or more defaults: n_lose_money
n_lose_money = np.sum(n_defaults >= 10)
# Compute and print probability of losing money
print('Probability of losing money =', n_lose_money / len(n_defaults))
###Output
_____no_output_____
###Markdown
> Note: we most likely get 5/100 defaults. But we still have about a 2% chance of getting 10 or more defaults out of 100 loans. Probability distributions and stories: The Binomial distribution> Probability mass function (PMF)- The set of probabilities of discrete outcomes> Probability distribution- A mathematical description of outcomes> Discrete Uniform distribution: the story- The outcome of rolling a single fair die is Discrete Uniformly distributed.> Binomial distribution: the story- The number $r$ of successes in $n$ Bernoulli trials withprobability $p$ of success, is Binomially distributed- The number $r$ of heads in $4$ coin flips with probability$0.5$ of heads, is Binomially distributed Sampling out of the Binomial distributionWe will compute the probability mass function for the number of defaults we would expect for $100$ loans as in the last section, but instead of simulating all of the Bernoulli trials, we will perform the sampling using `np.random.binomial()`{% fn 1 %}.> Note: This is identical to the calculation we did in the last set of exercises using our custom-written `perform_bernoulli_trials()` function, but far more computationally efficient. Given this extra efficiency, we will take $10,000$ samples instead of $1000$. After taking the samples, we will plot the CDF. This CDF that we are plotting is that of the Binomial distribution.
###Code
# Take 10,000 samples out of the binomial distribution: n_defaults
n_defaults = np.random.binomial(100, 0.05, size=10000)
# Compute CDF: x, y
x,y = ecdf(n_defaults)
# Plot the CDF with axis labels
_ = plt.plot(x,y, marker=".", linestyle="-")
_ = plt.xlabel("number of defaults out of 100 loans")
_ = plt.ylabel("CDF")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
> Tip: If you know the story, using built-in algorithms to directly sample out of the distribution is ***much*** faster. Plotting the Binomial PMF> Warning: plotting a nice looking PMF requires a bit of matplotlib trickery that we will not go into here. We will plot the PMF of the Binomial distribution as a histogram. The trick is setting up the edges of the `bins` to pass to `plt.hist()` via the `bins` keyword argument. We want the bins centered on the integers. So, the edges of the bins should be $-0.5, 0.5, 1.5, 2.5, ...$ up to `max(n_defaults) + 1.5`. We can generate an array like this using `np.arange()` and then subtracting `0.5` from the array.
###Code
# Compute bin edges: bins
bins = np.arange(0, max(n_defaults) + 1.5) - 0.5
# Generate histogram
_ = plt.hist(n_defaults, density=True, bins=bins)
# Label axes
_ = plt.xlabel("number of defaults out of 100 loans")
_ = plt.ylabel("probability")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Poisson processes and the Poisson distribution> Poisson process- The timing of the next event is completely independent of when the previous event happened> Examples of Poisson processes- Natural births in a given hospital- Hit on a website during a given hour- Meteor strikes- Molecular collisions in a gas- Aviation incidents- Buses in Poissonville> Poisson distribution- The number $r$ of arrivals of a Poisson process in a given time interval with average rate of $λ$ arrivals per interval is Poisson distributed.- The number $r$ of hits on a website in one hour with an average hit rate of 6 hits per hour is Poisson distributed.> Poisson Distribution- Limit of the Binomial distribution for low probability of success and large number of trials.- That is, for rare events. Relationship between Binomial and Poisson distributions> Important: Poisson distribution is a limit of the Binomial distribution for rare events.> Tip: Poisson distribution with arrival rate equal to $np$ approximates a Binomial distribution for $n$ Bernoulli trials with probability $p$ of success (with $n$ large and $p$ small). Importantly, the Poisson distribution is often simpler to work with because it has only one parameter instead of two for the Binomial distribution. Let's explore these two distributions computationally. We will compute the mean and standard deviation of samples from a Poisson distribution with an arrival rate of $10$. Then, we will compute the mean and standard deviation of samples from a Binomial distribution with parameters $n$ and $p$ such that $np = 10$.
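> Note: For reference, the Poisson PMF for the number of arrivals $r$ in an interval with mean arrival rate $\lambda$ is$$P(r) = \frac{\lambda^r e^{-\lambda}}{r!}$$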
###Code
# Draw 10,000 samples out of Poisson distribution: samples_poisson
samples_poisson = np.random.poisson(10, size=10000)
# Print the mean and standard deviation
print('Poisson: ', np.mean(samples_poisson),
np.std(samples_poisson))
# Specify values of n and p to consider for Binomial: n, p
n = [20, 100, 1000]
p = [.5, .1, .01]
# Draw 10,000 samples for each n,p pair: samples_binomial
for i in range(3):
samples_binomial = np.random.binomial(n[i],p[i], size=10000)
# Print results
print('n =', n[i], 'Binom:', np.mean(samples_binomial),
np.std(samples_binomial))
###Output
Poisson: 10.0145 3.1713545607516043
n = 20 Binom: 10.0592 2.23523944131272
n = 100 Binom: 10.0441 2.9942536949964675
n = 1000 Binom: 10.0129 3.139639085946026
###Markdown
> Note: The means are all about the same, which can be shown to be true by doing some pen-and-paper work. The standard deviation of the Binomial distribution gets closer and closer to that of the Poisson distribution as the probability $p$ gets lower and lower. Was 2015 anomalous?In baseball, a no-hitter is a game in which a pitcher does not allow the other team to get a hit. This is a rare event, and since the beginning of the so-called modern era of baseball (starting in 1901), there have only been 251 of them through the 2015 season in over 200,000 games. The ECDF of the number of no-hitters in a season is shown to the right. The probability distribution that would be appropriate to describe the number of no-hitters we would expect in a given season? is Both Binomial and Poisson, though Poisson is easier to model and compute.> Important: When we have rare events (low $p$, high $n$), the Binomial distribution is Poisson. This has a single parameter, the mean number of successes per time interval, in our case the mean number of no-hitters per season.1990 and 2015 featured the most no-hitters of any season of baseball (there were seven). Given that there are on average $\frac{251}{115}$ no-hitters per season, what is the probability of having seven or more in a season? Let's find out
###Code
# Draw 10,000 samples out of Poisson distribution: n_nohitters
n_nohitters = np.random.poisson(251/115, size=10000)
# Compute number of samples that are seven or greater: n_large
n_large = np.sum(n_nohitters >= 7)
# Compute probability of getting seven or more: p_large
p_large = n_large/10000
# Print the result
print('Probability of seven or more no-hitters:', p_large)
###Output
Probability of seven or more no-hitters: 0.0072
###Markdown
> Note: The result is about $0.007$. This means that it is not that improbable to see a 7-or-more no-hitter season in a century. We have seen two in a century and a half, so it is not unreasonable. Thinking probabilistically-- Continuous variablesIt’s time to move onto continuous variables, such as those that can take on any fractional value. Many of the principles are the same, but there are some subtleties. We will be speaking the probabilistic language needed to launch into the inference techniques. Probability density functions> Continuous variables- Quantities that can take any value, not justdiscrete values> Probability density function (PDF)- Continuous analog to the PMF- Mathematical description of the relative likelihoodof observing a value of a continuous variable Introduction to the Normal distribution> Normal distribution- Describes a continuous variable whose PDF has a single symmetric peak.>|Parameter| |Calculated from data||---|---|---||mean of a Normal distribution|≠| mean computed from data||st. dev. of a Normal distribution|≠|standard deviation computed from data| The Normal PDF
###Code
# Draw 100000 samples from Normal distribution with stds of interest: samples_std1, samples_std3, samples_std10
samples_std1 = np.random.normal(20,1,size=100000)
samples_std3 = np.random.normal(20, 3, size=100000)
samples_std10 = np.random.normal(20, 10, size=100000)
# Make histograms
_ = plt.hist(samples_std1, density=True, histtype="step", bins=100)
_ = plt.hist(samples_std3, density=True, histtype="step", bins=100)
_ = plt.hist(samples_std10, density=True, histtype="step", bins=100)
# Make a legend, set limits and show plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'))
plt.ylim(-0.01, 0.42)
plt.show()
###Output
_____no_output_____
###Markdown
> Note: You can see how the different standard deviations result in PDFs of different widths. The peaks are all centered at the mean of 20. The Normal CDF
###Code
# Generate CDFs
x_std1, y_std1 = ecdf(samples_std1)
x_std3, y_std3 = ecdf(samples_std3)
x_std10, y_std10 = ecdf(samples_std10)
# Plot CDFs
_ = plt.plot(x_std1, y_std1, marker=".", linestyle="none")
_ = plt.plot(x_std3, y_std3, marker=".", linestyle="none")
_ = plt.plot(x_std10, y_std10, marker=".", linestyle="none")
# Make a legend and show the plot
_ = plt.legend(('std = 1', 'std = 3', 'std = 10'), loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
> Note: The CDFs all pass through the mean at the 50th percentile; the mean and median of a Normal distribution are equal. The width of the CDF varies with the standard deviation. The Normal distribution: Properties and warnings Are the Belmont Stakes results Normally distributed?Since 1926, the Belmont Stakes has been a $1.5$-mile-long race for 3-year-old thoroughbred horses. Secretariat ran the fastest Belmont Stakes in history in $1973$. While that was the fastest year, 1970 was the slowest because of unusually wet and sloppy conditions. With these two outliers removed from the data set, we will compute the mean and standard deviation of the Belmont winners' times, sample out of a Normal distribution with this mean and standard deviation using the `np.random.normal()` function, and plot a CDF. We then overlay the ECDF of the actual winning Belmont times {% fn 2 %}.
###Code
belmont_no_outliers = np.array([148.51, 146.65, 148.52, 150.7 , 150.42, 150.88, 151.57, 147.54,
149.65, 148.74, 147.86, 148.75, 147.5 , 148.26, 149.71, 146.56,
151.19, 147.88, 149.16, 148.82, 148.96, 152.02, 146.82, 149.97,
146.13, 148.1 , 147.2 , 146. , 146.4 , 148.2 , 149.8 , 147. ,
147.2 , 147.8 , 148.2 , 149. , 149.8 , 148.6 , 146.8 , 149.6 ,
149. , 148.2 , 149.2 , 148. , 150.4 , 148.8 , 147.2 , 148.8 ,
149.6 , 148.4 , 148.4 , 150.2 , 148.8 , 149.2 , 149.2 , 148.4 ,
150.2 , 146.6 , 149.8 , 149. , 150.8 , 148.6 , 150.2 , 149. ,
148.6 , 150.2 , 148.2 , 149.4 , 150.8 , 150.2 , 152.2 , 148.2 ,
149.2 , 151. , 149.6 , 149.6 , 149.4 , 148.6 , 150. , 150.6 ,
149.2 , 152.6 , 152.8 , 149.6 , 151.6 , 152.8 , 153.2 , 152.4 ,
152.2 ])
# Compute mean and standard deviation: mu, sigma
mu = np.mean(belmont_no_outliers)
sigma = np.std(belmont_no_outliers)
# Sample out of a normal distribution with this mu and sigma: samples
samples = np.random.normal(mu, sigma, size=10000)
# Get the CDF of the samples and of the data
x_theor, y_theor = ecdf(samples)
x,y = ecdf(belmont_no_outliers)
# Plot the CDFs and show the plot
_ = plt.plot(x_theor, y_theor)
_ = plt.plot(x, y, marker='.', linestyle='none')
_ = plt.xlabel('Belmont winning time (sec.)')
_ = plt.ylabel('CDF')
plt.show()
###Output
_____no_output_____
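###Markdown
As an additional, added check on Normality (a minimal sketch reusing `samples` and `belmont_no_outliers` from the cell above), we can compare a few percentiles of the data with the same percentiles of the theoretical samples.
###Code
# Compare selected percentiles of the data against the Normal samples
percentiles = [2.5, 25, 50, 75, 97.5]
print('Data: ', np.percentile(belmont_no_outliers, percentiles))
print('Model:', np.percentile(samples, percentiles))
###Output
_____no_output_____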
###Markdown
> Note: The theoretical CDF and the ECDF of the data suggest that the winning Belmont times are, indeed, Normally distributed. This also suggests that in the last 100 years or so, there have not been major technological or training advances that have significantly affected the speed at which horses can run this race. What are the chances of a horse matching or beating Secretariat's record?We now compute the probability that the winner of a given Belmont Stakes will run it as fast as or faster than Secretariat's record of 144 seconds, assuming that the Belmont winners' times are Normally distributed (with the 1970 and 1973 outliers removed).
###Code
# Take a million samples out of the Normal distribution: samples
samples = np.random.normal(mu, sigma, size=1000000)
# Compute the fraction that are faster than 144 seconds: prob
prob = np.sum(samples<=144)/len(samples)
# Print the result
print('Probability of besting Secretariat:', prob)
###Output
Probability of besting Secretariat: 0.000614
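###Markdown
As a cross-check on the sampled estimate (an added sketch, assuming scipy is available), the same probability can be computed directly from the Normal CDF with the mean and standard deviation fitted above.
###Code
# Exact probability under the fitted Normal model: P(winning time <= 144 s)
from scipy.stats import norm
p_exact = norm.cdf(144, mu, sigma)
print('Exact probability of besting Secretariat:', p_exact)
###Output
_____no_output_____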
###Markdown
> Note: We had to take a million samples because the probability of a fast time is very low and we had to be sure to sample enough. We get that there is only about a 0.06% chance of a horse running the Belmont as fast as Secretariat. The Exponential distributionThe waiting time between arrivals of a Poisson process is Exponentially distributed> Possible Poisson process- Nuclear incidents: - Timing of one is independent of all others $f(x; \frac{1}{\beta}) = \frac{1}{\beta} \exp(-\frac{x}{\beta})$ If you have a story, you can simulate it!Sometimes, the story describing our probability distribution does not have a named distribution to go along with it. In these cases, fear not! You can always simulate it.Earlier, we looked at the rare event of no-hitters in Major League Baseball. _Hitting the cycle_ is another rare baseball event. When a batter hits the cycle, he gets all four kinds of hits (a single, a double, a triple, and a home run) in a single game. Like no-hitters, this can be modeled as a Poisson process, so the time between hits of the cycle is also Exponentially distributed.How long must we wait to see both a no-hitter and then a batter hit the cycle? The idea is that we have to wait some time for the no-hitter, and then after the no-hitter, we have to wait for hitting the cycle. Stated another way, what is the total waiting time for the arrival of two different Poisson processes? The total waiting time is the time waited for the no-hitter, plus the time waited for hitting the cycle.> Important: We will write a function to sample out of the distribution described by this story.
###Code
def successive_poisson(tau1, tau2, size=1):
"""Compute time for arrival of 2 successive Poisson processes."""
# Draw samples out of first exponential distribution: t1
t1 = np.random.exponential(tau1, size=size)
# Draw samples out of second exponential distribution: t2
t2 = np.random.exponential(tau2, size=size)
return t1 + t2
###Output
_____no_output_____
###Markdown
Distribution of no-hitters and cyclesWe'll use the sampling function to compute the waiting time to observe a no-hitter and hitting of the cycle. The mean waiting time for a no-hitter is $764$ games, and the mean waiting time for hitting the cycle is $715$ games.
###Code
# Draw samples of waiting times: waiting_times
waiting_times = successive_poisson(764, 715, size=100000)
# Make the histogram
_ = plt.hist(waiting_times, bins=100, density=True, histtype="step")
# Label axes
_ = plt.xlabel("Waiting times")
_ = plt.ylabel("probability")
# Show the plot
plt.show()
###Output
_____no_output_____
###Markdown
Notice that the PDF is peaked, unlike the Exponential PDF of the waiting time for a single Poisson process. For fun (and enlightenment), let's also plot the CDF.
###Code
x,y = ecdf(waiting_times)
_ = plt.plot(x,y)
_ = plt.plot(x,y, marker=".", linestyle="none")
_ = plt.xlabel("Waiting times")
_ = plt.ylabel("CDF")
plt.show()
###Output
_____no_output_____ |
Intro-Textbook/Chapter5_Lesson3_Vector_Data.ipynb | ###Markdown
https://www.earthdatascience.org/courses/intro-to-earth-data-science/file-formats/use-spatial-data/use-vector-data/ Lesson 3. Introduction to Spatial Vector Data File Formats in Open Source PythonPlotting a map with coastlines and country boundaries as well as cities.
###Code
# Import packages
import os
import matplotlib.pyplot as plt
import geopandas as gpd
import earthpy as et
###Output
_____no_output_____
###Markdown
The data: \Coastlines: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/physical/ne_50m_coastline.zip \Cities: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_populated_places_simple.zip \Countries: https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_admin_0_countries.zip \ Coastline
###Code
# Open and examine coastline data
coastlines_path = os.path.join("data",
"earthpy-downloads",
"ne_50m_coastline",
"ne_50m_coastline.shp")
coastlines = gpd.read_file(coastlines_path)
coastlines.head()
# Plot the data
f, ax1 = plt.subplots(figsize=(12, 6))
coastlines.plot(ax=ax1)
ax1.set(title="Global Coastline Boundaries")
plt.show()
# Examine the data
coastlines.geom_type
# Show info
coastlines.info()
###Output
<class 'geopandas.geodataframe.GeoDataFrame'>
RangeIndex: 1428 entries, 0 to 1427
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 scalerank 1428 non-null int64
1 featurecla 1428 non-null object
2 min_zoom 1428 non-null float64
3 geometry 1428 non-null geometry
dtypes: float64(1), geometry(1), int64(1), object(1)
memory usage: 44.8+ KB
###Markdown
Cities
###Code
# Create a path to the populated places shapefile
populated_places_path = os.path.join("data",
"earthpy-downloads",
"ne_50m_populated_places_simple",
"ne_50m_populated_places_simple.shp")
cities = gpd.read_file(populated_places_path)
cities.head()
# Examine shape type
cities.geom_type
# Examine file info
cities.info()
# Plot coastlines and cities together
f, ax1 = plt.subplots(figsize=(10, 6))
coastlines.plot(ax=ax1,
color="black")
cities.plot(ax=ax1)
# Add a title
ax1.set(title="Map of Cities and Global Lines")
plt.show()
###Output
_____no_output_____
###Markdown
Countries
###Code
# Create a path to the countries shapefile
countries_path = os.path.join("data",
"earthpy-downloads",
"ne_50m_admin_0_countries",
"ne_50m_admin_0_countries.shp")
countries = gpd.read_file(countries_path)
countries.head()
# Plot coastlines, cities, and countries together
f, ax1 = plt.subplots(figsize=(15, 10))
coastlines.plot(ax=ax1,
color="black")
countries.plot(ax=ax1, color='lightgrey', edgecolor='black')
cities.plot(ax=ax1, color='red', markersize=5)
# Add a title
ax1.set(title="Map of Cities, Countries, and Coastlines")
plt.show()
# Color the cities according to population
f, ax1 = plt.subplots(figsize=(15, 6))
coastlines.plot(ax=ax1,
color="black")
countries.plot(ax=ax1, color='lightgrey', edgecolor='grey')
cities.plot(ax=ax1, markersize=10, column='pop_max', legend=True)
# Add a title
ax1.set(title="Map of Cities, Countries, and Coastlines")
plt.show()
# Examine population
for i in range(0, len(cities['pop_max'])):
if cities['pop_max'][i] > 15000000:
print(cities['name'][i], cities['pop_min'][i], cities['pop_max'][i])
plt.hist(cities['pop_max'])
plt.show()
# --> Few cities with very high population
# Recolor cities using quantiles
f, ax1 = plt.subplots(figsize=(15, 6))
coastlines.plot(ax=ax1,
color="black")
countries.plot(ax=ax1, color='lightgrey', edgecolor='grey')
cities.plot(ax=ax1, markersize=10, column='pop_max', legend=True, scheme='quantiles')
ax1.set(title="Map of Cities, Countries, and Coastlines")
plt.show()
###Output
_____no_output_____
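###Markdown
One practical point worth adding here (not part of the original lesson): layers overlay correctly only when they share a coordinate reference system (CRS). A minimal check with geopandas, reusing the layers loaded above, might look like this.
###Code
# Confirm that all layers share a coordinate reference system (CRS)
print(coastlines.crs)
print(cities.crs)
print(countries.crs)
# If one layer differed, it could be reprojected to match, for example:
# countries = countries.to_crs(epsg=4326)
###Output
_____no_output_____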
###Markdown
Focus on one country
###Code
# Subset the countries data to just a single
united_states_boundary = countries.loc[countries['SOVEREIGNT']
== 'United States of America']
# Notice in the plot below, that only the boundary for the USA is in the new variable
f, ax = plt.subplots(figsize=(10, 6))
united_states_boundary.plot(ax=ax)
plt.show()
# Clip the cities data to the USA boundary
# Note -- this operation may take some time to run - be patient
cities_in_usa = gpd.clip(cities, united_states_boundary)
# Plot your final clipped data
f, ax = plt.subplots()
cities_in_usa.plot(ax=ax)
ax.set(title="Cities clipped to the USA Boundary")
plt.show()
###Output
_____no_output_____ |
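###Markdown
As an added sketch, the clipped layer could be written back to disk with geopandas; the output file names below are only examples.
###Code
# Write the clipped cities layer out as a shapefile and as GeoJSON
cities_in_usa.to_file("cities_in_usa.shp")
cities_in_usa.to_file("cities_in_usa.geojson", driver="GeoJSON")
###Output
_____no_output_____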
Python Pandas US Census names/Vinit_Nalawade_Project_Pandas.ipynb | ###Markdown
**Name: Vinit Nalawade**
###Code
#import required libraries
import pandas as pd
import numpy as np
#for counter operations
from collections import Counter
#for plotting graphs
import matplotlib.pyplot as plt
# Make the graphs a bit prettier, and bigger
pd.set_option('display.mpl_style', 'default')
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
%matplotlib inline
###Output
C:\Users\Vinit\AppData\Local\Enthought\Canopy\User\lib\site-packages\IPython\core\interactiveshell.py:2885: FutureWarning:
mpl_style had been deprecated and will be removed in a future version.
Use `matplotlib.pyplot.style.use` instead.
exec(code_obj, self.user_global_ns, self.user_ns)
###Markdown
**Part One:** Go to the Social Security Administration US births website and select the births table there and copy it to your clipboard. Use the pandas read_clipboard function to read the table into Python, and use matplotlib to plot male and female births for the years covered in the data.
###Code
#informing python that ',' indicates thousands
df = pd.read_clipboard(thousands = ',')
df
#plot male and female births for the years covered in the data
plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male')
plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female')
plt.legend(loc = 'upper left')
#plt.axis([1880, 2015, 0, 2500000])
plt.xlabel('Year of birth')
plt.ylabel('No. of births')
plt.title('Total births by Sex and Year')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
###Output
_____no_output_____
###Markdown
plot xkcd style :)with plt.xkcd(): plt.plot(df['Year of birth'], df['Male'], c = 'b', label = 'Male') plt.plot(df['Year of birth'], df['Female'],c = 'r', label = 'Female') plt.legend(loc = 'upper left') plt.xlim(xmax = 2015) plt.xlabel('Year of birth') plt.ylabel('No. of births') plt.title('Male and Female births from 1880 to 2015') plt.show() In the same notebook, use Python to get a list of male and female names from these files. This data is broken down by year of birth.The files contain names data for the years **1881 to 2010**. We aggregate this data into the **"names"** dataframe below.
###Code
years = range(1881,2011)
pieces = []
columns = ['name','sex','births']
for year in years:
path = 'names/yob{0:d}.txt'.format(year)
frame = pd.read_csv(path,names=columns)
frame['year'] = year
pieces.append(frame)
names = pd.concat(pieces, ignore_index=True)
names.head()
names.tail()
###Output
_____no_output_____
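###Markdown
As a quick, added cross-check on the aggregation (not part of the original assignment), total births by year and sex can be computed from the "names" dataframe built above.
###Code
# Total births by year and sex, as a sanity check on the aggregated data
total_births = names.groupby(['year', 'sex'])['births'].sum().unstack('sex')
total_births.tail()
###Output
_____no_output_____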
###Markdown
**Part Two:** Aggregate the data for all years (see the examples in the Pandas notebooks). Use Python Counters to get letter frequencies for male and female names. Use matplotlib to draw a plot that for each letter (x-axis) shows the frequency of that letter (y-axis) as the last letter for both male and female names. The data is already aggregated in the "names" dataframe.Getting separate dataframes for Males and Females.Defining a list for male and female names.
###Code
female_names = names[names.sex == 'F']
male_names = names[names.sex == 'M']
print "For Female names"
print female_names.head()
print "\nFor Male names"
print male_names.tail()
female_list = list(female_names['name'])
male_list = list(male_names['name'])
###Output
For Female names
name sex births year
0 Mary F 6919 1881
1 Anna F 2698 1881
2 Emma F 2034 1881
3 Elizabeth F 1852 1881
4 Margaret F 1658 1881
For Male names
name sex births year
1688779 Zymaire M 5 2010
1688780 Zyonne M 5 2010
1688781 Zyquarius M 5 2010
1688782 Zyran M 5 2010
1688783 Zzyzx M 5 2010
###Markdown
Calculating the letter frequency for male names.
###Code
male_letter_freq = Counter()
#converting every letter to lowercase
for name in map(lambda x:x.lower(),male_names['name']):
for i in name:
male_letter_freq[i] += 1
male_letter_freq
###Output
_____no_output_____
###Markdown
Calculating the letter frequency for female names.
###Code
female_letter_freq = Counter()
#converting every letter to lowercase
for name in map(lambda x:x.lower(),female_names['name']):
for i in name:
female_letter_freq[i] += 1
female_letter_freq
###Output
_____no_output_____
###Markdown
Calculating the last letter frequency for male names.
###Code
male_last_letter_freq = Counter()
for name in male_names['name']:
male_last_letter_freq[name[-1]] += 1
male_last_letter_freq
###Output
_____no_output_____
###Markdown
Calculating the last letter frequency for female names.
###Code
female_last_letter_freq = Counter()
for name in female_names['name']:
female_last_letter_freq[name[-1]] += 1
female_last_letter_freq
###Output
_____no_output_____
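###Markdown
For reference, here is a vectorized pandas alternative to the Counter loops above (an added sketch). Like the loops, it counts names rather than weighting by the births column.
###Code
# Last letter of every female name, counted with the .str accessor
female_last_letters = female_names['name'].str[-1].value_counts()
female_last_letters.head()
###Output
_____no_output_____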
###Markdown
Plot for each letter showing the frequency of that letter as the last letter for both male and female names.I use the OrderedDict class from collections here to arrange the letters present in the counters in ascending order for plotting.
###Code
#for ordering items of counter in ascending order
from collections import OrderedDict
#plot of last letter frequency of male names in ascending order of letters
male_last_letter_freq_asc = OrderedDict(sorted(male_last_letter_freq.items()))
plt.bar(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.values(), align='center')
plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.title('Frequency of last letter for Male names')
plt.show()
#plot of last letter frequency of female names in ascending order of letters
female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items()))
plt.bar(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), align='center')
plt.xticks(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.title('Frequency of last letter for Female names')
plt.show()
female_last_letter_freq_asc = OrderedDict(sorted(female_last_letter_freq.items()))
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'Female')
plt.plot(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.values(), c = 'b', label = 'Male')
plt.xticks(range(len(male_last_letter_freq_asc)), male_last_letter_freq_asc.keys())
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.legend(loc = 'upper right')
plt.title('Frequency of last letter in names by Sex')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
###Output
_____no_output_____
###Markdown
**Part Three:** Now do just female names, but aggregate your data in decade (10-year) increments. Produce a plot that contains the 1880s line, the 1940s line, and the 1990s line, as well as the female line for all years aggregated together from Part Two. Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any. Turn in your ipython notebook file, showing the code you used to complete parts One, Two, and Three.
###Code
#to get the decade lists
#female_1880 = female_names[female_names['year'] < 1890]
#female_1890 = female_names[(female_names['year'] >= 1890) & (female_names['year'] < 1900)]
#female_1900 = female_names[(female_names['year'] >= 1900) & (female_names['year'] < 1910)]
#female_1910 = female_names[(female_names['year'] >= 1910) & (female_names['year'] < 1920)]
#female_1920 = female_names[(female_names['year'] >= 1920) & (female_names['year'] < 1930)]
#female_1930 = female_names[(female_names['year'] >= 1930) & (female_names['year'] < 1940)]
#female_1940 = female_names[(female_names['year'] >= 1940) & (female_names['year'] < 1950)]
#female_1950 = female_names[(female_names['year'] >= 1950) & (female_names['year'] < 1960)]
#female_1960 = female_names[(female_names['year'] >= 1960) & (female_names['year'] < 1970)]
#female_1970 = female_names[(female_names['year'] >= 1970) & (female_names['year'] < 1980)]
#female_1980 = female_names[(female_names['year'] >= 1980) & (female_names['year'] < 1990)]
#female_1990 = female_names[(female_names['year'] >= 1990) & (female_names['year'] < 2000)]
#female_2000 = female_names[(female_names['year'] >= 2000) & (female_names['year'] < 2010)]
#female_2010 = female_names[female_names['year'] >= 2010]
#another earier way to get the decade lists for females
female_1880 = female_names[female_names.year.isin(range(1880,1890))]
female_1890 = female_names[female_names.year.isin(range(1890,1900))]
female_1900 = female_names[female_names.year.isin(range(1900,1910))]
female_1910 = female_names[female_names.year.isin(range(1910,1920))]
female_1920 = female_names[female_names.year.isin(range(1920,1930))]
female_1930 = female_names[female_names.year.isin(range(1930,1940))]
female_1940 = female_names[female_names.year.isin(range(1940,1950))]
female_1950 = female_names[female_names.year.isin(range(1950,1960))]
female_1960 = female_names[female_names.year.isin(range(1960,1970))]
female_1970 = female_names[female_names.year.isin(range(1970,1980))]
female_1980 = female_names[female_names.year.isin(range(1980,1990))]
female_1990 = female_names[female_names.year.isin(range(1990,2000))]
female_2000 = female_names[female_names.year.isin(range(2000,2010))]
female_2010 = female_names[female_names.year.isin(range(2010,2011))] #just the year 2010 present
#to verify sorting of data
print female_1880.head()
print female_1880.tail()
###Output
name sex births year
0 Mary F 6919 1881
1 Anna F 2698 1881
2 Emma F 2034 1881
3 Elizabeth F 1852 1881
4 Margaret F 1658 1881
name sex births year
19627 Wessie F 5 1889
19628 Zepha F 5 1889
19629 Zilpha F 5 1889
19630 Zulema F 5 1889
19631 Zuma F 5 1889
###Markdown
Preparing data for the 1880s.A counter for last letter frequencies.
###Code
female_1880_freq = Counter()
for name in female_1880['name']:
female_1880_freq[name[-1]] += 1
female_1880_freq
###Output
_____no_output_____
###Markdown
Preparing data for the 1940s.A counter for last letter frequencies.
###Code
female_1940_freq = Counter()
for name in female_1940['name']:
female_1940_freq[name[-1]] += 1
female_1940_freq
###Output
_____no_output_____
###Markdown
Preparing data for the 1990s.A counter for last letter frequencies.
###Code
female_1990_freq = Counter()
for name in female_1990['name']:
female_1990_freq[name[-1]] += 1
female_1990_freq
###Output
_____no_output_____
###Markdown
Converting the frequency data from counter to dataframes after sorting the letters alphabetically.
###Code
#for 1880s
first = pd.DataFrame.from_dict((OrderedDict(sorted(female_1880_freq.items()))), orient = 'index').reset_index()
first.columns = ['letter','frequency']
first['decade'] = '1880s'
print first.head()
#for 1940s
second = pd.DataFrame.from_dict((OrderedDict(sorted(female_1940_freq.items()))), orient = 'index').reset_index()
second.columns = ['letter','frequency']
second['decade'] = '1940s'
print second.head()
#for 1990s
third = pd.DataFrame.from_dict((OrderedDict(sorted(female_1990_freq.items()))), orient = 'index').reset_index()
third.columns = ['letter','frequency']
third['decade'] = '1990s'
print third.head()
###Output
letter frequency decade
0 a 4718 1880s
1 c 2 1880s
2 d 111 1880s
3 e 3854 1880s
4 g 15 1880s
letter frequency decade
0 a 20207 1940s
1 b 18 1940s
2 c 14 1940s
3 d 486 1940s
4 e 16640 1940s
letter frequency decade
0 a 71322 1990s
1 b 105 1990s
2 c 159 1990s
3 d 663 1990s
4 e 28064 1990s
###Markdown
Aggregating all required decades (1880s, 1940s, 1990s) into a single dataframe and then into a pivot table for ease in plotting graphs.
###Code
#Aggregate 1880s, 1940s and 1990s frequencies
frames = [first, second, third]
columns = ["letter","frequency", "decade"]
req_decades = pd.DataFrame(pd.concat(frames))
req_decades.columns = columns
print req_decades.head()
print req_decades.tail()
#Get data into a pivot table for ease in plotting
decades_table = pd.pivot_table(req_decades, index=['letter'], values=['frequency'], columns=['decade'])
decades_table.head()
###Output
letter frequency decade
0 a 4718 1880s
1 c 2 1880s
2 d 111 1880s
3 e 3854 1880s
4 g 15 1880s
letter frequency decade
21 v 19 1990s
22 w 93 1990s
23 x 157 1990s
24 y 10689 1990s
25 z 348 1990s
###Markdown
Plot of the last letter of female names for the 1880s, 1940s, and 1990s, and for all years (from Part Two).
###Code
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Frequency of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Frequency')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
###Output
_____no_output_____
###Markdown
The graph has extreme variation between highs and lows.Plotting the frequencies on a logarithmic scale takes care of this and makes comparison easier.
###Code
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table['frequency'].plot(kind = 'bar', rot = 0, logy = 'True',color = c, title = 'Log(Frequency) of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Log(Frequency)')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
###Output
_____no_output_____
###Markdown
**Evaluate how stable this statistic is. Speculate on why it is stable, if it is, or on what demographic facts might explain any changes, if there are any.** We can normalize the table by the total count for each decade to compute a new table containing, for each decade, the proportion ending in each letter.
###Code
decades_table.sum()
#plot the decades as bars and the female line for all years as a line
c = ['m','g','c']
decades_table_prop = decades_table/decades_table.sum().astype(float)
decades_table_prop['frequency'].plot(kind = 'bar', rot = 0,color = c, title = 'Normalized Frequency of Last letter of Female names by Female Births')
#the female line for all years taken from part 2
#plt.plot(range(len(female_last_letter_freq_asc)), female_last_letter_freq_asc.values(), c = 'r', label = 'All Female births')
plt.xlabel('Letters')
plt.ylabel('Normalized Frequency')
plt.legend(loc = 'best')
#double the size of plot for visibility
size = 2
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches((plSize[0]*size, plSize[1]*size))
plt.show()
###Output
_____no_output_____ |
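###Markdown
As an added sketch for judging stability, the normalized proportions for a few common last letters can be pulled out of `decades_table_prop` directly (assuming the letters chosen below appear in the table's index, which they do for these decades).
###Code
# Proportions for a few common last letters across the three decades
decades_table_prop['frequency'].loc[['a', 'e', 'n', 'y']]
###Output
_____no_output_____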
ROS/Kinect2/solveP.ipynb | ###Markdown
Extrinsic
###Code
import json
# cv2 (OpenCV) and numpy are used below but are not imported anywhere in this
# notebook excerpt, so they are imported here
import cv2
import numpy as np
json_file = '/media/commaai-03/Data/tmp2/c1226/bg/c0.json'
bg = '/media/commaai-03/Data/tmp2/c1226/bg/c0.jpg'
with open(json_file, 'r') as f:
c0_info = json.load(f, encoding='utf-8')
_objPoints = {}
for i in range(5):
for j in range(5):
#
label1 = 'p%d-p%d'%(i,j)
label2 = 'p%d-n%d'%(i,j)
label3 = 'n%d-p%d'%(i,j)
label4 = 'n%d-n%d'%(i,j)
#
_objPoints[label1] = [i*0.5, j*0.5, 0]
_objPoints[label2] = [i*0.5, -j*0.5, 0]
_objPoints[label3] = [-i*0.5, j*0.5, 0]
_objPoints[label4] = [-i*0.5, -j*0.5, 0]
_objPoints.items()[:5]
c0_info.keys()
objps = []
imgps = []
for shape in c0_info['shapes']:
point = shape['points'][0]
objP = _objPoints[shape['label']]
imgps.append(point)
objps.append(objP)
imgps = np.array(imgps, dtype=np.float32)
objps = np.array(objps, dtype=np.float32)
# K (3x3 intrinsic camera matrix) and D (distortion coefficients) are assumed
# to come from a prior intrinsic calibration step; they are not defined in this
# notebook excerpt
ret, rvec, tvec, inliers \
    = cv2.solvePnPRansac(objps,
                         imgps,
                         K,
                         D)
rvec
tvec
dst, jac = cv2.Rodrigues(rvec)
o = np.float32([[1.0, 0.5, 0]]).reshape(-1,3)
imgpts, jac = cv2.projectPoints(o, rvec, tvec, K, D)
x, y = int(tuple(imgpts[0].ravel())[0]), int(tuple(imgpts[0].ravel())[1])
_img = cv2.imread(bg)
p_img = cv2.circle(_img, (x, y), 3, (0,0,255), -1)
while 1:
cv2.imshow('P', p_img)
k = cv2.waitKey(1)
if k == 27:
cv2.destroyAllWindows()
break
###Output
_____no_output_____ |
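###Markdown
As an added sketch (not in the original notebook), the estimated pose can be packed into a single 4x4 homogeneous extrinsic matrix, reusing `dst` (the rotation matrix from `cv2.Rodrigues`) and `tvec` from the cells above.
###Code
# Build a 4x4 homogeneous extrinsic matrix [R | t; 0 0 0 1]
T = np.eye(4)
T[:3, :3] = dst
T[:3, 3] = tvec.ravel()
T
###Output
_____no_output_____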
tutorials/simulators/7_matrix_product_state_method.ipynb | ###Markdown
Matrix product state simulation method Simulation methodsThe `AerSimulator` has several simulation methods including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation.In this tutorial, we focus on the `matrix product state simulation method`. Matrix product state simulation methodThis simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477. A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_1\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$.The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.The matrix product state (MPS) representation offers a local representation, in the form:$\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]}\ldots \Gamma^{[n-1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$, can be generated out of the MPS representation. Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor. Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, that creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive, require a series of swap gates to bring the two qubits next to each other and then the reverse swaps. In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research. Using the matrix product state simulation methodThe matrix product state simulation method is invoked in the `AerSimulator` by setting the simulation method. Other than that, all operations are controlled by the `AerSimulator` itself, as in the following example:
###Code
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, transpile
from qiskit.providers.aer import AerSimulator
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the AerSimulator from the Aer provider
simulator = AerSimulator(method='matrix_product_state')
# Run and get counts, using the matrix_product_state method
tcirc = transpile(circ, simulator)
result = simulator.run(tcirc).result()
counts = result.get_counts(0)
counts
###Output
_____no_output_____
###Markdown
To see the internal state vector of the circuit we can use the `save_statevector` instruction. To return the full internal MPS structure we can also use the `save_matrix_product_state` instruction.
###Code
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
# Define a snapshot that shows the current state vector
circ.save_statevector(label='my_sv')
circ.save_matrix_product_state(label='my_mps')
circ.measure([0,1], [0,1])
# Execute and get saved data
tcirc = transpile(circ, simulator)
result = simulator.run(tcirc).result()
data = result.data(0)
#print the result data
data
###Output
_____no_output_____
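###Markdown
Aer also exposes options that control the MPS truncation. The option names used below (`matrix_product_state_max_bond_dimension` and `matrix_product_state_truncation_threshold`) are taken from recent qiskit-aer releases and should be checked against your installed version; treat this as an illustrative sketch rather than part of the original tutorial.
###Code
# Illustrative only: cap the bond dimension and set a truncation threshold
mps_sim = AerSimulator(method='matrix_product_state',
                       matrix_product_state_max_bond_dimension=16,
                       matrix_product_state_truncation_threshold=1e-8)
tcirc_mps = transpile(circ, mps_sim)
mps_sim.run(tcirc_mps).result().get_counts(0)
###Output
_____no_output_____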
###Markdown
Running circuits using the matrix product state simulation method can be fast, relative to other methods. However, if we generate the state vector during the execution, then the conversion to state vector is, of course, exponential in memory and time, and therefore we don't benefit from using this method. We can benefit if we only do operations that don't require printing the full state vector. For example, if we run a circuit and then take measurements. The circuit below has 50 qubits. We create an `EPR state` involving all these qubits. Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states. We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! Or maybe even 1000 (you can get a cup of coffee while waiting).
###Code
num_qubits = 50
circ = QuantumCircuit(num_qubits, num_qubits)
# Create EPR state
circ.h(0)
for i in range (0, num_qubits-1):
circ.cx(i, i+1)
# Measure
circ.measure(range(num_qubits), range(num_qubits))
tcirc = transpile(circ, simulator)
result = simulator.run(tcirc).result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____
###Markdown
Matrix product state simulation method Simulation methodsThe `QasmSimulator` has several simulation methods including `statevector`, `stabilizer`, `extended_stabilizer` and `matrix_product_state`. Each of these determines the internal representation of the quantum circuit and the algorithms used to process the quantum operations. They each have advantages and disadvantages, and choosing the best method is a matter of investigation.In this tutorial, we focus on the `matrix product state simulation method`. Matrix product state simulation methodThis simulation method is based on the concept of `matrix product states`. This structure was initially proposed in the paper *Efficient classical simulation of slightly entangled quantum computations* by Vidal in https://arxiv.org/abs/quant-ph/0301063. There are additional papers that describe the structure in more detail, for example *The density-matrix renormalization group in the age of matrix product states* by Schollwoeck https://arxiv.org/abs/1008.3477. A pure quantum state is usually described as a state vector, by the expression $|\psi\rangle = \sum_{i_1=0}^1 {\ldots} \sum_{i_n=0}^1 c_{i_1 \ldots i_n} |i_i\rangle {\otimes} {\ldots} {\otimes} |i_n\rangle$.The state vector representation implies an exponential size representation, regardless of the actual circuit. Every quantum gate operating on this representation requires exponential time and memory.The matrix product state (MPS) representation offers a local representation, in the form:$\Gamma^{[1]} \lambda^{[1]} \Gamma^{[2]} \lambda^{[2]}\ldots \Gamma^{[1]} \lambda^{[n-1]} \Gamma^{[n]}$, such that all the information contained in the $c_{i_1 \ldots i_n}$, can be generated out of the MPS representation. Every $\Gamma^{[i]}$ is a tensor of complex numbers that represents qubit $i$. Every $\lambda^{[i]}$ is a matrix of real numbers that is used to normalize the amplitudes of qubits $i$ and $i+1$. Single-qubit gates operate only on the relevant tensor. Two-qubit gates operate on consecutive qubits $i$ and $i+1$. This involves a tensor-contract operation over $\lambda^{[i-1]}$, $\Gamma^{[i-1]}$, $\lambda^{[i]}$, $\Gamma^{[i+1]}$ and $\lambda^{[i+1]}$, that creates a single tensor. We apply the gate to this tensor, and then decompose back to the original structure. This operation may increase the size of the respective tensors. Gates that involve two qubits that are not consecutive, require a series of swap gates to bring the two qubits next to each other and then the reverse swaps. In the worst case, the tensors may grow exponentially. However, the size of the overall structure remains 'small' for circuits that do not have 'many' two-qubit gates. This allows much more efficient operations in circuits with relatively 'low' entanglement. Characterizing when to use this method over other methods is a subject of current research. Using the matrix product state simulation methodThe matrix product state simulation method is invoked in the `QasmSimulator` by setting the `simulation_method`. Other than that, all operations are controlled by the `QasmSimulator` itself, as in the following example:
###Code
import numpy as np
# Import Qiskit
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister
from qiskit import Aer, execute
from qiskit.providers.aer import QasmSimulator
# Construct quantum circuit
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
circ.measure([0,1], [0,1])
# Select the QasmSimulator from the Aer provider
simulator = Aer.get_backend('qasm_simulator')
# Define the simulation method
backend_opts_mps = {"method":"matrix_product_state"}
# Execute and get counts, using the matrix_product_state method
result = execute(circ, simulator, backend_options=backend_opts_mps).result()
counts = result.get_counts(circ)
counts
###Output
_____no_output_____
###Markdown
To see the internal state vector of the circuit, we can use the snapshot instruction:
###Code
from qiskit.extensions.simulator import Snapshot
from qiskit.extensions.simulator.snapshot import snapshot
circ = QuantumCircuit(2, 2)
circ.h(0)
circ.cx(0, 1)
# Define a snapshot that shows the current state vector
circ.snapshot('my_sv', snapshot_type='statevector')
circ.measure([0,1], [0,1])
# Execute
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
res = result.results
#print the state vector
statevector = res[0].data.snapshots.statevector
statevector['my_sv']
result.get_counts()
###Output
_____no_output_____
###Markdown
Running circuits using the matrix product state simulation method can be fast, relative to other methods. However, if we generate the state vector during the execution, then the conversion to state vector is, of course, exponential in memory and time, and therefore we don't benefit from using this method. We can benefit if we only do operations that don't require printing the full state vector, for example, running a circuit and then taking measurements. The circuit below has 50 qubits. We create an `EPR state` involving all these qubits. Although this state is highly entangled, it is handled well by the matrix product state method, because there are effectively only two states. We can handle more qubits than this, but execution may take a few minutes. Try running a similar circuit with 500 qubits! Or maybe even 1000 (you can get a cup of coffee while waiting).
###Code
num_qubits = 50
qr = QuantumRegister(num_qubits)
cr = ClassicalRegister(num_qubits)
circ = QuantumCircuit(qr, cr)
# Create EPR state
circ.h(qr[0])
for i in range (0,num_qubits-1):
circ.cx(qr[i], qr[i+1])
# Measure
circ.measure(qr, cr)
job_sim = execute([circ], QasmSimulator(), backend_options=backend_opts_mps)
result = job_sim.result()
print("Time taken: {} sec".format(result.time_taken))
result.get_counts()
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
###Output
_____no_output_____ |
notebooks/AE_2_CNN_011.ipynb | ###Markdown
Import libraries
###Code
import os
import random
import numpy as np
import pandas as pd
import optuna
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score, f1_score, classification_report
from sklearn.model_selection import KFold, StratifiedKFold
from imblearn.over_sampling import SMOTE # smote
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.utils import plot_model, to_categorical
from tensorflow.keras.layers import Input, Dense, Conv2D, Activation
from tensorflow.keras.layers import MaxPooling2D, UpSampling2D, BatchNormalization, Dropout, GlobalAveragePooling2D
from tensorflow.keras.models import Model, Sequential
from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.image import ImageDataGenerator
def set_randvalue(value):
# Set a seed value
seed_value= value
# 1. Set `PYTHONHASHSEED` environment variable at a fixed value
os.environ['PYTHONHASHSEED']=str(seed_value)
# 2. Set `python` built-in pseudo-random generator at a fixed value
random.seed(seed_value)
# 3. Set `numpy` pseudo-random generator at a fixed value
np.random.seed(seed_value)
# 4. Set `tensorflow` pseudo-random generator at a fixed value
tf.random.set_seed(seed_value)
set_randvalue(42)
###Output
_____no_output_____
###Markdown
Dataset preprocessing and EDA
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() # load data
x_train,x_test = x_train.astype('float32')/255.0,x_test.astype('float32')/255.0 # normalization
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
y_train
###Output
_____no_output_____
###Markdown
Limit three class preprocessing
###Code
# Keras has no method to get the CIFAR-10 category name from its numeric label, so define them here
cifar10_labels = np.array([
'airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck'])
bird_num = np.where(cifar10_labels=='bird')
deer_num = np.where(cifar10_labels=='deer')
truck_num = np.where(cifar10_labels=='truck')
limit_num = 2500
# get limit label indexes
bird_indexes = [i for i, label in enumerate(y_train) if label == bird_num]
deer_indexes = [i for i, label in enumerate(y_train) if label == deer_num]
truck_indexes = [i for i, label in enumerate(y_train) if label == truck_num]
other_indexes = [i for i, label in enumerate(y_train) if label not in [bird_num, deer_num, truck_num]]
# limit
bird_indexes = bird_indexes[:limit_num]
deer_indexes = deer_indexes[:limit_num]
truck_indexes = truck_indexes[:limit_num]
print(f'Bird label num is {len(bird_indexes)}') # 2500
print(f'Deer label num is {len(deer_indexes)}') # 2500
print(f'Truck label num is {len(truck_indexes)}') # 2500
print(f'Other label num is {len(other_indexes)}') # 35000; 5000*7
# merge and sort
merge_indexes = np.concatenate([other_indexes, bird_indexes, deer_indexes, truck_indexes], 0)
merge_indexes.sort()
print(f'Train label num is {len(merge_indexes)}') # 42500
# create three labels removed train data
x_train_removed = np.zeros((len(merge_indexes), 32, 32, 3))
y_train_removed = np.zeros(len(merge_indexes))
for i, train_index in enumerate(merge_indexes):
x_train_removed[i] = x_train[train_index]
y_train_removed[i] = y_train[train_index]
print(x_train_removed.shape)
print(y_train_removed.shape)
print(x_train_removed.shape)
print(y_train_removed.shape)
del x_train
del y_train
df = pd.DataFrame(y_train_removed.flatten())
print(df.value_counts())
del df
import matplotlib.pyplot as plt
# plot data labels
plt.hist(y_train_removed.flatten())
###Output
_____no_output_____
###Markdown
AutoEncoder Load AE models weight
###Code
# Batch Norm Model
def create_AE01_model(k_size):
input_img = Input(shape=(32, 32, 3)) # 0
conv1 = Conv2D(64, (k_size, k_size), padding='same', name="Dense_AE01_1")(input_img) # 1
conv1 = BatchNormalization(name="BN_AE01_1")(conv1) # 2
conv1 = Activation('relu', name="Relu_AE01_1")(conv1) # 3
decoded = Conv2D(3, (k_size, k_size), padding='same', name="Dense_AE01_2")(conv1) # 4
decoded = BatchNormalization(name="BN_AE01_2")(decoded) # 5
decoded = Activation('relu', name="Relu_AE01_2")(decoded) # 6
return Model(input_img, decoded)
class AE01():
def __init__(self, ksize, optimizer):
self.optimizer = optimizer
self.autoencoder = create_AE01_model(ksize)
self.encoder = None
def compile(self, optimizer='adam', loss='binary_crossentropy'):
self.autoencoder.compile(optimizer=self.optimizer, loss=loss)
def train(self, x_train=None, x_test=None, epochs=1, batch_size=32, shuffle=True):
es_cb = EarlyStopping(monitor='val_loss', patience=2, verbose=1, mode='auto')
ae_model_path = '../models/AE/AE01_AE_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = ae_model_path, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
history = self.autoencoder.fit(x_train, x_train,
epochs=epochs,
batch_size=batch_size,
shuffle=shuffle,
callbacks=[es_cb, cp_cb],
validation_data=(x_test, x_test))
self.autoencoder.load_weights(ae_model_path)
self.encoder = Model(self.autoencoder.input, self.autoencoder.get_layer('Relu_AE01_1').output)
encode_model_path = '../models/AE/AE01_Encoder_Best.hdf5'
self.encoder.save(encode_model_path)
return history
def load_weights(self, ae_model_path, encode_model_path):
self.autoencoder.load_weights(ae_model_path)
self.encoder = Model(self.autoencoder.input, self.autoencoder.get_layer('Relu_AE01_1').output)
self.encoder.load_weights(encode_model_path)
ae_ksize = 3
ae_optimizer = 'rmsprop'
stack01 = AE01(ae_ksize, ae_optimizer)
stack01.load_weights('../models/AE/AE01_AE_Best.hdf5', '../models/AE/AE01_Encoder_Best.hdf5')
stack01.encoder.trainable = False
stack01.encoder.summary()
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 32, 32, 3)] 0
_________________________________________________________________
Dense_AE01_1 (Conv2D) (None, 32, 32, 64) 1792
_________________________________________________________________
BN_AE01_1 (BatchNormalizatio (None, 32, 32, 64) 256
_________________________________________________________________
Relu_AE01_1 (Activation) (None, 32, 32, 64) 0
=================================================================
Total params: 2,048
Trainable params: 0
Non-trainable params: 2,048
_________________________________________________________________
###Markdown
Train Create Model AE to CNN
###Code
def create_StackedAE01_CNN01_model(encoder):
input_img = encoder.input
output = encoder.layers[-1].output # 32,32,64
x = Conv2D(64,(3,3),padding = "same",activation= "relu")(output)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x) # 16,16,64
x = Conv2D(128,(3,3),padding = "same",activation= "relu")(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = Conv2D(128,(3,3),padding = "same",activation= "relu")(x)
x = BatchNormalization()(x)
x = Activation('relu')(x)
x = MaxPooling2D(pool_size=(2, 2))(x) # 8,8,128
x = GlobalAveragePooling2D()(x)
x = Dense(512)(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
y = Dense(10,activation = "softmax")(x)
return Model(input_img, y)
###Output
_____no_output_____
###Markdown
Train SMOTE without data augmentation
###Code
%%time
# train
saveDir = "../models/CNN/"
histories = []
nb_classes = 10
predicts = np.zeros((10000, 10))
cv_acc = 0
cv_f1 = 0
# cross validation
# Define the K-fold Cross Validator
n_splits = 3
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True)
fold_no = 1
for train_index, test_index in kfold.split(x_train_removed, y_train_removed):
# model instance
model01 = create_StackedAE01_CNN01_model(stack01.encoder)
    adam = Adam()  # default
model01.compile(loss = "categorical_crossentropy", optimizer = adam, metrics = ["accuracy"])
x_train_ = x_train_removed[train_index]
y_train_ = y_train_removed[train_index]
x_valid_ = x_train_removed[test_index]
y_valid_ = y_train_removed[test_index]
# over sampling smote
sm = SMOTE(random_state=42)
x_train_smote = x_train_.reshape(x_train_.shape[0],
x_train_.shape[1]*x_train_.shape[2]*x_train_.shape[3])
y_train_smote = y_train_.reshape(y_train_.shape[0])
x_train_smote, y_train_smote = SMOTE().fit_resample(x_train_smote, y_train_smote)
x_train_smote = x_train_smote.reshape((x_train_smote.shape[0],
x_train_removed.shape[1],
x_train_removed.shape[2],
x_train_removed.shape[3]))
# one hot encoding
y_train_onehot_smote = to_categorical(y_train_smote, nb_classes)
y_valid_onehot = to_categorical(y_valid_, nb_classes)
y_test_onehot = to_categorical(y_test, nb_classes)
# callback
es_cb = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto')
chkpt = saveDir + 'Model_023_' + str(fold_no) + '_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = chkpt, \
monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
model01_history = model01.fit(x_train_smote, y_train_onehot_smote,
batch_size=32,
epochs=400,
verbose=1,
validation_data=(x_valid_, y_valid_onehot),
callbacks=[es_cb, cp_cb],
shuffle=True)
# inference
model01.load_weights(chkpt)
scores = model01.evaluate(x_valid_, y_valid_onehot)
# CV value
cv_acc += scores[1]*100
y_valid_pred = model01.predict(x_valid_)
y_valid_pred = np.argmax(y_valid_pred, axis=1)
cv_f1 += f1_score(y_valid_, y_valid_pred, average='macro')*100
print(f'Score for fold {fold_no}: {model01.metrics_names[0]} of {scores[0]}; {model01.metrics_names[1]} of {scores[1]*100}%')
predict = model01.predict(x_test)
predicts += predict
histories.append(model01_history.history)
fold_no += 1
ensemble_histories = histories
ensemble_predicts = predicts
ensemble_predicts_ = ensemble_predicts / n_splits
y_pred = np.argmax(ensemble_predicts_, axis=1)
print(classification_report(y_test, y_pred))
print(f'CV ACC is {cv_acc//n_splits}, CV macro F1 is {cv_f1//n_splits}')
###Output
CV ACC is 73.0,n_splits CV macro F1 is 71.0
###Markdown
Train SMOTE with data augmentation
###Code
# train
saveDir = "../models/CNN/"
histories = []
nb_classes = 10
predicts = np.zeros((10000, 10))
cv_acc = 0
cv_f1 = 0
# cross validation
# Define the K-fold Cross Validator
n_splits = 3
kfold = StratifiedKFold(n_splits=n_splits, shuffle=True)
fold_no = 1
for train_index, test_index in kfold.split(x_train_removed, y_train_removed):
# model instance
model02 = create_StackedAE01_CNN01_model(stack01.encoder)
    adam = Adam()  # default
model02.compile(loss = "categorical_crossentropy", optimizer = adam, metrics = ["accuracy"])
x_train_ = x_train_removed[train_index]
y_train_ = y_train_removed[train_index]
x_valid_ = x_train_removed[test_index]
y_valid_ = y_train_removed[test_index]
# over sampling smote
sm = SMOTE(random_state=42)
x_train_smote = x_train_.reshape(x_train_.shape[0],
x_train_.shape[1]*x_train_.shape[2]*x_train_.shape[3])
y_train_smote = y_train_.reshape(y_train_.shape[0])
x_train_smote, y_train_smote = SMOTE().fit_resample(x_train_smote, y_train_smote)
x_train_smote = x_train_smote.reshape((x_train_smote.shape[0],
x_train_removed.shape[1],
x_train_removed.shape[2],
x_train_removed.shape[3]))
# one hot encoding
y_train_onehot_smote = to_categorical(y_train_smote, nb_classes)
y_valid_onehot = to_categorical(y_valid_, nb_classes)
y_test_onehot = to_categorical(y_test, nb_classes)
# callback
es_cb = EarlyStopping(monitor='val_loss', patience=3, verbose=1, mode='auto')
chkpt = saveDir + 'Model_024_' + str(fold_no) + '_Best.hdf5'
cp_cb = ModelCheckpoint(filepath = chkpt, \
monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
# create generator
train_datagen = ImageDataGenerator(
# rescale=1./255,
# rotation_range=10,
# shear_range=0.2,
horizontal_flip=True,
# vertical_flip=True,
# width_shift_range=0.1,
# height_shift_range=0.1,
zoom_range=0.1
# channel_shift_range=0.2
)
batch_size = 32
train_datagenerator = train_datagen.flow(x_train_smote, y_train_onehot_smote, batch_size)
valid_datagenerator = ImageDataGenerator().flow(x_valid_, y_valid_onehot, batch_size)
model02_history = model02.fit_generator(train_datagenerator,
steps_per_epoch=int(len(x_train_smote)//batch_size),
epochs=400,
validation_data=valid_datagenerator,
validation_steps=int(len(x_valid_)//batch_size),
verbose=1,
shuffle=True,
callbacks=[es_cb, cp_cb])
# inference
model02.load_weights(chkpt)
scores = model02.evaluate(x_valid_, y_valid_onehot)
# CV value
cv_acc += scores[1]*100
y_valid_pred = model02.predict(x_valid_)
y_valid_pred = np.argmax(y_valid_pred, axis=1)
cv_f1 += f1_score(y_valid_, y_valid_pred, average='macro')*100
print(f'Score for fold {fold_no}: {model02.metrics_names[0]} of {scores[0]}; {model02.metrics_names[1]} of {scores[1]*100}%')
# oof prediction
predict = model02.predict(x_test)
predicts += predict
histories.append(model02_history.history)
fold_no += 1
ensemble_dataaug_histories = histories
ensemble_dataaug_predicts = predicts
ensemble_dataaug_predicts_ = ensemble_dataaug_predicts / n_splits
y_pred = np.argmax(ensemble_dataaug_predicts_, axis=1)
print(classification_report(y_test, y_pred))
print(f'CV ACC is {cv_acc//n_splits}, CV macro F1 is {cv_f1//n_splits}')
###Output
CV ACC is 76.0, CV macro F1 is 75.0
|
pytorch/cnn/4.1-classify-FashionMNIST-exercise.ipynb | ###Markdown
CNN for Classification---In this notebook, we define **and train** a CNN to classify images from the [Fashion-MNIST database](https://github.com/zalandoresearch/fashion-mnist). Load the [data](http://pytorch.org/docs/master/torchvision/datasets.html)In this cell, we load in both **training and test** datasets from the FashionMNIST class.
###Code
# our basic libraries
import torch
import torchvision
# data loading and transforming
from torchvision.datasets import FashionMNIST
from torch.utils.data import DataLoader
from torchvision import transforms
# The output of torchvision datasets are PILImage images of range [0, 1].
# We transform them to Tensors for input into a CNN
## Define a transform to read the data in as a tensor
data_transform = transforms.ToTensor()
# choose the training and test datasets
train_data = FashionMNIST(root='./data', train=True,
download=True, transform=data_transform)
test_data = FashionMNIST(root='./data', train=False,
download=True, transform=data_transform)
# Print out some stats about the training and test data
print('Train data, number of images: ', len(train_data))
print('Test data, number of images: ', len(test_data))
# prepare data loaders, set the batch_size
## TODO: you can try changing the batch_size to be larger or smaller
## when you get to training your network, see how batch_size affects the loss
batch_size = 20
train_loader = DataLoader(train_data, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)
# specify the image classes
classes = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
###Output
_____no_output_____
###Markdown
Visualize some training dataThis cell iterates over the training dataset, loading a random batch of image/label data, using `dataiter.next()`. It then plots the batch of images and labels in a `2 x batch_size/2` grid.
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
os.environ['KMP_DUPLICATE_LIB_OK']='True'
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size//2, idx+1, xticks=[], yticks=[])  # integer division keeps the grid spec valid
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
Define the network architectureThe various layers that make up any neural network are documented, [here](http://pytorch.org/docs/master/nn.html). For a convolutional neural network, we'll use a simple series of layers:* Convolutional layers* Maxpooling layers* Fully-connected (linear) layersYou are also encouraged to look at adding [dropout layers](http://pytorch.org/docs/stable/nn.htmldropout) to avoid overfitting this data.---To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the feedforward behavior of a network that employs those initialized layers in the function `forward`, which takes in an input image tensor, `x`. The structure of this Net class is shown below and left for you to fill in.Note: During training, PyTorch will be able to perform backpropagation by keeping track of the network's feedforward behavior and using autograd to calculate the update to the weights in the network. Define the Layers in ` __init__`As a reminder, a conv/pool layer may be defined like this (in `__init__`):``` 1 input image channel (for grayscale images), 32 output channels/feature maps, 3x3 square convolution kernelself.conv1 = nn.Conv2d(1, 32, 3) maxpool that uses a square window of kernel_size=2, stride=2self.pool = nn.MaxPool2d(2, 2) ``` Refer to Layers in `forward`Then referred to in the `forward` function like this, in which the conv1 layer has a ReLu activation applied to it before maxpooling is applied:```x = self.pool(F.relu(self.conv1(x)))```You must place any layers with trainable weights, such as convolutional layers, in the `__init__` function and refer to them in the `forward` function; any layers or functions that always behave in the same way, such as a pre-defined activation function, may appear *only* in the `forward` function. In practice, you'll often see conv/pool layers defined in `__init__` and activations defined in `forward`. Convolutional layerThe first convolution layer has been defined for you, it takes in a 1 channel (grayscale) image and outputs 10 feature maps as output, after convolving the image with 3x3 filters. FlatteningRecall that to move from the output of a convolutional/pooling layer to a linear layer, you must first flatten your extracted features into a vector. If you've used the deep learning library, Keras, you may have seen this done by `Flatten()`, and in PyTorch you can flatten an input `x` with `x = x.view(x.size(0), -1)`. TODO: Define the rest of the layersIt will be up to you to define the other layers in this network; we have some recommendations, but you may change the architecture and parameters as you see fit.Recommendations/tips:* Use at least two convolutional layers* Your output must be a linear layer with 10 outputs (for the 10 classes of clothing)* Use a dropout layer to avoid overfitting
###Code
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel (grayscale), 10 output channels/feature maps
# 3x3 square convolution kernel
## output size = (W-F)/S +1 = (28-3)/1 +1 = 26
# the output Tensor for one image, will have the dimensions: (10, 26, 26)
# after one pool layer, this becomes (10, 13, 13)
self.conv1 = nn.Conv2d(1, 10, 3)
# maxpool layer
# pool with kernel_size=2, stride=2
self.pool = nn.MaxPool2d(2, 2)
# second conv layer: 10 inputs, 20 outputs, 3x3 conv
## output size = (W-F)/S +1 = (13-3)/1 +1 = 11
# the output tensor will have dimensions: (20, 11, 11)
# after another pool layer this becomes (20, 5, 5); 5.5 is rounded down
self.conv2 = nn.Conv2d(10, 20, 3)
# 20 outputs * the 5*5 filtered/pooled map size
# 10 output channels (for the 10 classes)
self.fc1 = nn.Linear(20*5*5, 50)
self.fc1_drop = nn.Dropout(p=0.4)
self.fc2 = nn.Linear(50, 10)
# define the feedforward behavior
def forward(self, x):
# two conv/relu + pool layers
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
# prep for linear layer
# flatten the inputs into a vector
x = x.view(x.size(0), -1)
# two linear layer
x = self.fc1(x)
x = self.fc1_drop(x)
x = self.fc2(x)
# final output
return x
# instantiate and print your Net
net = Net()
print(net)
###Output
Net(
(conv1): Conv2d(1, 10, kernel_size=(3, 3), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(10, 20, kernel_size=(3, 3), stride=(1, 1))
(fc1): Linear(in_features=500, out_features=50, bias=True)
(fc1_drop): Dropout(p=0.4)
(fc2): Linear(in_features=50, out_features=10, bias=True)
)
###Markdown
TODO: Specify the loss function and optimizerLearn more about [loss functions](http://pytorch.org/docs/master/nn.htmlloss-functions) and [optimizers](http://pytorch.org/docs/master/optim.html) in the online documentation.Note that for a classification problem like this, one typically uses cross entropy loss, which can be defined in code like: `criterion = nn.CrossEntropyLoss()`. PyTorch also includes some standard stochastic optimizers like stochastic gradient descent and Adam. You're encouraged to try different optimizers and see how your model responds to these choices as it trains.
###Code
import torch.optim as optim
## TODO: specify loss function (try categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
## TODO: specify optimizer
optimizer = optim.SGD(net.parameters(), lr=1e-2, momentum=0.9)
###Output
_____no_output_____
###Markdown
A note on accuracyIt's interesting to look at the accuracy of your network **before and after** training. This way you can really see that your network has learned something. In the next cell, let's see what the accuracy of an untrained network is (we expect it to be around 10% which is the same accuracy as just guessing for all 10 classes).
###Code
# Calculate accuracy before training
correct = 0
total = 0
# Iterate through test dataset
for images, labels in test_loader:
# forward pass to get outputs
# the outputs are a series of class scores
outputs = net(images)
# get the predicted class from the maximum value in the output-list of class scores
_, predicted = torch.max(outputs.data, 1)
# count up total number of correct labels
# for which the predicted and true labels are equal
total += labels.size(0)
correct += (predicted == labels).sum()
# calculate the accuracy
accuracy = 100 * correct / total
# print it out!
print('Accuracy before training: ', accuracy)
###Output
Accuracy before training: tensor(9)
###Markdown
Train the NetworkBelow, we've defined a `train` function that takes in a number of epochs to train for. The number of epochs is how many times a network will cycle through the training dataset. Here are the steps that this training function performs as it iterates over the training dataset:1. Zeroes the gradients to prepare for a forward pass2. Passes the input through the network (forward pass)3. Computes the loss (how far the predicted classes are from the correct labels)4. Propagates gradients back into the network’s parameters (backward pass)5. Updates the weights (parameter update)6. Prints out the calculated loss
###Code
def train(n_epochs):
for epoch in range(n_epochs): # loop over the dataset multiple times
running_loss = 0.0
for batch_i, data in enumerate(train_loader):
# get the input images and their corresponding labels
inputs, labels = data
# zero the parameter (weight) gradients
optimizer.zero_grad()
# forward pass to get outputs
outputs = net(inputs)
# calculate the loss
loss = criterion(outputs, labels)
# backward pass to calculate the parameter gradients
loss.backward()
# update the parameters
optimizer.step()
# print loss statistics
# to convert loss into a scalar and add it to running_loss, we use .item()
running_loss += loss.item()
if batch_i % 1000 == 999: # print every 1000 mini-batches
print('Epoch: {}, Batch: {}, Avg. Loss: {}'.format(epoch + 1, batch_i+1, running_loss/1000))
running_loss = 0.0
print('Finished Training')
# define the number of epochs to train for
n_epochs = 5 # start small to see if your model works, initially
# call train
train(n_epochs)
###Output
Epoch: 1, Batch: 1000, Avg. Loss: 0.8336686544567347
Epoch: 1, Batch: 2000, Avg. Loss: 0.5402662580311298
Epoch: 1, Batch: 3000, Avg. Loss: 0.480510668694973
Epoch: 2, Batch: 1000, Avg. Loss: 0.44284135618805887
Epoch: 2, Batch: 2000, Avg. Loss: 0.42459061155840755
Epoch: 2, Batch: 3000, Avg. Loss: 0.4012631203774363
Epoch: 3, Batch: 1000, Avg. Loss: 0.3853324173353612
Epoch: 3, Batch: 2000, Avg. Loss: 0.37787555013224483
Epoch: 3, Batch: 3000, Avg. Loss: 0.3745156655535102
Epoch: 4, Batch: 1000, Avg. Loss: 0.36138682106882336
Epoch: 4, Batch: 2000, Avg. Loss: 0.36682122997101396
Epoch: 4, Batch: 3000, Avg. Loss: 0.36240479587204755
Epoch: 5, Batch: 1000, Avg. Loss: 0.3454405435901135
Epoch: 5, Batch: 2000, Avg. Loss: 0.3519078265577555
Epoch: 5, Batch: 3000, Avg. Loss: 0.35703855950571595
Finished Training
###Markdown
Test the Trained NetworkOnce you are satisfied with how the loss of your model has decreased, there is one last step: test!You must test your trained model on a previously unseen dataset to see if it generalizes well and can accurately classify this new dataset. For FashionMNIST, which contains many pre-processed training images, a good model should reach **greater than 85% accuracy** on this test dataset. If you are not reaching this value, try training for a larger number of epochs, tweaking your hyperparameters, or adding/subtracting layers from your CNN.
###Code
# initialize tensor and lists to monitor test loss and accuracy
test_loss = torch.zeros(1)
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
# set the module to evaluation mode
net.eval()
for batch_i, data in enumerate(test_loader):
# get the input images and their corresponding labels
inputs, labels = data
# forward pass to get outputs
outputs = net(inputs)
# calculate the loss
loss = criterion(outputs, labels)
# update average test loss
test_loss = test_loss + ((torch.ones(1) / (batch_i + 1)) * (loss.data - test_loss))
# get the predicted class from the maximum value in the output-list of class scores
_, predicted = torch.max(outputs.data, 1)
# compare predictions to true label
correct = np.squeeze(predicted.eq(labels.data.view_as(predicted)))
# calculate test accuracy for *each* object class
# we get the scalar value of correct items for a class, by calling `correct[i].item()`
for i in range(batch_size):
label = labels.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
print('Test Loss: {:.6f}\n'.format(test_loss.numpy()[0]))
for i in range(10):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
Test Loss: 0.348536
Test Accuracy of T-shirt/top: 79% (790/1000)
Test Accuracy of Trouser: 97% (977/1000)
Test Accuracy of Pullover: 86% (860/1000)
Test Accuracy of Dress: 81% (818/1000)
Test Accuracy of Coat: 81% (810/1000)
Test Accuracy of Sandal: 97% (973/1000)
Test Accuracy of Shirt: 62% (626/1000)
Test Accuracy of Sneaker: 93% (934/1000)
Test Accuracy of Bag: 97% (971/1000)
Test Accuracy of Ankle boot: 97% (972/1000)
Test Accuracy (Overall): 87% (8731/10000)
###Markdown
Visualize sample test results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
# get predictions
preds = np.squeeze(net(images).data.max(1, keepdim=True)[1].numpy())
images = images.numpy()
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(batch_size):
    ax = fig.add_subplot(2, batch_size//2, idx+1, xticks=[], yticks=[])  # integer division keeps the grid spec valid
ax.imshow(np.squeeze(images[idx]), cmap='gray')
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx] else "red"))
###Output
_____no_output_____
###Markdown
Question: What are some weaknesses of your model? (And how might you improve these in future iterations?) **Answer**: Double-click and write your answer here. Save Your Best ModelOnce you've decided on a network architecture and are satisfied with the test accuracy of your model after training, it's time to save this so that you can refer back to this model, and use it at a later date for comparison or for another classification task!
###Code
## TODO: change the model_name to something unique for any new model
## you wish to save, this will save it in the saved_models directory
model_dir = 'saved_models/'
model_name = 'better_model.pt'
# after training, save your model parameters in the dir 'saved_models'
# when you're ready, un-comment the line below
torch.save(net.state_dict(), model_dir+model_name)
###Output
_____no_output_____
###Markdown
Load a Trained, Saved ModelTo instantiate a trained model, you'll first instantiate a new `Net()` and then initialize it with a saved dictionary of parameters (from the save step above).
###Code
# instantiate your Net
# this refers to your Net class defined above
net = Net()
# load the net parameters by name
# uncomment and write the name of a saved model
net.load_state_dict(torch.load('saved_models/better_model.pt'))
print(net)
# Once you've loaded a specific model in, you can then
# use it or analyze it further!
# This will be especially useful for feature visualization
###Output
Net(
(conv1): Conv2d(1, 10, kernel_size=(3, 3), stride=(1, 1))
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(conv2): Conv2d(10, 20, kernel_size=(3, 3), stride=(1, 1))
(fc1): Linear(in_features=500, out_features=50, bias=True)
(fc1_drop): Dropout(p=0.4)
(fc2): Linear(in_features=50, out_features=10, bias=True)
)
|
python-data-structures/interview-fb/clone-graph.ipynb | ###Markdown
Clone GraphGiven a reference of a node in a connected undirected graph.Return a deep copy (clone) of the graph. Each node in the graph contains a value (int) and a list (List[Node]) of its neighbors.```Javaclass Node { public int val; public List neighbors;}```Test case format:For simplicity, each node's value is the same as the node's index (1-indexed). For example, the first node with val == 1, the second node with val == 2, and so on. The graph is represented in the test case using an adjacency list.An adjacency list is a collection of unordered lists used to represent a finite graph. Each list describes the set of neighbors of a node in the graph.The given node will always be the first node with val = 1. You must return the copy of the given node as a reference to the cloned graph.
###Code
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        # avoid a mutable default argument: a shared list would leak neighbors between nodes
        self.neighbors = neighbors if neighbors is not None else []
def clone(node, visited=None):
    # use None instead of a mutable default dict, which would be shared across top-level calls
    if visited is None:
        visited = {}
    if not node:
        return None
elif node.val in visited:
return visited[node.val]
else:
cloned = Node(node.val, [])
visited[node.val] = cloned
for n in node.neighbors:
c = clone(n, visited)
if c:
cloned.neighbors.append(c)
return cloned
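# A quick illustrative check (our own example, mirroring the 4-node adjacency list
# [[2,4],[1,3],[2,4],[1,3]] from the problem statement):
n1, n2, n3, n4 = Node(1), Node(2), Node(3), Node(4)
n1.neighbors = [n2, n4]
n2.neighbors = [n1, n3]
n3.neighbors = [n2, n4]
n4.neighbors = [n1, n3]
copy1 = clone(n1)
print(copy1 is not n1)                              # True: a new node object
print(copy1.val, [n.val for n in copy1.neighbors])  # 1 [2, 4]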
###Output
_____no_output_____ |
notebooks/Day14_Running_job.ipynb | ###Markdown
Running Databricks job named Day14_Job This is a simple notebook where we test job input parameters
###Code
dbutils.widgets.dropdown("select_number", "1", [str(x) for x in range(1, 10)])
dbutils.widgets.get("select_number")
num_Select = dbutils.widgets.get('select_number')
for i in range(1, int(num_Select)):
print(i, end=',')
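# When this notebook runs as Day14_Job, the widget above doubles as a job parameter.
# As an illustration only (the exact payload depends on your Jobs API version), a
# run-now request could override it with: {"notebook_params": {"select_number": "7"}}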
###Output
_____no_output_____ |
Off Center Ring Simulation.ipynb | ###Markdown
Simulate binaural hearing when the stimulus is rotated around a ring of speakers.
###Code
#%%
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from matplotlib.collections import PatchCollection
###Output
_____no_output_____
###Markdown
First, some code to render a mouse's head and a ring of speakers
###Code
# MEASURE SOURCE ANGLE RELATIVE TO NOSE. POSITIVE IS CLOCKWISE WHEN LOOKING DOWN
def render_ring(num_speakers=16, radius=0.5, speaker_radius=0.025, ax=None):
if not ax:
fig, ax = plt.subplots(1,1)
ax.set_xlim(-2*radius,2*radius)
ax.set_ylim(-2*radius,2*radius)
ax.set_aspect('equal')
SpeakerAngles = np.linspace(0, np.pi*2, num=num_speakers, endpoint=False)
for n in range(num_speakers):
center = (np.sin(SpeakerAngles[n])*radius,
np.cos(SpeakerAngles[n])*radius )
ax.add_patch(mpatches.Circle(center,speaker_radius))
return ax
def render_mouse(interaural_distance=0.0086, ax=None,
ear_diameter=0.008, scale=3, xpos=0):
if not ax:
fig, ax = plt.subplots(1,1)
        # no ring radius is known inside this function, so leave axis limits to the caller
ax.set_aspect('equal')
x = interaural_distance/2*scale
eye_y=1*interaural_distance*scale
eye_r=interaural_distance/4*scale
ear_height=1.25*ear_diameter*scale
ear_width=1.25*ear_diameter/2*scale
head_ytop=2.5*interaural_distance*scale
head_ybot=0.75*interaural_distance*scale
ax.add_patch(mpatches.Circle((x+xpos,eye_y),eye_r,color='black'))
ax.add_patch(mpatches.Circle((-x+xpos,eye_y),eye_r,color='black'))
ax.add_patch(mpatches.Polygon([[-x*1.5+xpos,-head_ybot],[0+xpos,head_ytop],[x*1.5+xpos,-head_ybot]],
color='gray'))
ax.add_patch(mpatches.Ellipse((x+xpos,0),ear_width,ear_height,30,color='slateblue'))
ax.add_patch(mpatches.Ellipse((-x+xpos,0),ear_width,ear_height,-30,color='slateblue'))
return ax
# Test it out
fig = plt.figure(figsize=(9,3))
gs = fig.add_gridspec(1,3, hspace=0.3, wspace=0.3)
ax1 = fig.add_subplot(gs[0:,0])
ax2 = fig.add_subplot(gs[0,1])
ax3 = fig.add_subplot(gs[0,2])
NumSpeakers = 16
Radius = 0.5
HeadPos = 0
ax1 = render_mouse(ax=ax1, xpos=HeadPos)
ax1 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax1)
ax1.autoscale()
ax1.set_aspect('equal')
ax1.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n'
'{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius))
HeadPos = 0.6
ax2 = render_mouse(ax=ax2, xpos=HeadPos)
ax2 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax2)
ax2.autoscale()
ax2.set_aspect('equal')
ax2.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n'
'{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius))
NumSpeakers = 64
HeadPos = 0.4
ax3 = render_mouse(ax=ax3, xpos=HeadPos)
ax3 = render_ring(num_speakers=NumSpeakers, radius=Radius, ax=ax3, speaker_radius=0.025/2)
ax3.autoscale()
ax3.set_aspect('equal')
ax3.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n'
'{} Speakers at {} m'.format(HeadPos, NumSpeakers, Radius))
###Output
_____no_output_____
###Markdown
Virtual sourcesVirtual sources are sounds that come from a location between two speakers.We will synthesize their sound using the two nearest speakers.Next, let's find the nearest two speakers for an arbitrary virtual angle.Subsequent code requires that the first speaker returned be the closest,so be extra careful!**Important note:** Our reference for virtual source angles is "north" (in front of the nose), going clockwise.
###Code
def virtual_source_angle(virtual_angle, num_speakers, radius):
speaker_angles = np.linspace(0, 2*np.pi, num_speakers, endpoint=False)
speaker_angles = np.expand_dims(speaker_angles,axis=0)
virtual_angle = np.mod(virtual_angle, 2*np.pi) # make angles between 0 and 2pi
dist_mat = np.minimum(np.abs(virtual_angle - speaker_angles),
np.abs(2*np.pi - (virtual_angle - speaker_angles)) )
ClosestSourceAngle = np.take(speaker_angles, np.argmin(dist_mat, axis=1))
if (virtual_angle.ndim > 0):
ClosestSourceAngle = np.expand_dims(ClosestSourceAngle,1)
dist_mat2 = dist_mat
for rowidx, row in enumerate(dist_mat):
dist_mat2[rowidx, row.argmin()] = np.inf
NextClosestSourceAngle = np.take(speaker_angles, np.argmin(dist_mat2, axis=1))
if (virtual_angle.ndim > 0):
NextClosestSourceAngle = np.expand_dims(NextClosestSourceAngle,1)
return [ClosestSourceAngle, NextClosestSourceAngle]
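# Quick illustrative sanity check: for a virtual source at 0.1 rad with 16 speakers,
# the closest speaker should be the one at 0 rad, the next closest at 2*pi/16 ~ 0.39 rad.
closest, next_closest = virtual_source_angle(np.array([[0.1]]), 16, 0.5)
print(closest, next_closest)  # expect approximately [[0.]] [[0.3927]]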
###Output
_____no_output_____
###Markdown
Synthetic speaker amplitudes for virtual sourcesNext, we want to generate the amplitude that each of our two closest speakers should be driven at. In free space, the sound that reaches the ear falls off in amplitude as 1/distance to the ear. We'll use the center of the head as our target point and will match the synthesized amplitude to what the virtual source would have created at that point. (We don't try to match phase!) More subtly, because the two speakers are at different distances from the ear, their signals can combine destructively, so ideally we'd scale them to make sure that their sum has the right amplitude. To do this, we will use 3 methods. - "Closest": drive only the closest speaker, with amplitude scaled by distance. - "Naive": scale the amplitude of the sound based on the relative angle between the virtual source and the two speakers. - "Phasor": take into account the relative phase of the signals.
###Code
def cos_triangle_rule(x1, x2, phi):
# Really useful formula for finding the length of a vector difference when you
# know the lengths of the two arguments and the angle between them.
# In other words, return ||A - B||, where phi is the angle between them,
# and ||A|| = x1 and ||B|| = x2.
return np.sqrt(x1**2 + x2**2 - 2*x1*x2*np.cos(phi))
def virtual_source_amplitude(virtual_angle, radius, source1_angle, source2_angle,
freq, head_x=0, speed_of_sound=343.0, synthesis='phasor'):
# We assume that the virtual source is on a ring with the given radius, and
# that the head is positioned at the vertical center, and horizontally displaced
# by head_x. The amplitude is equally split between the two given sources
# based on their relative distance from the virtual source.
# NOTE: source angles are measured clockwise from vertical axis, but head is displaced
# in the horizontal axis
# Finally, we assume the virtual amplitude is 1. Everything scales with that
virtual_angle = np.mod(virtual_angle, 2*np.pi) # make angles between 0 and 2pi
source1_angle = np.mod(source1_angle, 2*np.pi)
source2_angle = np.mod(source2_angle, 2*np.pi)
virtual_distance = cos_triangle_rule(radius, head_x, np.pi/2 - virtual_angle)
source1_distance = cos_triangle_rule(radius, head_x, np.pi/2 - source1_angle)
source2_distance = cos_triangle_rule(radius, head_x, np.pi/2 - source2_angle)
dAngle = np.minimum(np.abs(source2_angle-source1_angle), # Angle between sources. Note that this assumes
np.abs(2*np.pi - (source2_angle-source1_angle))) # they are adjacent!
angularDistance = np.minimum(np.abs(virtual_angle - source1_angle), # Take into account wrapping
np.abs(2*np.pi - (virtual_angle - source1_angle)))
source1_relative_angle = 1 - angularDistance/dAngle # Want amplitude to be large (close to 1)
source2_relative_angle = 1 - source1_relative_angle # for closest source, source1
if synthesis=='phasor':
omega = 2*np.pi*freq
k = omega / speed_of_sound
phi_v = -k*virtual_distance # This is the phase angle of the sound from virtual source.
phi_1 = -k*source1_distance
phi_2 = -k*source2_distance
# If we wanted to match phase and amplitude of the virtual signal, it's easy
# Define a triangle by the three sounds in phase space.
# Use the law of sines find out proper amplitudes.
scale = (1/virtual_distance) / np.sin(phi_2 - phi_1)
source1_amp = source1_distance * np.sin(phi_2 - phi_v) * scale
source2_amp = source2_distance * np.sin(phi_v - phi_1) * scale
# The downside of this is that at higher frequencies, we'll end up cycling
# our amplitudes really fast to match the phase. Instead, let's just
# smoothly interpolate our phase from one speaker to the next
# TBD.
# Fix the situation where the speakers are at the same difference
source1_amp = np.where(phi_2 != phi_1, source1_amp,
source1_relative_angle)
source2_amp = np.where(phi_2 != phi_1, source2_amp,
source2_relative_angle)
elif synthesis == 'naive':
source1_amp = source1_relative_angle * source1_distance / virtual_distance
source2_amp = source2_relative_angle * source2_distance / virtual_distance
elif synthesis == 'closest':
source1_amp = np.ones(source1_relative_angle.shape) * source1_distance / virtual_distance
source2_amp = 0 * source1_amp # get dimensions right
return [source1_amp, source2_amp]
###Output
_____no_output_____
###Markdown
Calculate actual distances to earsThe speakers are being scaled to aim at the center of the head. The ears are off center of this. The cosine rule for triangles helps us again here to find out the distance to each ear if we know the distance and angle to a source and the interaural distance
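In symbols: if the source sits at distance $r$ from the head center, the ear is offset by $x$ from that center, and $\phi$ is the angle between those two directions, then the source-to-ear distance is $\sqrt{r^2 + x^2 - 2\,r\,x\cos\phi}$, which is exactly what `cos_triangle_rule` computes.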
###Code
# remember - 0 degrees is straight ahead
def left_ear_distance(angle, radius, head_x=0, interaural_distance=0.0086):
if (head_x - interaural_distance/2) < 0:
return cos_triangle_rule(radius, head_x - interaural_distance/2, np.pi/2 + angle)
else:
return cos_triangle_rule(radius, head_x - interaural_distance/2, np.pi/2 - angle)
def right_ear_distance(angle, radius, head_x=0, interaural_distance=0.0086):
if (head_x + interaural_distance/2) < 0:
return cos_triangle_rule(radius, head_x + interaural_distance/2, np.pi/2 + angle)
else:
return cos_triangle_rule(radius, head_x + interaural_distance/2, np.pi/2 - angle)
###Output
_____no_output_____
###Markdown
To simulate, we need the wave equationThis describes the signal which is detected at some distance from a spherically propagating perfect sound source. The amplitude at unit distance (the units here are m, based on the definition of the speed of sound) is `amp`. We evaluate everything at time t = 0, because all the phases simply scale with time.
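In symbols, matching the `wave_eq` function below, the signal from a point source of strength `amp` at distance $r$ is $\frac{\mathrm{amp}}{r}\, e^{i(\omega t - k r)}$, with $\omega = 2\pi f$ and $k = \omega / c_{\text{sound}}$.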
###Code
def wave_eq(amp, r, freq, t=0,speed_of_sound = 343.0):
omega = 2*np.pi*freq
k = omega / speed_of_sound
return (amp/r) * np.exp(1j * (omega*t - k*r))
###Output
_____no_output_____
###Markdown
Make a helper function to run the simulation
###Code
def ring_audio_synthesis(num_speakers, radius, head_pos, frequencies, synthesis='phasor'):
VirtualAngles = np.expand_dims(np.linspace(-np.pi, np.pi, 8*num_speakers, endpoint=False),axis=1)
VirtualDistance = radius # This is a fixed assumption!!!
VirtualAmp = 1 # This is a fixed assumption
LR_Distances = [left_ear_distance(VirtualAngles, radius, head_pos),
right_ear_distance(VirtualAngles, radius, head_pos)]
DesiredSounds = [np.abs(wave_eq(VirtualAmp, LR_Distances[0], frequencies)),
np.abs(wave_eq(VirtualAmp, LR_Distances[1], frequencies))]
SynthAngle = virtual_source_angle(VirtualAngles, num_speakers, radius)
SynthAmp = virtual_source_amplitude(VirtualAngles, radius,
SynthAngle[0], SynthAngle[1], frequencies, head_pos, synthesis=synthesis)
LeftDistances = [left_ear_distance(s, radius, head_pos) for s in SynthAngle]
RightDistances = [right_ear_distance(s, radius, head_pos) for s in SynthAngle]
SynthesizedSounds = [
np.abs(wave_eq(SynthAmp[0], LeftDistances[0], frequencies) + \
wave_eq(SynthAmp[1], LeftDistances[1], frequencies)),
np.abs(wave_eq(SynthAmp[0], RightDistances[0], frequencies) + \
wave_eq(SynthAmp[1], RightDistances[1], frequencies))
]
SynthesizedSoundAtCenter = np.abs(wave_eq(SynthAmp[0], LR_Distances[0], frequencies) + \
wave_eq(SynthAmp[1], LR_Distances[1], frequencies))
return VirtualAngles, LR_Distances, DesiredSounds, SynthAngle, SynthAmp, \
[LeftDistances, RightDistances], SynthesizedSounds, SynthesizedSoundAtCenter
###Output
_____no_output_____
###Markdown
And another one to plot results
###Code
def plot_results(virtual_angles, frequencies, radius, head_x, num_speakers,
desired_sounds, synthesized_sounds,
synthesis='phasor',
speaker_radius=0.05/2, # 5 cm diameter speakers
mouse_scale=3,
interaural_distance=0.0086): # 8.6 mm
fig = plt.figure(figsize=(20,12))
gs = fig.add_gridspec(2,5, hspace=0.3, wspace=0.3)
ax1 = fig.add_subplot(gs[0:,0])
ax2 = fig.add_subplot(gs[0,1:3])
ax3 = fig.add_subplot(gs[1,1:3])
ax4 = fig.add_subplot(gs[0,3:])
ax5 = fig.add_subplot(gs[1,3:])
ax1 = render_mouse(interaural_distance=interaural_distance, ax=ax1,
scale=mouse_scale, xpos=head_x)
ax1 = render_ring(num_speakers=num_speakers, radius=radius, ax=ax1, speaker_radius=speaker_radius)
ax1.autoscale()
ax1.set_aspect('equal')
ax1.set_title('Interaural distance 8.6 mm\nHead centered at {} m\n'
'{} Speakers at {} m'.format(head_x, num_speakers, radius))
vmax = 1.25 * np.max(desired_sounds[1])
vmin = np.min(desired_sounds[1])
if (vmin < 0):
vmin = 1.25 * vmin
else:
vmin = 0.5 * vmin
RightEarSignal = ax2.imshow((synthesized_sounds[1]).T,
origin='lower', interpolation=None, aspect='auto',
vmin=vmin, vmax=vmax,
extent=[virtual_angles[0,0],virtual_angles[-1,0],
frequencies[0,0],frequencies[0,-1]])
ax2.set_yscale('log')
ax2.set_ylabel('Virtual Source Frequency')
RightEarSignal.cmap.set_over('red')
    RightEarSignal.cmap.set_under('pink')
fig.colorbar(RightEarSignal, ax=ax2, extend='max')
ax2.set_title('Phasor Synthesis Sound at Right Ear')
DesiredRightEar = ax3.imshow(desired_sounds[1].T,
origin='lower', interpolation=None, aspect='auto',
vmin=vmin, vmax=vmax,
extent=[virtual_angles[0,0],virtual_angles[-1,0],
frequencies[0,0],frequencies[0,-1]])
ax3.set_xlabel('Virtual Source Angle')
ax3.set_yscale('log')
ax3.set_ylabel('Virtual Source Frequency')
fig.colorbar(DesiredRightEar, ax=ax3, extend='both')
ax3.set_title('Desired Sound at Right Ear')
# Next, plot ILDs
if head_x > radius:
first = 0
second = 1
label = '(Left - Right)'
else:
first = 1
second = 0
label = '(Right - Left)'
vmax = 1.25 * np.max(desired_sounds[first] - desired_sounds[second])
vmin = 1.25 * np.min(desired_sounds[first] - desired_sounds[second])
ILD = ax4.imshow((synthesized_sounds[first] - synthesized_sounds[second]).T,
origin='lower', interpolation=None, aspect='auto',
vmin=vmin, vmax=vmax,
extent=[virtual_angles[0,0],virtual_angles[-1,0],
frequencies[0,0],frequencies[0,-1]])
ax4.set_yscale('log')
ax4.set_ylabel('Virtual Source Frequency')
ILD.cmap.set_over('red')
ILD.cmap.set_under('pink')
fig.colorbar(ILD, ax=ax4, extend='both')
ax4.set_title('Phasor Synthesis Interaural Level Difference {}'.format(label))
ExpectedILD = ax5.imshow(desired_sounds[first].T - desired_sounds[second].T,
origin='lower', interpolation=None, aspect='auto',
vmin=vmin, vmax=vmax,
extent=[virtual_angles[0,0],virtual_angles[-1,0],
frequencies[0,0],frequencies[0,-1]])
ax5.set_yscale('log')
ax5.set_xlabel('Virtual Source Angle')
ax5.set_ylabel('Virtual Source Frequency')
fig.colorbar(ExpectedILD, ax=ax5, extend='both')
ax5.set_title('Expected Interaural Level Difference {}'.format(label))
return (fig, ax1, ax2, ax3, ax4, ax5)
###Output
_____no_output_____
###Markdown
Let's do it!Simulate the situation where the mouse is offset from the center of the ring, but still inside.
###Code
#%%
# Info about ring
NumSpeakers = 16
Radius = 0.5 # 50 cm radius
# Mouse position
HeadPos = 0.4
Frequencies = np.expand_dims(np.linspace(500,20000,num=200),axis=0)
VirtualAngles, LR_Distances, DesiredSounds, SynthAngle, SynthAmp, LR2, SynthesizedSounds, SynthesizedSoundsAtCenter = \
ring_audio_synthesis(NumSpeakers, Radius, HeadPos, Frequencies)
#%%
#%%
plot_results(VirtualAngles, Frequencies, Radius, HeadPos, NumSpeakers,
DesiredSounds, SynthesizedSounds);
#%%
VirtualAngles, LR_Distances, DesiredSounds, SynthAngle, SynthAmp, LR2, SynthesizedSounds, SynthesizedSoundsAtCenter = \
ring_audio_synthesis(NumSpeakers, Radius, HeadPos, Frequencies, synthesis='naive')
plot_results(VirtualAngles, Frequencies, Radius, HeadPos, NumSpeakers,
DesiredSounds, SynthesizedSounds);
#%%
VirtualAngles, LR_Distances, DesiredSounds, SynthAngle, SynthAmp, LR2, SynthesizedSounds, SynthesizedSoundsAtCenter = \
ring_audio_synthesis(NumSpeakers, Radius, HeadPos, Frequencies, synthesis='closest')
plot_results(VirtualAngles, Frequencies, Radius, HeadPos, NumSpeakers,
DesiredSounds, SynthesizedSounds);
# %%
###Output
_____no_output_____ |
unsig_gen.ipynb | ###Markdown
Press "SHIFT + ENTER" to run a cell, all cells should be run in order, from top to bottom First we import some libraries
###Code
import numpy as np
from PIL import Image
###Output
_____no_output_____
###Markdown
Then we set some parameters:1. Change 4096 to the size you want your unsig to be (width and height will be the same)
###Code
#pixel dimension
dim = 4096
###Output
_____no_output_____
###Markdown
Input your unsigs:1. index (number)2. number of properties3. the values of these properties (be careful, words like 'Green' and 'Normal' need to be enclosed in quotes)
###Code
#replace the content inside {} with your unsig's properties
unsig = {'index': 0,
'num_props': 0,
'properties': {
'multipliers' : [],
'colors' : [],
'distributions' : [],
'rotations' : []}}
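# For illustration only, a filled-in unsig might look like the (made-up) example below;
# copy the real values listed for your own unsig instead:
# unsig = {'index': 1234,
#          'num_props': 2,
#          'properties': {
#              'multipliers' : [1, 2],
#              'colors' : ['Red', 'Green'],
#              'distributions' : ['CDF', 'Normal'],
#              'rotations' : [90, 180]}}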
###Output
_____no_output_____
###Markdown
Run the cell below!
###Code
def norm(x , mean , std):
p = (np.pi*std) * np.exp(-0.5*((x-mean)/std)**2)
return p
def scale_make2d(s):
scaled = np.interp(s, (s.min(), s.max()), (0, u_range))
two_d = np.tile(scaled, (dim, 1))
return two_d
def gen_nft(nft):
idx = unsig['index']
props = unsig['properties']
n = np.zeros((dim, dim, 3)).astype(np.uint32)
for i in range(unsig['num_props']):
mult = props['multipliers'][i]
col = props['colors'][i]
dist = props['distributions'][i]
rot = props['rotations'][i]
c = channels[col]
buffer = mult * np.rot90(dists[dist], k=(rot / 90))
n[ :, :, c ] = n[ :, :, c ] + buffer
n = np.interp(n, (0, u_range), (0, 255)).astype(np.uint8)
return (idx, n)
if __name__ == '__main__':
#setup
x = list(range(dim))
u_range = 4294967293
mean = np.mean(x)
std = dim/6
#probability and cumulative distribution
p_1d = np.array(norm(x, mean, std)).astype(np.uint32)
c_1d = np.cumsum(p_1d)
#2d arrays
p_2d = scale_make2d(p_1d)
c_2d = scale_make2d(c_1d)
#dicts for retrieving values
dists = {'Normal': p_2d, 'CDF': c_2d}
channels = {'Red': 0, 'Green': 1, 'Blue': 2}
#make your nft
i, nft = gen_nft(unsig)
img = Image.fromarray(nft)
img.save(f'unsig_{i:05d}.png')
###Output
_____no_output_____ |
server/transformers/notebooks/Test Models.ipynb | ###Markdown
Tokenizing smiles
###Code
import re
def tokenize_smiles(smiles: str) -> str:
"""
Tokenize a SMILES molecule or reaction
"""
pattern = r"(\[[^\]]+]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|\/|:|~|@|\?|>>?|\*|\$|\%[0-9]{2}|\%\([0-9]{3}\)|[0-9])"
regex = re.compile(pattern)
tokens = [token for token in regex.findall(smiles)]
if smiles != ''.join(tokens):
        raise ValueError(f'Tokenization does not reproduce the input SMILES: {smiles}')
# return ' '.join(tokens)
return tokens
rxn = 'CO.COC(=O)C(C)(C)c1ccc(C(=O)CCCN2CCC(C(O)(c3ccccc3)c3ccccc3)CC2)cc1.Cl.O[Na]>>CC(C)(C(=O)O)c1ccc(C(=O)CCCN2CCC(C(O)(c3ccccc3)c3ccccc3)CC2)cc1'
tokenized_rxn = tokenize_smiles(rxn)
print(tokenized_rxn)
tokenized_rxn
###Output
_____no_output_____ |
tarefas/02_Passo_2.ipynb | ###Markdown
> Text provided under the Creative Commons Attribution license, CC-BY. All code is available under the FSF-approved BSD-3 license.> (c) Original by Lorena A. Barba, Gilbert F. Forsyth in 2017, translated by Felipe N. Schuch in 2020.> [@LorenaABarba](https://twitter.com/LorenaABarba) - [@fschuch](https://twitter.com/fschuch) 12 steps to Navier-Stokes======*** This notebook continues the presentation of the **12 steps to Navier-Stokes**, a practical module taught as an interactive course on Computational Fluid Dynamics (CFD) by [Prof. Lorena Barba](http://lorenabarba.com). Adapted and translated to Portuguese by [Felipe N. Schuch](https://fschuch.github.io/). You should complete [Step 1](./01_Passo_1.ipynb) before continuing, having written your own Python script or notebook, and having experimented with varying the discretization parameters to observe what happens. Step 2: Nonlinear Convection-----*** Here we will implement the nonlinear convection equation, using the same methods employed in the previous step. The one-dimensional convection equation is written as:$$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = 0$$Note that instead of a constant $c$ multiplying the second term, it is now multiplied by the solution $u$ itself. Therefore, the second term of the equation is said to be *nonlinear*. We will use the same discretization as in Step 1: forward difference for the time derivative and backward difference for the spatial derivative.The discretized equation is given by:$$\frac{u_i^{n+1}-u_i^n}{\Delta t} + u_i^n \frac{u_i^n-u_{i-1}^n}{\Delta x} = 0$$Isolating the term with the only unknown, $u_i^{n+1}$, gives:$$u_i^{n+1} = u_i^n - u_i^n \frac{\Delta t}{\Delta x} (u_i^n - u_{i-1}^n)$$ As before, the Python code starts by importing the necessary libraries. Then, we declare some parameters that determine the discretization in space and time (you should experiment with changing these values and see what happens). Finally, we define the initial condition (IC) by initializing the solution array with $u_0 = 2$ where $0.5 \leq x \leq 1$, and $u = 1$ everywhere else in the interval $0 \le x \le 2$ (i.e., a hat function).
###Code
import numpy #here we load numpy
from matplotlib import pyplot #here we load matplotlib
%matplotlib inline
x = numpy.linspace(0., 2., num = 41)
nt = 20 #number of timesteps we want to calculate
dt = .025 #amount of time each timestep covers
nx = x.size
dx = x[1] - x[0]
u = numpy.ones_like(x) #as before, initialize u with every value equal to 1
u[(0.5<=x) & (x<=1)] = 2 #then set u = 2 between 0.5 and 1, our IC
un = numpy.ones_like(u) #initialize the temporary array, to hold the solution at each timestep
###Output
_____no_output_____
###Markdown
The code in the block below is *incomplete*. We have only copied the lines from [Step 1](./01_Passo_1.ipynb) that performed the time advancement. Can you edit the code so that this time it computes nonlinear convection? (A commented-out suggestion is included inside the loop below.)
###Code
for n in range(nt): #time loop
    un = u.copy() ##copy the values of u into un
    for i in range(1, nx): ##spatial loop
        ###The line below was copied from Step 1. Edit it according to the new equation.
        ###Then uncomment it and run the cell to compute the result of Step 2.
###u[i] = un[i] - c * dt / dx * (un[i] - un[i-1])
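        ###One possible completion (our suggestion, left commented so you can try it yourself first):
        ###u[i] = un[i] - un[i] * dt / dx * (un[i] - un[i-1])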
pyplot.plot(x, u) ##Plot the results
###Output
_____no_output_____
###Markdown
What differences did you observe in the evolution of the hat function compared with the linear case? What happens if you modify the numerical parameters and run the code again? After reflecting on these questions, you can proceed to the supplementary file [Convergence and the CFL condition](./03_Condicao_CFL.ipynb), or go straight to [Step 3](./04_Passo_3.ipynb). Supplementary Material-----*** For a step-by-step explanation of the discretization of the linear convection equation with finite differences (and also the following steps, up to Step 4), watch **Video Lesson 4** by Prof. Barba on YouTube.
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('y2WaK7_iMRI')
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____ |
AIC/decisionTree.ipynb | ###Markdown
(A and B) or (not A and C)
###Code
import numpy as np
# A B C T/F
dataSet = [[0, 0, 0, 0],
[0, 0, 1, 1],
[0, 1, 0, 0],
[0, 1, 1, 1],
[1, 0, 0, 0],
[1, 0, 1, 0],
[1, 1, 0, 1],
[1, 1, 1, 1]]
nd_dataSet = np.array(dataSet)
class node():
    """Binary decision-tree node; `data` is presumably the attribute index tested at the node."""
    def __init__(self, data=0, left_child=None, right_child=None):
        self.data = data                  # attribute index stored at this node (0=A, 1=B, 2=C)
        self.left_child = left_child      # subtree for one outcome of the test
        self.right_child = right_child    # subtree for the other outcome
root = node(0)   # root node, created with attribute index 0 (A)
split = node()   # empty node, presumably reserved for a later split
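# --- Added illustration (not part of the original notebook): a minimal, hypothetical sketch of
# --- how one might score candidate splits for this data set with information gain. The helper
# --- names `entropy` and `information_gain` are our own assumptions, not existing project code.
def entropy(labels):
    # Shannon entropy (in bits) of a 1-D array of 0/1 class labels
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def information_gain(data, attribute):
    # Entropy reduction from splitting the rows on column `attribute` (0=A, 1=B, 2=C)
    gain = entropy(data[:, -1])
    for value in (0, 1):
        subset = data[data[:, attribute] == value]
        if len(subset):
            gain -= len(subset) / len(data) * entropy(subset[:, -1])
    return gain

# For (A and B) or (not A and C), splitting on A alone gives zero gain (both halves stay 50/50),
# while B and C each give about 0.19 bits -- a known weakness of purely greedy split selection.
print([information_gain(nd_dataSet, col) for col in range(3)])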
###Output
_____no_output_____ |
doc/nb/FluxSpec.ipynb | ###Markdown
Fluxing with PYPIT [v2]
###Code
%matplotlib inline
# import
from importlib import reload
import os
from matplotlib import pyplot as plt
import glob
import numpy as np
from astropy.table import Table
from pypeit import fluxspec
from pypeit.spectrographs.util import load_spectrograph
###Output
_____no_output_____
###Markdown
For the standard User (Running the script)

Generate the sensitivity function from an extracted standard star. Here is an example fluxing file (see the fluxing docs for details):

    User-defined fluxing parameters
    [rdx]
      spectrograph = vlt_fors2
    [fluxcalib]
      balm_mask_wid = 12.
      std_file = spec1d_STD_vlt_fors2_2018Dec04T004939.578.fits
      sensfunc = bpm16274_fors2.fits

Here is the call, and the sensitivity function is written to bpm16274_fors2.fits:

    pypit_flux_spec fluxing_filename --plot

Apply it to all spectra in a spec1d science file. Add a flux block, and you can comment out the std_file parameter to avoid remaking the sensitivity function:

    User-defined fluxing parameters
    [rdx]
      spectrograph = vlt_fors2
    [fluxcalib]
      balm_mask_wid = 12.
      std_file = spec1d_STD_vlt_fors2_2018Dec04T004939.578.fits
      sensfunc = bpm16274_fors2.fits

    flux read
      spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T020241.687.fits FRB181112_fors2_1.fits
      spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T021815.356.fits FRB181112_fors2_2.fits
      spec1d_UnknownFRBHostY_vlt_fors2_2018Dec05T023349.816.fits FRB181112_fors2_3.fits
    flux end

The new files contain the fluxed spectra (and the original, unfluxed data too).

    pypit_flux_spec fluxing_filename

Multi-detector (DEIMOS):

    pypit_flux_spec sensfunc --std_file=spec1d_G191B2B_DEIMOS_2017Sep14T152432.fits --instr=keck_deimos --sensfunc_file=sens.yaml --multi_det=3,7

----

For Developers (primarily)

To play along from here, you need the Development suite *reduced*, and the $PYPEIT_DEV environmental variable pointed at it.
###Code
os.getenv('PYPEIT_DEV')
###Output
_____no_output_____
###Markdown
Instrument and parameters
###Code
spectrograph = load_spectrograph('shane_kast_blue')
par = spectrograph.default_pypeit_par()
###Output
_____no_output_____
###Markdown
Instantiate
###Code
FxSpec = fluxspec.FluxSpec(spectrograph, par['fluxcalib'])
###Output
[INFO] :: flux.py 899 load_extinction_data() - Using mthamextinct.dat for extinction corrections.
###Markdown
Sensitivity function
###Code
std_file = os.getenv('PYPEIT_DEV')+'Cooked/Science/spec1d_Feige66_KASTb_2015May20T041246.960.fits'
sci_file = os.getenv('PYPEIT_DEV')+'Cooked/Science/spec1d_J1217p3905_KASTb_2015May20T045733.560.fits'
###Output
_____no_output_____
###Markdown
Load
###Code
FxSpec.load_objs(std_file, std=True)
###Output
[INFO] :: fluxspec.py 118 load_objs() - Loaded 1 spectra from the spec1d standard star file: /home/xavier/local/Python/PypeIt-development-suite/Cooked/Science/spec1d_Feige66_KASTb_2015May20T041246.960.fits
###Markdown
Find the standard (from the brightest spectrum)
###Code
_ = FxSpec.find_standard()
###Output
[INFO] :: flux.py 980 find_standard() - Putative standard star <Table length=1>
shape [2] slit_spat_pos ... idx
int64 object ... str30
------------ ------------- ... ------------------------------
2048 .. 1024 None ... SPAT0169-SLIT0000-DET01-SCI023 has a median boxcar count of 16123.35030125018
###Markdown
Sensitivity Function
###Code
sensfunc = FxSpec.generate_sensfunc()
sensfunc
###Output
[INFO] :: flux.py 183 generate_sensfunc() - Applying extinction correction
[INFO] :: flux.py 899 load_extinction_data() - Using mthamextinct.dat for extinction corrections.
[INFO] :: flux.py 194 generate_sensfunc() - Get standard model
[INFO] :: flux.py 806 find_standard_file() - Using standard star FEIGE66
[INFO] :: flux.py 935 load_standard_file() - Loading standard star file: /home/xavier/local/Python/PypeIt/pypeit/data/standards/calspec/feige66_002.fits.gz
[INFO] :: flux.py 936 load_standard_file() - Fluxes are flambda, normalized to 1e-17
[INFO] :: flux.py 256 generate_sensfunc() - Set nresln to 20.0
[WORK IN ]::
[PROGRESS]:: flux.py 262 generate_sensfunc() - Should pull resolution from arc line analysis
[WORK IN ]::
[PROGRESS]:: flux.py 263 generate_sensfunc() - At the moment the resolution is taken as the PixelScale
[WORK IN ]::
[PROGRESS]:: flux.py 264 generate_sensfunc() - This needs to be changed!
[INFO] :: flux.py 275 generate_sensfunc() - Masking spectral regions:
[INFO] :: flux.py 279 generate_sensfunc() - Masking bad pixels
[INFO] :: flux.py 284 generate_sensfunc() - Masking edges
[INFO] :: flux.py 289 generate_sensfunc() - Masking Balmer
[INFO] :: flux.py 297 generate_sensfunc() - Masking Paschen
[INFO] :: flux.py 307 generate_sensfunc() - Masking Brackett
[INFO] :: flux.py 317 generate_sensfunc() - Masking Pfund
[INFO] :: flux.py 327 generate_sensfunc() - Masking Below the atmospheric cutoff
[INFO] :: flux.py 334 generate_sensfunc() - Masking Telluric
[INFO] :: flux.py 536 bspline_magfit() - Initialize bspline for flux calibration
[INFO] :: flux.py 551 bspline_magfit() - Bspline fit: step 1
[INFO] :: flux.py 587 bspline_magfit() - Bspline fit: step 2
[INFO] :: flux.py 637 bspline_magfit() - Difference between fits is 5.61567e-05
[WORK IN ]::
[PROGRESS]:: flux.py 660 bspline_magfit() - Add QA for sensitivity function
###Markdown
Plot
###Code
FxSpec.show_sensfunc()
FxSpec.steps
###Output
_____no_output_____
###Markdown
Write
###Code
_ = FxSpec.save_sens_dict(FxSpec.sens_dict, outfile='sensfunc.fits')
###Output
[INFO] :: fluxspec.py 417 save_sens_dict() - Wrote sensfunc to MasterFrame: sensfunc.fits
###Markdown
Flux science
###Code
FxSpec.flux_science(sci_file)
FxSpec.sci_specobjs
FxSpec.sci_specobjs[0].optimal
###Output
_____no_output_____
###Markdown
Plot
###Code
plt.clf()
ax = plt.gca()
ax.plot(FxSpec.sci_specobjs[0].optimal['WAVE'], FxSpec.sci_specobjs[0].optimal['FLAM'])
ax.plot(FxSpec.sci_specobjs[0].optimal['WAVE'], FxSpec.sci_specobjs[0].optimal['FLAM_SIG'])
ax.set_ylim(-2, 30.)
#
ax.set_xlabel('Wavelength')
ax.set_ylabel('Flux (cgs 1e-17)')
plt.show()
###Output
_____no_output_____
###Markdown
Write science frames
###Code
FxSpec.write_science('tmp.fits')
FxSpec.steps
###Output
_____no_output_____
###Markdown
Instantiate and Load a sensitivity function
###Code
par['fluxcalib']['sensfunc'] = 'sensfunc.fits'
FxSpec2 = fluxspec.FluxSpec(spectrograph, par['fluxcalib'])
FxSpec2.show_sensfunc()
###Output
_____no_output_____
###Markdown
Clean up
###Code
os.remove('sensfunc.fits')
os.remove('tmp.fits')
###Output
_____no_output_____ |